Final Life Update: Hear/Here (Colours of the Wind)

Video:

Short Essay:

This project revolves around the idea of the gaps between noise and sound. We created a portable device that samples the overall surrounding sound and, in response, lights an LED in a corresponding colour. The colour is based on a calculation where red is volume, green is pitch (regardless of octave), and blue is pitch (exact octave). Red and blue were scaled to fit a range of 0 to 255; for green, we created five ranges, skewed so that the range for a humanly possible pitch is larger than that for a pitch the human voice cannot reach. The code uses an array to store the data for each pixel until all nine pixels have been used up, after which the oldest pixel's information is overwritten.
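The colour mapping and the nine-pixel history described above can be sketched roughly as follows. This is an illustrative reconstruction, not our actual Arduino sketch: the band edges, the 4000 Hz ceiling, and the function names are all placeholder assumptions.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>

// Illustrative constants; the real sketch used its own ranges.
constexpr float ADC_MAX  = 1023.0f;  // 10-bit Arduino ADC ceiling
constexpr float FREQ_MAX = 4000.0f;  // top of the frequency band we map

struct Rgb { uint8_t r, g, b; };

// Red channel: overall volume scaled linearly into 0-255.
uint8_t volumeToRed(float amplitude) {
    float v = std::min(std::max(amplitude, 0.0f), ADC_MAX) / ADC_MAX;
    return static_cast<uint8_t>(v * 255.0f + 0.5f);
}

// Blue channel: exact pitch (octave included) scaled into 0-255.
uint8_t pitchToBlue(float freqHz) {
    float v = std::min(std::max(freqHz, 0.0f), FREQ_MAX) / FREQ_MAX;
    return static_cast<uint8_t>(v * 255.0f + 0.5f);
}

// Green channel: five frequency bands, skewed so the humanly singable
// range gets more of the 0-255 span than the ranges a voice cannot
// reach. The band edges here are guesses, not our real values.
uint8_t pitchToGreen(float freqHz) {
    if (freqHz <   80.0f) return  20;  // below human voice
    if (freqHz <  300.0f) return  90;  // low voice
    if (freqHz <  600.0f) return 160;  // mid voice
    if (freqHz < 1100.0f) return 230;  // high voice
    return 255;                        // above human voice
}

// Nine-pixel history: each new sample overwrites the oldest slot,
// so the strip always shows the most recent nine readings.
class PixelHistory {
public:
    void push(Rgb colour) {
        pixels_[next_] = colour;
        next_ = (next_ + 1) % pixels_.size();
    }
    const std::array<Rgb, 9>& pixels() const { return pixels_; }
private:
    std::array<Rgb, 9> pixels_{};
    std::size_t next_ = 0;
};
```

In an actual sketch, each loop iteration would push one new colour and then rewrite all nine NeoPixels from the history array.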

References for the code:

  • Origin of basic-ass code (which is no longer here): https://www.teachmemicro.com/arduino-microphone/
  • Origin of getAmplitude code: https://learn.adafruit.com/adafruit-microphone-amplifier-breakout/measuring-sound-levels
  • Origin of getFrequensea code: https://www.norwegiancreations.com/2017/08/what-is-fft-and-how-can-you-implement-it-on-an-arduino/
  • Origin of NeoPixel code: https://learn.adafruit.com/adafruit-neopixel-uberguide/arduino-library-use

 

Our work takes reference from works like ‘Pulse Index’ by Rafael Lozano-Hemmer. It is similar in the sense that it records the viewers’ input, in their case thumbprints, in our case sound, and records it on a visual plane to show the changes over time.

Rafael Lozano-Hemmer, "Pulse Index", 2010. "Time Lapse", Site Santa Fe, New Mexico, 2012. Photo by: Kate Russell

Characteristics of Interface:

Classification of interface:

Our project falls under ‘User is one of Many’ and ‘User is valued’. Our project values the unity of environmental sound: your sound is captured in this collective and you can’t discern what is your sound and what is the environment’s, hence the ‘user is one of many’ part. However, ‘user is valued’ is also present, in that the user is the anomaly that creates the most change when they interact with the device directly.

Characteristics of interface:

Our project falls under ‘Monitored and reflected experience’ as well as ‘Intuitive selection/results relationship’. For the former, the device collects the environmental sound and shows a colour representation, so all interactions are copied and shown directly based on the sounds that you make. The latter is true because when viewers see the light changing with sound, they automatically try to interact with it to see how far it will change, and end up trying to find the gaps between the sounds they make as they see the different coloured representations of each instance of sound.

Structure of Interface:

Based on the flow chart, our project complies with everything except the last item, ‘Linear Selection’. The first idea, open structure, is seen in the way we made our device portable. The second, ‘Feedback provided’, comes in the form of LED lights lit according to the sound of the environment and of the people within it interacting with the device. The third, ‘Constant elements providing continuity’, holds since the setup is designed to reflect the sound at every (how many seconds). Finally, selections are recorded in nine LED pixels, showing 8 seconds of the recently past environmental sound.

(Liz finally answered the question yay)

Who did what:

The coding for this project was done by En Cui, and the physical fabrication of the device was put together by me (Elizabeth) (but you know, in the end Liz kind of screwed up a lot of the soldering and stuff and needed En Cui and Lei’s help to put them together. Thank youuu)

Process:

From the initial stage of manually making LEDs light up by pressing buttons whenever someone made a sound, we created a circuit where the LED would light up in a certain colour according to the environmental sound.

After that, we used this circuit as a reference and moved from a single RGB LED to a strip of LED wire. That way we could create a setup where the colour of a certain period of time could be recorded and compared to the previous period.

yay the LED lights up.

Measuring the length of wire for the glove.

This is where problems started surfacing in the soldering, so there was a redo (soldering-wise and circuit-wise, sob).

Testing out the Circuit.

Yay it’s done.

After Review:

Everyone reacted to the work as we hoped they would, despite there being only two participants. They crowded around and tried to put in their own input by making noises around the two. We did get comments that the feedback is not fast enough to show the exact inflection of one’s voice while speaking, and hence is not very obvious. We forgot to mention this during the review, but the delay is also constrained by technical limitations: if we reduce the delay, we need more LEDs to represent the same amount of time, and the Arduino’s memory overloads at 13 LEDs. Additionally, even at delay(0), the Arduino still cannot run fast enough to get the desired result:

As a result of the delay, our theme in this work might not be very obvious for viewers to pick up on. The eventual solution may thus be to use something with more processing power.

There were also comments about how participants were working very hard to satisfy the device. Some said it seemed like a prop for band or choir performances, or a tool for training yourself to hit an exact pitch.

Summary Reflection:

EC needs to actually know when something is not possible, rather than maybe possible.

Liz should not be so innovative. Liz is just not good with technology.

We should have thought out the final form better.

Extended Concluding thoughts (if you want to read about our woes):

En Cui’s Reflection:

Concept-wise, the challenge was that the core concept and form were not well-aligned. While we talked out several issues, there’s still the challenge of the interstice being unclear. But I think, in the end, the clarity of the message depends on how you interact with the wearable. For example, the distinction is much clearer if you experience the wearable in multiple contexts rather than just one.

Regarding the code and circuit, it was actually mostly okay. While things didn’t always work, the only solution needed was to observe the problem, deduce what could be possible reasons for its occurrence, then test out my hypotheses one by one. Examples include mathematical errors and faulty wiring. I also did soldering part 2 for the microphone, and honestly the solution was just learning to recognise patterns of problems and solutions based on past mistakes, such as the solder not sticking to the iron (wiping more), or getting fingers burnt (plasters).

I also realise after a full day of reflection that I’m just incompetent at doing group work efficiently. Leaving me in charge is a generally bad idea.

Elizabeth’s Reflection:

For the most part I felt very challenged by the project, especially since it was the first time we were putting together components to make a circuit. For the physical fabrication portion, it was the first time I used a soldering iron, and my circuit looked very ugly after that, and I don’t really think I improved in that aspect very much even after multiple attempts 🙁 When using the hot glue gun to insulate the exposed solder, I think I made the circuit worse, because there was already a build-up of solder.

Also, I apparently did not solder the circuit the right way. You can only solder your wires to one side of the LED strip, because the LEDs are fickle and like to have their electrical charge flowing in one direction. Also, do not solder and hot-glue your circuit until you are 100% sure it works; it saves you a lot of heartache and time (thank you Lei and En Cui for dealing with my screw-ups D;).

I also made a few mistakes by piercing the LED strip’s digital pins by accident, thinking I could sew it down that way. Thinking about it now, I should have known better than to try piercing any part of the components.

Speaking of computers, I feel very attacked by my own, since I think it has issues running the code we shared over Google Docs. It gave me a heart attack that I might have short-circuited the only RGB LED in the starter pack, and even after I confirmed that I did not, the circuit still refused to light. I think there is something wrong with my computer DX. I will either leave the computer testing to En Cui or find a school computer for this (pick the right computer, as not all of them have the Arduino IDE).

If we had a bit more time and I had a bit more skill in soldering, we would have liked to add more LED lights to reflect the change in sound.

 

Principles of New Media

In Lev Manovich’s ‘The Language of New Media’, he identifies new media as falling under five categories: numerical representation, the language used to generate outputs in machinery; modularity, meaning a new media work can be separated into various components; automation, the removal of human intentionality from the work; variability, meaning the work can have a range of outputs/outcomes/reactions; and finally transcoding, the ability to turn ‘physical information’ like sound, text, etc., into a set of code that can be read by the computer.

For En Cui’s and my project, we make use of four of these five categories, namely numerical representation, modularity, variability, and transcoding.

When we consider the idea of numerical representation: so long as we are creating things on a digital platform, the code written for our project to function is a form of numerical representation. Numerical representation is the digital language machines use to communicate, hence it is present in all projects that make use of technology.

Subsequently, we have the idea of modularity. Modularity shows up in various layers of this work. It can be seen in the components that make up the body of the project, like the wires, LEDs, and microphone.

It could also refer to our project’s ability to capture data at different points in time in the form of differently coloured LED lights, and, within that collection, create another collective image of the environment over time.

A bit like Rafael Lozano-Hemmer’s work ‘Pulse Index’, we are looking at how individual components make up a bigger collective, and how that collective changes over time, changing the outcome of the work each time.

Moving on to the idea of variability: our work revolves around the idea of different sounds merging, overlapping, and melding to the point that you cannot figure out if it is your own voice or an influence of the environment. Hence the variable in our project comes in the form of the sound input and a corresponding unique coloured light output. Each output depends on the pitch and volume of the sound recorded by the microphone. Since different people speak with different pitches and volumes, each LED at each point in time will be different.

Finally, there is the idea of transcoding, where physical information is translated into data that can be read by the computer. In this case, the sound picked up by the microphone is converted into code that is later translated into the RGB and brightness values reflected on the LED.

 

Final Project, life update

From last week’s flow chart, En Cui and I worked on creating a mock-up circuit which follows a segment of the flow chart.

Mock Up model 1

We started with a basic microphone set up.

From here we tested to see if we could get an input reading of the surrounding sounds through the serial monitor, and the changes when we spoke into the microphone.

Problems:

  • The Serial Monitor showed that the input was either a ‘high’ at 1022/1023, or a ‘low’ at 0.

Conclusion at this segment:

  • We thought our microphone was iffy

Nonetheless we continued; as the microphone was still able to detect sound, we decided it would be good enough for now and we would solve this issue later.
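One reason single readings can swing between 0 and 1023 is that a raw analogRead() catches the audio waveform at arbitrary points. The Adafruit sound-level guide we referenced instead measures volume as the peak-to-peak swing over a short sampling window. A rough sketch of that idea in plain C++, where readMic stands in for analogRead on the microphone pin:

```cpp
#include <algorithm>
#include <functional>

// Sample the microphone repeatedly and return the peak-to-peak
// amplitude (max - min) over the window, instead of a single reading.
int peakToPeak(const std::function<int()>& readMic, int samples) {
    int lo = 1023;  // 10-bit ADC maximum
    int hi = 0;
    for (int i = 0; i < samples; ++i) {
        int v = readMic();
        lo = std::min(lo, v);
        hi = std::max(hi, v);
    }
    return hi - lo;
}
```

On the Arduino itself, the loop would run for a fixed number of milliseconds rather than a fixed sample count, but the max-minus-min idea is the same.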

Mock Up model 2

Subsequently, we added onto the first model to include the LED output.

From here the code was expanded to control an RGB LED and to read the frequency and volume of the surrounding environment. Initially, the mapping was done in a fairly random way: for a three-digit frequency reading, the digit in the hundreds place set the percentage of red, the tens digit the percentage of blue, and the ones digit the percentage of green, which together made up the colour the light bulb would show.
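That first-draft digit mapping could be sketched like this. It is a reconstruction: the exact percentage scaling we used may have differed.

```cpp
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// First-draft mapping (later replaced): read the last three digits of
// the measured frequency and treat each digit as a fraction of full
// brightness -- hundreds -> red, tens -> blue, ones -> green.
Rgb digitsToColour(int freqHz) {
    int hundreds = (freqHz / 100) % 10;
    int tens     = (freqHz / 10)  % 10;
    int ones     =  freqHz        % 10;
    // Each digit 0-9 maps to 0-90% of the 0-255 channel range.
    return Rgb{
        static_cast<uint8_t>(hundreds * 255 / 10),  // red
        static_cast<uint8_t>(ones     * 255 / 10),  // green
        static_cast<uint8_t>(tens     * 255 / 10)   // blue
    };
}
```

Because neighbouring frequencies (say 499 Hz and 500 Hz) produce wildly different digits, the colours jump around, which is exactly the randomness we observed.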

Watch Video at:

Problems:

  • The colour of the light bulb was coming out a bit too randomly.

So from there we attempted to group ranges of frequencies and match them to colours. Subsequently, we made it such that the volume is matched to the brightness of the LED.
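The revised approach can be sketched as below; the band edges and the colours assigned to them are placeholders, not the exact ones we settled on.

```cpp
#include <algorithm>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Revised mapping: frequencies are bucketed into colour ranges, and
// the measured volume scales the overall brightness of that colour.
Rgb frequencyToColour(float freqHz, float volume, float volumeMax) {
    Rgb base;
    if      (freqHz < 250.0f) base = {255,   0,   0};  // low  -> red
    else if (freqHz < 500.0f) base = {  0, 255,   0};  // mid  -> green
    else                      base = {  0,   0, 255};  // high -> blue

    // Volume 0..volumeMax becomes a 0..1 brightness factor.
    float scale = std::min(std::max(volume, 0.0f), volumeMax) / volumeMax;
    return Rgb{
        static_cast<uint8_t>(base.r * scale),
        static_cast<uint8_t>(base.g * scale),
        static_cast<uint8_t>(base.b * scale)
    };
}
```

Unlike the digit mapping, nearby frequencies now land in the same bucket, so the colour stays stable while you hold a note and only the brightness follows your volume.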

Body Storming

 

After our short discussion, En Cui and I decided to combine the ideas of the talking door and the concept of gaps between multiple conversations to create an interactive hat. The idea of the hat was to create both a visual output (different coloured LEDs for different pitch and/or volume) and an audio output whenever someone spoke. It would let out a different sound depending on the pitch and volume it sensed from the surroundings, meaning that it considers the environmental sound as a whole.

Watch the video here:

 

What did you learn from the process?

From this process we have learnt that our concept is hard for the audience to connect with, so we should make it more direct. The idea of using the object is fairly simple, as it is what it is, which means the found object is strong enough to have the audience interact with it without much instruction; the reactions can be learnt along the way. We should also make sure that whatever the reaction is stays within the participant’s view, as the lights are currently on top of the hat. Also, our project is very context-driven, as it relies on a crowded, noisy area to link to our concept of the gaps in between conversations.

What surprised you while going through the process?

Shout out to our tester, who was especially cooperative :3c. There was a lot of confusion trying to link the project to its concept; it is not directly understood as an individual or group concept, but I guess that is what happens when you only have one tester and your project responds to them whether they are in a group or not. The idea of the hat was portability, but since it reacts to its environment whether or not it is worn, we got comments that we might want to change the shape it takes. We are also worried about how to convince the audience that they can grab the object freely.

How can you apply what you have discovered to the designing of your installation?

So we might consider changing the appearance of the artwork. We might tweak the message a bit, and maybe have multiple small things instead of one big thing, to make it less intimidating. Also, Lei said we can use p5.js to do speech-to-text, so we are kind of bombarded with endless possibilities now lol.

Project Idea Development — Be Gentle with Door-chan, She is VERY Sensitive.

En Cui and I had discussed two projects, and while she is expanding on the idea of the billboard, I’m expanding on the idea of the talking door.

At first I had talked to Celine about a few ideas, and this one took on a concept very similar to hers, revolving around the idea of the door. However, this interstice revolves around the space between your hand and the door, and how you touch something.

The idea was mainly to have the door react to your touch according to how you open it.

There will be a sensor attached to the handle that senses vibrations along the door handle and lets out a response accordingly. The idea is to have the door say rather accusatory things, like “Who gave you the right to touch me!?”, mostly to give the people who touch the door a shock, make them let go, and hopefully not enter the room at all :3

 

I Light Critique

I managed to catch a few of the interactive works when I went to I Light. The first was ‘Facey Thing’ by Uji Studios, a sort of satirical take on the selfie culture amongst the masses in this day and age.

Fig 1. Facey Thing by Uji Studios, 2019, I Light, Singapore.
pictures screenshot from video taken by: En Cui

When you first encounter it, ‘Facey Thing’ is a huge, bright screen twice the height of an ordinary human.

Diagram 1, mock up of Facey Thing

The setup is simple, consisting of a screen hooked up to a single camera that captures the passers-by moving in front of the work. The code that runs the work is set to capture the faces of the people standing in front of it.

Fig 2. Facey Thing by Uji Studios, 2019, I Light, Singapore.
pictures screenshot from video taken by: En Cui

Fig 3. Facey Thing by Uji Studios, 2019, I Light, Singapore.
pictures screenshot from video taken by: En Cui

When your face is recognised, it is boxed up as seen in Fig 2 above, and this later evolves into Fig 3. In Fig 3, the faces of passers-by are blown up and dragged upward, almost as though painting the canvas with their faces. In this way, the images on the screen are temporarily changed by the people who interact with it; otherwise it is no more than an ordinary closed-circuit video recording. It warps the initial intention of selfies, from portraying oneself as ‘glam’ to being very ‘unglam’ instead, by warping the passers-by’s faces.

Fig 4. Facey Thing by Uji Studios, 2019, I Light, Singapore.
pictures screenshot from video taken by: En Cui

The people who decided to interact with it waved their hands or moved about oddly to try to get their faces recognised by the system.

Subsequently, I caught “Shades of Temporality” by SWEATSHOPPE (Blake Shaw and Bruno Levy).

Fig 5. Shades of Temporality by SweatShoppe, 2019, I Light Singapore
Text: 你好 Lei <3
Written by: En Cui, Christine and Elizabeth

This work has two elements to it, the first being the visualiser projecting the ‘painted image’ on to the wall, and the second being the paint rollers.

Diagram 2, Mock up of the painting brushes used in Shades of Temporarity

Diagram 3, Mock up of the set up of Shades of Temporarity

When the button in Diagram 2 is pressed, the paint brush head turns green. This is then sensed by the camera, and the visualiser sends an output of light corresponding to the area the paint brush touched, projecting a loop of graphical illustrations of Singapore.

In this case the audience is encouraged to make temporary graffiti designs on the wall, hence creating art. The audience is given the ability to write whatever they want, expressing themselves in any way they see fit.

A Critique on Interactivity

Crystal Universe by teamLab, Future World, ArtScience Museum, Singapore
picture taken from: https://faithjoyhope.blogspot.com/2016/03/new-what-you-can-expect-artscience.html

 

The first artwork I chose for this critique is ‘Crystal Universe’ by teamLab. What I find intriguing about this work is the way entering the space is like entering a different realm. It is an aesthetic work, if you want to classify it, but what makes it so effective is that, because it is so ‘Instagram-worthy’, it draws crowds of people easily. The work is described as a ‘galaxy of hanging LEDs’ by vice.com. True enough, the LEDs light up like a galaxy of stars in the dark room, and one is allowed to travel through the path created, experiencing the LEDs from different perspectives. The interactive element comes in two forms: the first is immersing oneself in the atmosphere created by the twinkling LEDs, and the second is actually changing the LEDs’ programming through your smartphone. By doing so, viewers control the way the LEDs shine, creating a reflective space that suits them. As such, the viewer becomes both the audience and the artist who shapes their experience of the space.

However, on a more realistic note, the experience of creating your own experience is not always pleasant, considering the number of people who visited the space and the lack of any limit on how many people walk through the work at any time, which hinders the viewer’s experience of it.


The Treachery of Sanctuary 2012, by Chris Milk, Fort Mason Center, San Francisco
Picture taken from:http://milk.co/treachery

The second piece I chose was Chris Milk’s “The Treachery of Sanctuary”. The piece consists of three monolithic screens shone with light. When a person moves in front of the screens, their ‘shadow’ is cast over them, triggering the art to react. What I found interesting about this piece is how it took the image of the person and warped/disintegrated it.

The work is a representation of its own creative process, which I find hilarious.

The first panel represents “the genesis of the idea, when you finally have a breakthrough.”

The participants then notice a flock of birds above them; as they reach out, their body begins to break down and birds begin to emerge.

This represents the viewers becoming the idea behind the work. Later, in the second panel, the flock rains down on their ‘shadow’, pecking at what is left of it; this, the artist says, represents the hardships faced during the project. Finally, the last panel is the triumph, where you become a bird yourself.

The entire work is coded to act in a certain way, but it cannot do so without the audience interacting with it. Hence, in this situation, the audience is still, in a sense, the artist shaping what is viewed on the screen.

For this project, the audience still plays a huge role in the artwork, interacting with it and creating the visual imagery. However, as always, the artist has created the code, which limits the reactions of the work; they have set the narrative and setting of the artwork, and everything else is free to be influenced and changed by the audience.

Questions:

  • Interactivity and aesthetics: how do you attract people to your work without prompting?
  • Viewer experience: will a large audience affect the experience of the work compared to a small audience?

 

References:

Crystal Universe:

  • https://www.vice.com/en_au/article/yp555v/enter-a-real-life-matrix-in-teamlabs-crystal-universe
  • https://www.teamlab.art/w/crystaluniverse/

The Treachery of Sanctuary:

  • https://www.vice.com/en_uk/article/53wbw8/chris-milks-the-treachery-of-sanctuary-unveiled-at-londons-digital-revolution
  • https://www.wired.co.uk/article/chris-milk-installation
  • http://milk.co/treachery