The Lapse project is a collection of five projects, all revolving around the theme of lapses in time, in memory and in reality. While I was not able to physically be in the presence of these artworks, reading up on them allowed me to gather an impression of the work.
24 Hour Lapse
Of the five projects, the last two, Panorama Lapse and Journal Lapse, are not interactive, hence I will not be discussing them.
VR Lapse is a virtual reality simulation that brings the audience to Singapore’s oldest colonial building, only for them to find that it has been digitally erased.
Does Out of Sight, Out of Mind in Singapore lead to Nevermind?
In an interview quoted on popspoken.com, inter-mission shares the artists’ concern with how significant art-related artefacts in Singapore are slowly being washed away by the ever-changing landscape.
With that message in mind, I wonder if the project works on someone with no context of the place at all. It is true that these are cultural landmarks; however, I am left drawing a blank when someone tells me ‘Art House’. They were trying to trigger this idea of misplacement, the ‘I am pretty sure there is something missing here’ sort of thought, but if there was no recollection of the place in the first place, can this idea still be drawn out? Does that hinder the experience of the work?
Since we are discussing the interactivity of a work, I feel that the interactivity level here is quite low. Being placed in an empty, unchanging landscape, with nothing to influence, is passive, like watching a movie or a slideshow.
The second project adds to the atmosphere of the first. Particle Lapse is more interactive in the sense that it uses the movement of the viewer to create a feedback sound/atmosphere for the audience traversing the virtual world, giving them the extra dimension of sound that is meant to confuse them. In this case there is a contributive element that the audience plays in the artwork.
Finally there is 24 Hour Lapse, an installation where visitors from the past 24 hours are projected alongside the present visitors on a CRT monitor. It is quite interesting how they play with the idea of people from two different times sharing the same space, even if it is only a screen. However, in terms of interactivity it is again quite passive, as the present visitors cannot influence the already pre-recorded video.
Overall, the Lapse project is not a very interactive project. It works more as a stage, which is the artists’ mind, and the audience remains the audience, not participants on the stage. As such we only view their feelings and experience of the idea of a lapse in memory, which is not always universal and hence a bit hard to relate to.
A fitness tracker is a device that you wear on your wrist. It keeps track of multiple things like the number of steps you have taken, your heart rate, your location, etc. (depending on the model).
In this case we will be looking at the Mi Band 3, which has the ability to track:
exercise in terms of steps taken, distance moved, calories burned
sleep, whether deep or light and total sleep
heart rate, automatic or manual
The device itself has a long-lasting battery and a quick charge function, which is very convenient as it is a device to be used on a day-to-day basis. It is also affordable, unlike other brands on the market which can cost up to a few hundred dollars.
It also functions as a smartwatch.
However, some reviews say that the product cannot compare to other brands, like Fitbit, in terms of competitive analysis and sharing. Being a Chinese brand, it is also not compatible with the iPhone (sorry iPhone users, no Xiaomi for you). Additionally, most of the band’s functions, like the separate ‘my exercise’ function, are built into the app rather than the band itself, which makes it a bit more tedious in the sense that you have to bring your phone with you when you exercise (ah, first world problems).
Considering this, if Xiaomi wants to be more competitive in the market, the company should first make their products compatible with all phones (uh, easier said than done, huh). Having more apps built into the band would also make things more convenient for the lazy consumer, or maybe a slightly more specialised tracking system that could differentiate when the wearer is doing one activity or another.
‘Rain Room’ by Random International was featured at the Museum of Modern Art, New York in 2013. It makes use of a 100 square metre room full of falling water simulating rain, and 3D tracking cameras that capture the motion of the visitors passing through the room. Doing so stops the ‘rain’ falling above that particular area, creating a pathway for them to cross.
The work replicates the sound and the smell of rain, creating a sort of white noise that encompasses you along with the rain. It reflects the relationship between human and nature, which is increasingly regulated by technology. How contrary it is that people would stand and simply contemplate in this artificial downpour versus fleeing an actual one.
What I find particularly interesting about this project is the artists’ statement about creating the room. They said that they had created it with no preconceived idea of what kind of reaction they would draw from the audience experiencing their work. In a sense, that unpredictability of reaction itself becomes part of the artwork.
“DON’T RUN!” exclaimed a Museum of Modern Art press rep, as a young woman who had entered the field of falling water in Rain Room, 2012, began to take flight and was promptly soaked.
As quoted from artforum.com’s review of the work, a guest had outrun the motion sensors, temporarily glitching the system, and got drenched when the work did not stop the rain for her. It is amazing how this ‘carefully choreographed downpour’ still had the ability to instill in some the same instincts humans have in the face of an actual downpour, while bringing out a contemplative peace in others.
This project revolves around the idea of the gaps between noise/sound, hence we created a portable device that samples the overall surrounding sound and, in response, lights an LED in a corresponding colour. The colour is based on a calculation where ‘red’ is volume, ‘green’ is pitch (regardless of octave) and ‘blue’ is pitch (exact octave). Red and blue were scaled to fit a range of 0 to 255; for green, however, five ranges were created, skewed so that the range for a humanly possible pitch is larger than that for a pitch outside the human range. The code makes use of an array to store data for each pixel; once all nine pixels have been used up, the oldest pixel’s information is overwritten by the following sample.
References for the code:
Origin of basic-ass code (which is no longer here): https://www.teachmemicro.com/arduino-microphone/
Origin of getAmplitude code: https://learn.adafruit.com/adafruit-microphone-amplifier-breakout/measuring-sound-levels
Origin of getFrequensea code: https://www.norwegiancreations.com/2017/08/what-is-fft-and-how-can-you-implement-it-on-an-arduino/
Origin of NeoPixel code: https://learn.adafruit.com/adafruit-neopixel-uberguide/arduino-library-use
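To make the mapping above concrete, here is a minimal sketch of the idea in plain C++. The function names, band boundaries and scaling factors are my illustrative assumptions, not the actual project code (which lived on the Arduino and used the libraries linked above); only the overall scheme (red from volume, green from skewed pitch bands, blue from exact pitch, nine-pixel overwrite) comes from the write-up.

```cpp
#include <assert.h>
#include <stdint.h>

// Illustrative sketch of the colour mapping described above.
// Names, ranges and band edges are assumptions, not the project's real code.

const int NUM_PIXELS = 9;
uint8_t pixelR[NUM_PIXELS], pixelG[NUM_PIXELS], pixelB[NUM_PIXELS];
int writeIndex = 0;  // wraps around so the oldest pixel is overwritten

// Volume (0..1023 from a 10-bit ADC) scaled into the red channel's 0..255.
uint8_t volumeToRed(int amplitude) {
    return (uint8_t)((long)amplitude * 255 / 1023);
}

// Pitch regardless of octave: five skewed bands, so the human vocal range
// gets finer colour resolution than frequencies outside it (assumed bands).
uint8_t pitchToGreen(float freqHz) {
    if (freqHz < 80)   return 0;                                            // below voice
    if (freqHz < 300)  return (uint8_t)((freqHz - 80) / 220 * 100);         // low voice
    if (freqHz < 1000) return (uint8_t)(100 + (freqHz - 300) / 700 * 100);  // high voice
    if (freqHz < 4000) return (uint8_t)(200 + (freqHz - 1000) / 3000 * 40); // above voice
    return 255;
}

// Exact pitch scaled linearly into blue (assuming a 0..4 kHz span).
uint8_t pitchToBlue(float freqHz) {
    if (freqHz >= 4000.0f) return 255;
    return (uint8_t)(freqHz / 4000.0f * 255.0f);
}

// Store one sample's colour; after nine pixels, overwrite the oldest one.
void recordSample(int amplitude, float freqHz) {
    pixelR[writeIndex] = volumeToRed(amplitude);
    pixelG[writeIndex] = pitchToGreen(freqHz);
    pixelB[writeIndex] = pitchToBlue(freqHz);
    writeIndex = (writeIndex + 1) % NUM_PIXELS;
}
```

The modulo on `writeIndex` is what gives the nine-pixel “rolling window” behaviour described above: the tenth sample silently replaces the first.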
Our work takes reference from works like ‘Pulse Index’ by Rafael Lozano-Hemmer. It is similar in the sense that it records the viewers’ input, in their case thumbprints, in our case sound, and records it on a visual plane to show the changes over time.
Characteristics of Interface:
Classification of interface:
Our project falls under ‘User is one of many’ and ‘User is valued’. Our project values the unity of the environmental sound: your sound is captured in this collective and you can’t discern what is your sound and what is the environment, hence the ‘user is one of many’ part. However, ‘user is valued’ is also present, in that the user is the anomaly that creates the most change when they interact with the device directly.
Characteristics of interface:
Our project falls under ‘Monitored and reflected experience’ as well as ‘Intuitive selection/results relationship’. For the former, the device collects the environmental sound and shows a colour representation, hence all interactions are copied and shown directly based on the sounds that you make. The latter is true because, when viewers see the light changing with sound, they automatically try to interact with it to see the extent to which it will change, hence finding the gaps between the sounds they make as they watch the different coloured representations of each instance of sound.
Structure of Interface:
Based on the flow chart, our project complies with everything except the last one, ‘Linear Selection’. The first idea of open structure is seen in the way we made our device portable. The second idea of ‘Feedback provided’ comes in the form of LED lights lit in accordance with the sound of the environment/people within the environment interacting with it. The third idea is ‘Constant elements providing continuity’, since the set-up is designed to reflect the sound at every (how many seconds). Finally, selections are recorded in nine LED pixels, showing 8 seconds of the recently passed environmental sounds.
(Liz finally answered the question yay)
Who did what:
The coding for this project was done by En Cui, and the physical fabrication of the device was put together by me (Elizabeth) (but you know, in the end Liz kind of screwed up a lot of the soldering and stuff and needed En Cui and Lei’s help to put them together. Thank youuu)
From the initial stage of manually making LEDs light up by pressing buttons whenever someone made a sound, we created a circuit where the LED would light up in a certain colour according to the environmental sound.
After that we used this circuit as a reference and moved from a single RGB LED to a strip of LED wire. That way we could create a set-up where the colour of a certain period of time could be recorded and compared to the previous period of time.
yay the LED lights up.
Measuring the length of wire for the glove.
This is where problems started surfacing on the soldering part, so there was a redo (soldering wise and circuit wise, sob).
Testing out the Circuit.
Yay it’s done.
Everyone reacted to the work as we hoped they would, despite there being only two participants. They crowded around and tried to put in their own input by making noises around the two. We did have comments that the feedback is not fast enough to show the exact inflection of a voice as one is speaking, hence not very obvious. We forgot to mention this during the review, but the delay is also constrained by technical limitations. If we reduce the delay, we will need more LEDs to represent the same amount of time, and the Arduino memory overloads at 13 LEDs. Additionally, even at delay(0), the Arduino still cannot function fast enough to get the desired result:
As a result of the delay, our theme in this work might not be very obvious for viewers to pick up on. The eventual solution may thus be to use something with more processing power.
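The trade-off behind this can be put in numbers. Taking the figures from our set-up (nine pixels spanning roughly eight seconds, and memory overloading beyond 13 LEDs), a quick back-of-envelope check shows why simply shortening the delay doesn’t work; the helper below is purely illustrative.

```cpp
#include <assert.h>
#include <math.h>

// Back-of-envelope check of the delay/LED trade-off.
// Figures from our set-up: 9 pixels spanning ~8 s, memory failing past 13 LEDs.
const int MAX_LEDS = 13;

// LEDs needed to keep the same time window at a given per-pixel delay.
int ledsNeeded(double windowSeconds, double delaySeconds) {
    return (int)ceil(windowSeconds / delaySeconds);
}
```

At roughly one second per pixel, eight LEDs cover the window comfortably; halving the delay to half a second already demands sixteen LEDs, past the 13-LED memory ceiling, which is why faster feedback needs a board with more memory and processing power rather than just a smaller `delay()`.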
There were also comments on how the participants were working very hard to satisfy the device. Some said it seemed like a prop for band or choir performances, or a tool for training how to hit an exact pitch.
EC needs to actually know when something is not possible, rather than maybe possible.
Liz should not be so innovative. Liz is just not good with technology.
We should have thought out the final form better.
Extended Concluding thoughts (if you want to read about our woes):
En Cui’s Reflection:
Concept-wise, the challenge was that the core concept and form were not well-aligned. While we talked out several issues, there’s still the challenge of the interstice being unclear. But I think, in the end, the clarity of the message depends on how you interact with the wearable. For example, the distinction is much clearer if you experience the wearable in multiple contexts, than just one.
Regarding the code and circuit, it was actually mostly okay. While things didn’t always work, the only solution needed was to observe the problem, deduce what could be possible reasons for its occurrence, then test out my hypotheses one by one. Examples include mathematical errors and faulty wiring. I also did soldering part 2 for the microphone, and honestly the solution was just learning to recognise patterns of problems and solutions based on past mistakes, such as the solder not sticking to the iron (wiping more), or getting fingers burnt (plasters).
I also realise after a full day of reflection that I’m just incompetent at doing group work efficiently. Leaving me in charge is a generally bad idea.
For the most part I felt very challenged by the project, especially since it was the first time we were putting together components to make a circuit. For the physical fabrication portion it was the first time I used a soldering iron, and my circuit looked very ugly after that; I don’t really think I improved in that aspect very much even after multiple attempts 🙁 When using the hot glue gun to insulate the exposed solder I think I made the circuit worse, because there was already a build-up of solder.
Also, I apparently did not solder the circuit the right way. You can only solder your wires to one side of the LED, because they are fickle and like to have their electrical charge flowing in one direction. Also, do not solder and hot glue your circuit until you are 100% sure it works; it saves you a lot of heartache and time (thank you Lei and En Cui for dealing with my screw-ups D;).
I also made a few mistakes by piercing the LED strip’s digital pins by accident, thinking I could sew it down that way. Thinking about it now, I should have known better than to try piercing any part of the components.
Speaking of computers, I feel very attacked by my own computer, since I think it has issues running the code we shared over Google Docs. It gave me a heart attack that I might have short-circuited the only RGB LED in the starter pack, and the circuit still refused to light even after I confirmed that I had not. I think there is something wrong with my computer DX. I will either leave the testing to En Cui or find a school computer for this (pick the right computer, as not all of them have the Arduino IDE).
If we had a bit more time and I had a bit more skill in soldering, we would have liked to add more LED lights to reflect the change in sound.
During our trip to the Red Dot Museum, I chanced upon Yamaha’s YEV electric violin.
At first I was captivated by the use of wood to hint at a violin silhouette. This is very unlike the other electric violins already on the market, which either copy the silhouette of an acoustic violin or are completely unrecognizable as a violin altogether.
The YEV violins are also described as having ‘a design that is beautiful from every angle point’, created by the slanted curvature of the wood, and it is said that ‘the graceful curves allow players who are accustomed to playing acoustic violins to switch effortlessly to the YEV’.
At Courts I was looking at the fan section and realized that while the design of fans has evolved over the years, the old retro electric fan look has made a comeback. Most fans seem to play around with the materials a standing fan is made of, but the GreenFan by Novita has created a sleeker, more modern design by having a pad for buttons instead of switches or buttons that jut out, hence creating a smooth flat plane.
Hence I feel that on the Nodes of Influence chart, this product would score lower in emotional value, as its design is almost minimalist in nature. It should be average on the functional scale, as it does perform its purpose well, and as for the human scale it should be okay too since everyone is familiar with the symbols on the button pad in order to use it.
Subsequently, whilst walking around Courts I noticed a row of robot vacuums. I found it interesting that large vacuums are now compacted into a small disk and fully automated for consumer convenience. I was looking at the Samsung Robot Vacuum Cyclone Force as an example. I found it interesting that the design of the vacuum robot mostly takes the form of a circular disk (how do you clean corners?) and comes with a remote.
In this case the product is controlled by a remote so the human factor is accounted for. Again, there is not much emotional factor added into its aesthetic as these electronics are created for function over frivolous decorating.
In Lev Manovich’s ‘The Language of New Media’ he identifies new media as falling under five categories: Numerical Representation, which is the language used to generate outputs in machinery; Modularity, which means a new media work has various components into which it can be separated; Automation, which is the removal of human intentionality from the work; Variability, which means that the work can have a range of outputs/outcomes/reactions; and finally Transcoding, which is the ability to turn ‘physical information’ like sound, text, etc., into a set of code that can be read by the computer.
For En Cui’s and my project, we make use of four of these five categories, namely numerical representation, modularity, variability and transcoding.
When we consider the idea of numerical representation: so long as we are creating things on a digital platform, the written code that makes our project function is a form of numerical representation. Numerical representation is the digital language that machinery uses to communicate, hence it is present in all projects that make use of technology.
Subsequently, we have the idea of modularity. Modularity shows up in various layers of this work. It can be seen in the components that make up the body of the project, like the wires, LEDs and microphone.
It could also refer to our project’s ability to capture data at different points in time in the form of different coloured LED lights, and within that collection, create another collective image of the environment at different points in time.
A bit like Rafael Lozano-Hemmer’s work ‘Pulse Index’, we are looking at how the individual components make up a bigger collective, and how that collective changes over time, which changes the outcome of the work at each time.
Moving on to the idea of variability: the entire work revolves around the idea of different sounds merging, overlapping and melding to the point that you cannot figure out if it is your own voice or an influence of the environment. Hence the variable in our project comes in the form of the sound input and a corresponding unique coloured light output. Each output depends on the pitch and volume of the sound recorded by the microphone. Since different people speak at different pitches and volumes, each LED at different points in time will be different.
Finally, there is the idea of transcoding, where physical information is translated into data code that can be read by the computer. In this case it is the sound picked up by the microphone being converted into code that is later translated into the RGB and brightness values reflected on the LED.
From last week’s flow chart, En Cui and I worked on creating a mock-up circuit which follows a segment of the flow chart.
Mock Up model 1
We started with a basic microphone set up.
From here we tested to see if we can get the input reading through the serial monitor of the surrounding sounds, and the changes when we spoke into the microphone.
The Serial Monitor showed that the input was either a ‘high’ at 1022/1023, or a ‘low’ at 0.
Conclusion at this segment:
We thought our microphone was iffy
Nonetheless we continued; as the microphone was still able to detect sound, we decided it would be good enough for now and we would solve this issue later.
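In hindsight, readings pinned at 0 or 1022/1023 look like clipping, and the getAmplitude approach we later borrowed from the Adafruit guide sidesteps this by tracking the minimum and maximum reading over a short window and using their peak-to-peak difference as the volume. A rough sketch of that idea over a plain sample array (illustrative, not our actual Arduino loop):

```cpp
#include <assert.h>

// Peak-to-peak amplitude over a window of 10-bit ADC samples (0..1023),
// after the Adafruit "measuring sound levels" approach referenced earlier.
// Illustrative sketch only, not the project's real code.
int peakToPeak(const int *samples, int n) {
    int lo = 1023, hi = 0;
    for (int i = 0; i < n; i++) {
        int s = samples[i];
        if (s < lo) lo = s;   // quietest point in the window
        if (s > hi) hi = s;   // loudest point in the window
    }
    return hi - lo;  // small for a quiet room, large for a shout
}
```

The point of the window is that even a signal bouncing between extremes yields a usable number, instead of whichever instantaneous high/low value `analogRead` happened to catch.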
Mock Up model 2
Subsequently, we added onto the first model to include the LED output.
From here the code was expanded to control an RGB LED and to read the frequency and volume of the surrounding environment. Initially, the mapping was done in a fairly random way: for a three-digit frequency value, the digit in the hundreds place would be the percentage of red, the tens digit the percentage of blue, and the ones digit the percentage of green, together making up the colour that the light bulb would create.
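That first digit-based mapping can be sketched as follows (a reconstruction for illustration; the function name and the exact percent-to-255 scaling are my assumptions):

```cpp
#include <assert.h>
#include <stdint.h>

// Reconstruction of the first, fairly arbitrary mapping: take a three-digit
// frequency value and read hundreds as % red, tens as % blue, ones as % green.
// Name and scaling are assumptions for illustration.
void digitsToRGB(int freq, uint8_t *r, uint8_t *g, uint8_t *b) {
    int hundreds = (freq / 100) % 10;
    int tens     = (freq / 10) % 10;
    int ones     = freq % 10;
    // Each digit 0..9 is read as 0..90 percent of full brightness (0..255).
    *r = (uint8_t)(hundreds * 10 * 255 / 100);
    *b = (uint8_t)(tens     * 10 * 255 / 100);
    *g = (uint8_t)(ones     * 10 * 255 / 100);
}
```

This also shows why the colour looked so random: neighbouring frequencies such as 299 Hz and 300 Hz produce completely different digits, and therefore completely different colours, which motivated the switch to grouped frequency ranges described below.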
Watch Video at:
The colour of the light bulb was coming out a bit too randomly.
So from there we attempted to group a range of frequencies and match them to a colour. Subsequently we made it such that the volume is matched to the brightness of the LED.
After our short discussion, En Cui and I decided to combine the ideas of the talking door and the concept of gaps between multiple conversations to create an interactive hat. The idea of the hat was to create both a visual output (different coloured LEDs for different pitch and/or volume) and an audio output whenever someone spoke. It would let out a different sound depending on the pitch and volume it sensed from the surroundings, meaning that it considers the environmental sound as a whole.
Watch the video here:
What did you learn from the process?
From this process we have learnt that our concept is hard for the audience to connect with, so we should make it more direct. The idea of using the object is fairly simple, as it is what it is, which means the idea of a found object really is strong enough to have the audience interact with it without giving many instructions. The reactions can be learnt along the way. We should also make it such that whatever the reaction is, it stays within the view of the participant, as the lights are on top of the hat. Also, our project is very context driven, as it relies on a crowded, noisy area to link to our concept of the gaps in between conversations.
What surprised you while going through the process?
Shout out to our tester, who was especially cooperative :3c. There was a lot of confusion trying to link the project to its concept; it is not directly understood as an individual or group concept, but I guess that is what happens when you only have one tester and your project responds to them whether they are in a group or not. The idea of the hat was portability, but since it reacts to its environment whether it is worn or not, there were some comments that we might want to change the shape it takes. We are also worried about how to convince the audience that they can grab the object freely.
How can you apply what you have discovered to the designing of your installation?
So we might consider changing the appearance of the artwork. We might tweak the message a bit, and maybe have multiple small things instead of one big thing, to make it less intimidating. Also Lei said we can use P5.js to do speech to text, we are kind of bombarded with endless possibilities now lol.
I discussed two projects with En Cui, and while she is expanding on the idea of the billboard, I’m expanding on the idea of the talking door.
At first I had talked to Celine about a few ideas, and this one took on a concept very similar to hers, revolving around the idea of the door. However, this interstice revolves around the space between your hand and the door, and how you touch something.
The idea was mainly to have the door react to your touch according to how you open it.
There will be a sensor attached to the handle that senses the vibrations along the door handle and lets out a response accordingly. The idea is to have the door say rather accusatory things, like “Who gave you the right to touch me!?”, mostly to give the people who touch the door a shock, make them let go of the handle, and hopefully not enter the room at all :3