CONFESSIONS / Final Project Deliverables (Including processes)

‘Confessions’ is about inflating interstice thoughts (mental interstice) into a physical space. The experience starts with a person traversing an interstice place, for example a corridor, where people do not usually spend a lot of time, much less think deeply, as they pass through. We will have a small enclosed booth at the ‘corridor’ which they can enter to input their own random thoughts into a computer, after being prompted by a thought-evoking poem displayed on the screen. To prompt people passing by to enter our small enclosed booth, we will entice them with its aesthetics: the black coverage makes it mysterious and triggers their curiosity, and inside there are white balloons, with blankets and cushions for their comfort. The fairy lights also make the atmosphere dreamy. Participants are given a pair of headphones so they can listen to atmospheric music for the entire experience.

Participants then continue their journey out of the enclosed space, travelling through an enclosed tunnel and eventually arriving in another room with a balloon-structured screen. Here, their thoughts are projected onto these balloon screens.

The participant can choose to shoot and destroy their words if they want to, using a NERF gun to hit the balloon where the sentence sits. Successfully destroying the balloon causes the sentence or word to disappear after a set timer, which is displayed on the screen. They can also shoot thoughts that do not belong to them or were not written by them. However, they are also free to leave their thoughts on the screen, and whether those thoughts disappear afterwards is then left to the discretion of the next participant.

CONCEPT

Our interactive installation is a representation of the mental interstice: fleeting interstice thoughts that happen in fleeting interstice spaces and are usually discarded and forgotten quickly. Our project not only makes the thoughts more tangible, it makes the whole process, from thought creation to deletion, more tangible by lengthening it and making it bigger through the use of technology. The mind’s subconsciousness lies in that space between periods of concentration, and this subconsciousness is our innate self. By extracting these innate thoughts that usually lie unbothered at the back of our heads, we are essentially forcing people to think about them. The first space, the booth, induces participants to bring these thoughts to the forefront of their consciousness through poetry and comfort. The projection room is where the intersticed mind is projected into the room itself, expanded and inflated. This room represents the end of the process, where thoughts are forgotten. We make this part of the process more tangible by requiring a big action from participants to delete their thoughts: they have to shoot those sentences/words with a nerf gun.

We referenced the work of Candy Chang’s Confessions, which is almost similar: it explores public rituals for catharsis, consolation, and intimacy. “Inspired by Catholicism and Japanese Shinto shrine prayer walls, Chang invited people to write and submit a confession on a wooden plaque in the privacy of confession booths.” These confessions are then displayed in the room.

Confessions displayed in the room outside the confession booths. Image obtained from http://candychang.com/work/confessions/

The difference is that her work only involves people writing their confessions in a small booth, which serves as their secret and safe space, whereas our work involves more technical work such as coding and designing pulley systems to make the balloons pop according to the words our participants want to destroy. It also involves typing thoughts onto the computer without being able to hit ‘backspace’ on your thoughts, because that is the whole point: your thoughts are usually jumbled up, and what you type can sometimes be what you initially thought of without even realising it (your subconscious thoughts as well), hence documenting that bit of interstice.

Based on Week 9’s Characteristics of Interfaces, our interactive project falls near the “interactive” end of the scale (based on the chart below).

We have high interactivity because of the feedback loop between the computer receiving what one types and the thoughts then being shown on a big screen, so there is immediate visibility. It is basically a form of communication between the person and the ‘cyber cloud’. Words will also destroy themselves when they need to, based on the timer and on whether the participant wants them to, and that is when it gets more interactive because of the physical aspect of our work. This falls under creativity because tools like the nerf gun serve as a destructive force from the participant once they ‘pull the trigger’, seen below.

Then the pulley system attached to the balloon (seen below),

contributes to how the words can be deleted, using a timer and photoresistors. Using this combination, the words are only deleted when two seconds remain on the timer and the photoresistors read low. The pulley system allows each box covering the photoresistors to close when its respective balloon is popped. There is also a structure where the participants have to move from one place to another, so it gets them involved in the whole interstice process.

In terms of technicalities, a lot of time went into getting the Processing side sorted.

Programming aspects:

Programming-wise, there were two things we had to do:

  1. Transfer messages from 1 computer to the other
  2. Link 1 computer to the Arduino and delete messages based on light received by the photoresistor

For the first task, we used Processing’s ability to create a client and a server, and to send data through ports.

One computer will be designated as the server and tasked with sending data through two ports.


One port sends the “messages” data, which controls the messages shown on screen, and the second port sends “entervalue”, which ensures that a message is only sent once, when the Enter key is pressed.
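A minimal sketch of what this server side could look like, using Processing’s net library. The port numbers, variable names, and the bare-bones display are placeholders, since the write-up does not record the exact values we used:

import processing.net.*;

Server msgServer;     // port that sends the "messages" data
Server enterServer;   // port that sends the "entervalue" flag

String typed = "";

void setup() {
  size(800, 600);
  // Port numbers here are placeholders, not necessarily the ones used in the installation
  msgServer   = new Server(this, 5204);
  enterServer = new Server(this, 5205);
  textSize(24);
}

void draw() {
  background(0);
  text(typed, 40, height / 2);     // show the thought as it is typed
}

void keyPressed() {
  if (key == ENTER || key == RETURN) {
    msgServer.write(typed + "\n"); // send the finished thought
    enterServer.write("1\n");      // signal that Enter was pressed, exactly once
    typed = "";
  } else if (key != CODED) {
    typed += key;                  // no backspace: every keystroke stays
  }
}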


The second computer will be designated as the client. The client reads the data from the designated server computer.

The client computer then translates the data into two variables: a string variable containing the message, and an integer variable that ensures a message is only read once, when the Enter key on the server computer is pressed.

The client computer then takes the message data, puts it into three arrays with different fonts, and displays these messages on the screen.
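A corresponding sketch of the client side might look roughly like this. The server IP address and ports are placeholders, and the display here simply lists the messages down the screen rather than spreading them across the three fonts and balloon positions used in the actual sketch:

import processing.net.*;

Client msgClient;     // reads the "messages" data
Client enterClient;   // reads the "entervalue" flag

String[] messages = new String[0];

void setup() {
  size(1280, 720);
  // Server address and port numbers are placeholders for the server computer's values
  msgClient   = new Client(this, "192.168.1.10", 5204);
  enterClient = new Client(this, "192.168.1.10", 5205);
  textSize(32);
}

void draw() {
  background(0);
  // Only read a new message when the server signals that Enter was pressed
  if (enterClient.available() > 0 && msgClient.available() > 0) {
    enterClient.readString();                          // consume the flag
    String incoming = msgClient.readStringUntil('\n');
    if (incoming != null) {
      messages = (String[]) append(messages, trim(incoming));
    }
  }
  // The real sketch used three fonts and balloon positions; here the thoughts are simply listed
  for (int i = 0; i < messages.length; i++) {
    text(messages[i], 40, 60 + i * 40);
  }
}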

As for the second task, we first have to program the Arduino so that it can send data to Processing.

The data here is sent in “one line” to Processing, but we separate the values using commas so that they are easier for Processing to read.
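A minimal sketch of that Arduino side, assuming the three photoresistors sit on analog pins A0 to A2 (the pin numbers, baud rate, and delay are assumptions, not the exact values from our circuit):

// Arduino sketch: read three photoresistors and send them as one comma-separated line
void setup() {
  Serial.begin(9600);
}

void loop() {
  int light1 = analogRead(A0);   // photoresistor under box 1
  int light2 = analogRead(A1);   // photoresistor under box 2
  int light3 = analogRead(A2);   // photoresistor under box 3

  Serial.print(light1);
  Serial.print(",");
  Serial.print(light2);
  Serial.print(",");
  Serial.println(light3);        // println ends the line so Processing can split on '\n'

  delay(100);                    // keep the data rate manageable
}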

Next, we have to program the client computer so that it can read this data.

To do this, we use Processing’s Serial library.

To test whether the data can be read in Processing, we created another simple program with three circles whose sizes are determined by the amount of light read by the Arduino (seen below).

Processing then splits the data into an array, and we can assign the values in the array’s slots to variables. Here we have three variables for the three different photoresistors.
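Putting the two steps together, a sketch along these lines reads the comma-separated line, splits it into three values, and (in the test version) maps each value to a circle’s size. The serial port index and baud rate are assumptions and may differ from machine to machine:

import processing.serial.*;

Serial arduino;
int light1, light2, light3;   // one value per photoresistor

void setup() {
  size(600, 200);
  // Serial.list()[0] may not be the right port on every machine
  arduino = new Serial(this, Serial.list()[0], 9600);
  arduino.bufferUntil('\n');  // fire serialEvent once per full line
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] values = split(trim(line), ',');   // "512,300,87" -> {"512", "300", "87"}
  if (values.length == 3) {
    light1 = int(values[0]);
    light2 = int(values[1]);
    light3 = int(values[2]);
  }
}

void draw() {
  background(0);
  // Test version: each circle's diameter follows its sensor's light level
  ellipse(width / 4,     height / 2, light1 / 10, light1 / 10);
  ellipse(width / 2,     height / 2, light2 / 10, light2 / 10);
  ellipse(width * 3 / 4, height / 2, light3 / 10, light3 / 10);
}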

Lastly, the messages are deleted when there is low light. More variables are included so that the messages are only deleted once when the light is low.
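In sketch form, the delete condition looks roughly like this. The light threshold, the ten-second countdown, and the variable names are assumptions for illustration; the real sketch tracks one flag per balloon and gets lightValue from serialEvent:

int lightValue = 800;            // updated by serialEvent in the real sketch
int timerStart;
boolean deleted = false;         // guarantees the message is only removed once
String message = "a fleeting thought";

void setup() {
  size(600, 200);
  timerStart = millis();
}

void draw() {
  background(0);
  int secondsLeft = 10 - (millis() - timerStart) / 1000;
  // Delete only when the countdown is almost over AND the balloon has been popped (low light)
  if (secondsLeft <= 2 && lightValue < 100 && !deleted) {
    message = "";
    deleted = true;
  }
  text(message, 40, height / 2);
  text("timer: " + max(secondsLeft, 0), 40, height - 20);
}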

The videos below are some processes of the technical bits.

(Testing sending data from the Arduino with multiple photoresistors to Processing)

(Testing sending data from the Arduino with one photoresistor to Processing)

Physical setup aspect:

  1. The balloons and the black ‘fort’, where participants enter through a small narrow space into a comfortable space to relax and type their thoughts, and leave through another narrow space 
  2. Inside the black fort: the comfy space 

Roles Distribution:

Shah: Processing Code, Arduino Code, Setup, Deliverables

Joey: Arduino Code, Arduino Circuit, Deliverables, Setup

Tanya: Setup, Pulley system for triggering Resistor

Qistina: Setup, Deliverables, Poetry

Individual Reflections (Challenges and how you overcame them):

Shah:

Programming-wise, creating this project was not too difficult. After identifying the two main tasks the program had to do, it was simply a matter of finding the right tools or example code online to solve the problem. Creating the code to transfer data from one computer to the other was relatively easy, as Processing already has a server-client function, and the idea of using multiple ports to send different variables was pretty intuitive. Creating the code for Processing to read data from different photoresistors was a lot harder. We were stumped at first: it was easy enough to read the data from one resistor, but we couldn’t find a way to separate the data once multiple resistors were added. After googling, we found a way to separate the string of data using commas, store the values in one array first, and then tie the values in the array to different integers. What I had the most trouble with, however, was getting the code to work with the whole installation. When creating the code, I tested it on my own computer and it worked fine. However, when the code was first transferred to another computer, for some reason it became a lot more buggy and problems started to appear (for example, the video was not playing properly), and when left running for long periods of time, the code would start to bug out too. Though these problems were solved with “duct tape” fixes such as resetting whenever the code started to bug, I think we could have mitigated them if we had set up the whole installation earlier and done a proper dry run, which we were not able to do due to time constraints. All in all, though, I am happy with how much we progressed with the code and proud that we were ultimately able to work out how to create it ourselves. My main takeaway for coding is that you can’t always solve something at that moment in time. It is best to just leave it to ferment for a while and work on it later. Sometimes it takes a day and sometimes even a week, but trust me, your brain will subconsciously work on it and you will be able to solve the problem after a while.

Tanya:

It took us a while to figure out which two variables we should use to control the deletion of words that appeared on the screen. One combination we tried was ultrasonic sensors with photoresistors. However, that proved ineffective, as the ultrasonic sensors would lag and cause inaccurate readings. We eventually solved the problem by swapping the ultrasonic sensor out for a timer. Using this combination, the words could only be deleted effectively when two seconds remained on the timer and the photoresistors read low.
With the concept in place, I was able to start working with the pulley system that would allow each box covering the photoresistors to close when their respective balloons were popped. The first test with the pulley system was with one string, and everything went smoothly, but I noticed that when too many strings were strung through the same eye-pin, there was a high chance of causing friction, preventing the box from closing despite the balloons being popped. To resolve this, two more eye-pins were added, each with three strings strung through.
As for the method of popping balloons, we were unable to get a whole array of weapons to throw, but I did have a nerf gun that could serve that purpose. Due to the nerf bullets’ rubber tips, I had to modify them with thumbtacks to ensure they could pop the balloons upon impact. I tested a variety of mods, mainly to find one where the bullets weren’t too front-heavy, allowing them to be fired from a greater distance and still hit their target. Eventually, I managed to create five bullets that were lightweight and effective.
Lastly, for the setup, we initially thought of using black cloth to create the first interstice space, but on second thought that proved too costly. So we got creative and used tape and black trash bags to create the walls and roof of the first space.

Qistina:

Aside from the programming, we also had a little issue with the way we delivered our physical setup. I realised that the reason our participants (previously, during the test run) typed out things unrelated to their deep, genuine thoughts (confessions and such) was the inappropriate surrounding atmosphere. It wasn’t comfortable enough for them to sit and let their thoughts flow. Also, people were constantly videoing them, a.k.a. me (I had to, for the process videos). So I knew we needed comforting surroundings, hence the placement of soft furry blankets, cushions and fairy lights, and the soothing music. Being surrounded by soft, comfy things and slow, soothing music is known to calm people’s nerves and minds, so there you go. Once that part was settled, for them to think further and delve into their deeper thoughts, we had to let them read something that guides them to do so. Hence I came up with the poem and edited it a couple of times, because I realised the previous versions used for the test run were too long to read, too ‘deep’, and just weren’t right. Participants spent more time trying to understand what the poem meant and hence could not be bothered to think deeply about something they wanted to share. We wanted something simple and nice that also makes them feel connected enough to the cyber-cloud to share something personal. Hence we ended up sticking to the latest version of the poem:

“Though darkness may try to tear your mind apart,

May light ever-run like a river through your heart

Take grit, and with grace, let your thoughts leak

You deserve to breathe and let your soul speak”

It makes me feel happy to be working with a few mediums for this whole interstice installation to make it work. I learnt how exploring those different ways could enhance the level of immersion in our project. We truly explored different ways the ‘technicals’ and ‘physicals’ could come together and make something interactive and fun for everyone.

Joey:

The challenges faced ranged across all sorts of things, from the circuit to the programming itself. Regarding my part in the programming, I was supposed to reconcile a working photoresistor and ultrasonic sensor function on the Arduino and then make it so that Processing could transfer instructions over to the Arduino. I did manage to make the two work in unison on the Arduino. However, even after utilising StandardFirmata and the serial port, I couldn’t get it to work through Processing. After consultations, we realised that while the photoresistor worked with Processing, the ultrasonic sensor would eventually cause a lag due to its sensitivity and rapid data transfer. We were going through too many mediums (from the Arduino circuit to the Arduino platform to the Processing platform). In the end, we had to change our idea by using Tanya’s pulley system to cover the photoresistor instead and marking the deletion of words with Shah’s timer. The circuit I made was also initially not working, but it turned out the photoresistor legs were too close together, resulting in a technical glitch. During the setup, we also could not portray the right conditions according to what we envisioned, due to logistical difficulties and a lack of time. Our trash bag fort, whilst comfortable, could have been more aesthetic (but the budget was an issue too). Ideally, the projector area should have been more enclosed as well, but the location we were at simply did not allow for such a setup, since it was very public and not the room setting we had previously planned for. The concept of the balloons, while conceptually well received, was also tedious to set up and took very long to reset after each person underwent the experience. If given more time, we could have adjusted these conditions and found a better place to conduct our installation. Processing also tended to crash sometimes if left stagnant for a period of time. However, I feel that it was a very rewarding experience overall despite the backups and hefty improvisations, because the environment was not kind to us. Processing with Arduino is not an easy setup, in my opinion (probably because I am still a novice at programming to begin with), but I think we did well in managing to achieve a high level of interactivity (for the installation to operate) and in combining mixed mediums to make the installation more immersive.

FINAL OUTCOME

Managed to get the words on the balloons.
There’s the timer placed.
Shah explaining our work.
Prof interacting with our work. This is the exit part of the small ‘fort’.
Prof interacting with our work by shooting the balloons down. 

Another participant interacting with our work. (He’s so into the shooting part)

After all that, Lei’s advice was that maybe we should not force the participant to take the headphones off, and instead let them choose whether to keep them on and keep listening to the music while shooting their words down, so that it feels a bit more cathartic and does not suddenly interfere with the flow of calm and ease they felt inside the comfortable small fort.

It’s good that we learnt a lot during the process and even from the final outcome of this project. This has been a fruitful experience for us.

We give our many thanks!

Interactive Media I – The Principles of New Media

In this essay, I will talk about the five principles of new media mentioned in Bradley Dinger’s “A Review of the Language of New Media”, followed by my thoughts on them and how some of these five principles can be applied to my group project “Confessions”. This is to gain a better understanding of interactive media art/new media and how it relates to the process and the work my group is doing.

The 5 principles that were talked about are:

  1. Numerical Representation
  2. Modularity
  3. Automation
  4. Variability
  5. Transcoding

Numerical Representation

What I understood of numerical representation from the reading is that a new media object is composed of digital code, hence it takes a numerical form, making it altogether a new form of art. To achieve this new media art, there needs to be a conversion of continuous data; to put it simply, code is used to create the new media art. That conversion is called digitisation, which consists of sampling and quantisation: data is first sampled, usually at regular intervals, and then each sample is quantified. I see this ‘numerical representation’ being applied when our group does a bunch of coding 1) to create a space for participants to type out their thoughts, and 2) to have those thoughts displayed on the screen by having another computer read the input. Behind all this, we break it down into sampling and quantisation, which I feel is similar to how the variables we put in the Arduino’s loop() function are run through multiple times per second and translated depending on how we code them.

Modularity

Modularity is the fractal structure of new media, meaning that each element, or even each unit that a new media artwork comprises, can be edited without affecting the essence of the artwork. It will still maintain its identity and independence. In contrast with traditional media, parts of a new media artwork, for example an image ungrouped in Photoshop, can be accidentally deleted without rendering the entire artwork meaningless, as they can be substituted with a click of a button.

Now, linking this to my group’s project: the first aspect is that 1) the words displayed can be easily edited, and similarly the aesthetics around them (the background behind the words, how the words can be shot down, etc.). Depending on what we code, we can change all of that, and that is only the programming aspect of our project. The physical part involves changing the aesthetics of the two rooms we are using, and doing so without affecting the code. However, when it comes to the coding aspect, I can’t help but question whether this is debatable, because when one takes out or forgets a simple instruction such as a function or “delay(1000)”, the whole new media artwork is a mess and sometimes it doesn’t even function.

To me, that counts as rendering the work meaningless, if changing little things like that breaks a work that is supposed to keep running (for example, a moving dog prototype). But maybe the whole point of this is to tell us that with new media anything is possible, in the sense that it can be edited very easily and any mistake made can be undone.

Automation

Automation builds on the first two principles mentioned above. It is simply the idea of an artwork being able to run by itself. A complex computer program can automatically generate 3D objects such as trees, landscapes and humans. This is seen in some Hollywood films that use artificial life software, such as Black Panther’s Afrofuturistic society and Wakanda setting (seen below).

Automation is applied to my group’s project in that the code constantly and automatically checks whether there is input in the form of messages from the participant, and when there is an input, the message is displayed.

Variability (EDITED)

New media is characterised by variability. Here we learn that a new media object can exist in many different versions and can have one or more interfaces to a multimedia database. This can be seen in a hypermedia structure, which specifies a set of navigation paths (connections between nodes) that can potentially be applied to any set of media objects. Another example is a crowd-sourced artwork, such as a world clock updating every minute with pictures of numbers found in environments around the world, uploaded by people everywhere. It is different in every instance, as it relies on input from the audience to function as an artwork. In my group’s project, the artwork is likewise constantly changing, as the outcome depends on each participant’s decision either to destroy the words they have typed out and shown on screen, or to leave them, in which case somebody else might destroy them instead.

Transcoding (EDITED)

Transcoding is crucial in an interactive artwork (at least, to me it is). To transcode something is to translate it into another format, and this can sometimes stimulate the audience’s minds to think and reflect. Here, the “cultural layer” of transcoding tells us that the audience gains a new perspective on the topic of the artwork as it is represented with technology as its medium. An example would be the creation and use of VR (virtual reality) in Sony PlayStation games, which allows gamers to have an immersive experience in a simulated environment.

This is where we see the cultural aspect of an environment in reality being translated into the computer layer, as seen when the simulated environment in VR looks like a replica of the real one. Hence, through transcoding, at the computer level we are also able to manipulate the environment, whereas in real life we cannot.

My group is transcoding the interstice of thoughts and representing it as a space, through the use of a computer, that can be interacted with. Relating back to this project under the theme of “Interstice”, everyone in the class is transcoding a form of interstice, just in different ways, and each of us, as the audience, experiences something new from each group’s work as well.

In conclusion, the use of technology changes our work by making it a “larger” thing, amplifying the thoughts beyond what they simply are and putting them out there for everyone to see.

Interactive Media I – Singapore Night Festival (Research Critique)

Interactive artwork 1:

AQUATIC DREAM BY AUDITOIRE & LEKKER ARCHITECTS, CO-PRESENTED BY PUB, SINGAPORE’S NATIONAL WATER AGENCY

Image obtained from https://www.nightfestival.sg/nightlights/detail/aquatic-dream-by-auditoire-and-lekker-architects

Stepping into the site and immediately hearing the ambient ocean sound, you are pulled further into the artwork itself. It’s mesmerising to soak yourself in what feels like an underwater world, catching a glimpse of life under the sea. The blue and purple hues of the aquamarine set-up, complete with a school of koi and jellyfish hanging overhead, thin plastics flying about like seaweed, and glittering corals, all contribute to the artwork and make the audience want to immerse themselves fully and never leave. Playing with the illuminating ocean colours (blue, purple, orange for the koi) and ocean sounds (like the whale calls you hear in the movie Finding Nemo) really does make you feel like you are in a dream under the sea. This work truly lives up to its name; an Aquatic Dream it is.

Though, there are some things I feel that could make it better:

  • The glittering corals could have made the audience more immersed if they circled them. Instead of just one machine installed for the breeze effect and waiting for the wind to blow naturally, why not have four circling the audience? I think it would be beautiful to have the corals surrounding and dancing around the participants, and similarly with the bubbles. A larger number of koi above us as well, so we could really feel like we were under the sea instead of it looking merely like an artwork. If you look at the work, there’s a distinct barrier between the corals and the koi, and it felt off. I feel that if you want the audience to participate and feel fully immersed, everything should be balanced together and surrounding them.
  • I would have enjoyed the artwork even more if we, as the audience, could step onto one part of the installation where there’s a glass surface on the floor and actually experience being in the middle of everything surrounding us.

By far, this is still my favourite of all the artworks/installations I visited.

Interactive artwork 2:

ODYSSEY BY ARNAUD POTTIER & TIMOTHÉE MIRONNEAU (WB SHOW) (FR), PRESENTED BY OPPO

Image obtained from https://www.nightfestival.sg/nightlights/detail/odyssey-by-arnaud-pottier-and-timothee-mironneau-wb-show-fr

INTENSE. There was soft, slow music at the start, and not many projections of interesting visual effects that fully capture your attention, UNTIL the music escalated. In the middle part, the music (which, I found out from one of the creators, was made by them as well) reached a climax, and it had wonderful space vibes as it went along with the visuals of time and space (out in the cosmos filled with stars and planets). You get to explore the galactic wonders of the universe. That truly took me on an immersive journey. It’s amazing to see it projected against the Singapore Art Museum’s walls; it makes you marvel at what a prism of light can do. They also play with the speed of the visuals, which makes you feel like you are zooming in and travelling through time, going deep into the cosmos. It’s interesting how, here, you don’t need to physically move around to experience this interactive artwork, as the visuals, lights and sounds (a multi-sensory environment) successfully got everyone seated and immersed.

I like it a lot because of the concept of projection mapping, which, “similar to video mapping and spatial augmented reality, is a projection technology used to turn objects, often irregularly shaped, into a display surface for video projection.” Because it is not on a flat screen but projected onto a 3D object, the artwork becomes more interesting. It shows how light can be mapped onto any surface and made into an interactive display.

This was my first Singapore Night Festival experience. Overall, it’s amazing to see these interactive works, some I wish to make in the future!


Intro to Interactive Media I – Research Critique

Diving into an all-time fave: generative art by the famous contemporary video artist Bill Viola

“The Passing” by Bill Viola.

Viola, placed at the centre of this personal exploration of altered time and space, represents his mortality in such forms as a glistening newborn baby, his deceased mother, and the artist himself, floating, submerged under water.


“Tristan’s Ascension”

When video art becomes life, death and transcendence

Looking into both of these video artworks, generative art in a way, they are similar in making us, the audience, feel deeply immersed in the art. We feel spiritually inclined, or touched, by the artworks.

We, our souls, feel deeply about it. He uses a total environment that envelops us, the viewers, in images and sounds: the slow movement of water, for example, the sound of droplets, the blacked-out room, and just you, to really get us involved emotionally and, I guess, mentally.

“Everything in this room right now comes from somewhere else and is just passing through us at this moment and it will continue to live and grow. Ultimately we all come from the stars; our earth, our world, our bodies, our bones, mosquitoes are made from the stuff of the stars, literally, and it just keeps getting reconfigured. It’s a profound and very moving thing.” (Bill Viola, interview with ‘The Spirit of Things’, Radio National, Australia)


Do you think it’s important, as a viewer and also a creator, that we feel something so powerful, or at least feel something, after entering the interactive artwork?

That’s something to think about. But my answer to this, for sure, is a yes; it is highly important. You would want your viewers to be so fully engaged with your work that when they leave the environment of the work, they leave with strong emotions, or thoughts.