This is the video documentation of our final project presentation. We were glad that it evoked responses from the people watching the interaction between the installation and its participant; many of them took videos and photos of the participant's actions. This was the ideal scenario we wanted to achieve: one layer of interaction between the installation and the participant, and a second between the bystanders and the participant.
Memories of Sound is an interactive sound installation that explores the role of sound in triggering human memories and experiences. We hope to understand and explore the connection that humans have with sound.
Users wear colored bands on their hands, which serve as the trigger points of the interaction. The movements of the colored bands are detected by an external camera linked to the Max 7 patch, which prompts changes in the audio supplied to the users through their headphones. These changes let the users experience different sound contexts, and we hope this will trigger their personal interpretations of the sounds available to them.
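Our actual implementation is a Max 7 patch, but the core color-tracking idea can be sketched in plain Python with NumPy. This is purely illustrative, not our patch: the toy frame, target color, and tolerance value are made-up numbers.

```python
import numpy as np

def find_band(frame, target_rgb, tolerance=60.0):
    """Return the (x, y) centroid of pixels close to target_rgb, or None.

    frame: H x W x 3 uint8 array (one webcam frame).
    target_rgb: the sampled color of the band (e.g. picked in Photoshop).
    """
    diff = frame.astype(float) - np.array(target_rgb, dtype=float)
    dist = np.sqrt((diff ** 2).sum(axis=2))   # per-pixel color distance
    ys, xs = np.nonzero(dist < tolerance)     # pixels matching the band
    if len(xs) == 0:
        return None                           # band not visible this frame
    return (xs.mean(), ys.mean())             # centroid drives the audio

# Toy frame: black background with a small red patch near the top-left.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[2:4, 1:3] = (250, 30, 30)
print(find_band(frame, (255, 0, 0)))  # → (1.5, 2.5)
```

The centroid (or simply whether one exists) is what would then be mapped to changes in the audio output.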
We tried testing our interactive installation at the concrete wall. Initially we used light sticks as the light source: we took screenshots of the light sticks, sampled their color values in Photoshop, and entered those values into Max so it could detect the colors. However, once we were in the open area, the camera could not detect the light sticks, because the ambient light threw off the readings. In the end, we used colored paper instead.
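In hindsight, the failure makes sense: the RGB values we sampled indoors shift wildly under brighter outdoor light, while hue stays comparatively stable. A small illustration using Python's standard-library colorsys module (the sample values are invented for the demo, not measurements from our tests):

```python
import colorsys

def hue(rgb):
    """Hue in [0, 1) for an 8-bit RGB triple."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h

# The "same" green band sampled indoors vs. under bright outdoor light:
indoor = (40, 160, 40)
outdoor = (120, 240, 120)   # much brighter, so very different RGB values

print(hue(indoor), hue(outdoor))  # both ~0.333 (green)
```

A hue-based threshold would therefore have been more robust outdoors than the fixed RGB values we sampled in Photoshop.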
It was fun testing out the installation ourselves. Through the testing we determined the optimum distance a person has to stand from the camera for the patch to work. We also tested different combinations of sounds to understand how they would affect the user trying out the installation. We just hope it will not rain on the day of the final presentation.
The following are screenshots of the patch we built for our interactive installation.
The following is the video documentation of us trying out the Max patch. We used light sticks and several colored materials to test out the patch.
The main issue with our patch was the webcam failing to detect the colors at times, which was quite frustrating because we could not always tell whether the fault lay with the patch or with the camera. To put an end to the problem, we borrowed a more reliable external webcam, which made the color detection much more consistent.
For the Lozano-Hemmer Shadow Boxes patch, I played around with the values to create different effects for the visual output. I combined a cv.jit.centroids patch with it to add sound effects to the overall patch. The following is the documentation for the combined patch. Enjoy! 😀
For the face tracking patch, I learned how to map Will Smith's face on top of mine for the visual output. I added the video velocity effect with cv.jit.HSflow to see how it would work with the face tracking patch, and it managed to create some beautiful and interactive effects. The following is the documentation for the combined patch. Enjoy! 😀
For the photo booth tutorial, I learned how to generate spoken commands that prompt me to position my face for the webcam to take a screenshot. It was a complicated patch, and I had a hard time getting it to work. Once it did, I substituted the original sounds with others to create funny effects. I had fun building the patch and getting it running. The following is my documentation of the patch. Enjoy! 😀
As our planned interactive installation requires colored mediums and body movements to trigger audio outputs as its form of interaction, we tested out some patches to explore the various possibilities.
Audio outputs are triggered by recognizing the position of a held object (a yellow object) on screen: a different audio output is produced when the object is at a different position.
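The position-to-sound mapping amounts to dividing the frame into zones and picking a clip per zone. A minimal Python sketch of that idea, separate from the Max patch itself; the zone count, frame width, and clip names here are hypothetical:

```python
def sound_for_position(x, frame_width, clips):
    """Pick an audio clip based on where the tracked object sits on screen."""
    zone_width = frame_width / len(clips)
    zone = min(int(x / zone_width), len(clips) - 1)  # clamp the right edge
    return clips[zone]

# Three horizontal zones across a 640-pixel-wide frame (example values).
clips = ["rain.wav", "market.wav", "traffic.wav"]
print(sound_for_position(50, 640, clips))    # → rain.wav (left third)
print(sound_for_position(600, 640, clips))   # → traffic.wav (right third)
```

More zones, or a two-dimensional grid, would give finer control over which sound each hand position triggers.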
Now that we have solved the issue of triggering audio outputs, we will create and explore different audio clips suitable for our interactive installation.
We feel that the audio clips we assemble should give users enough relevant information to figure out the context of the space by themselves, rather than telling them outright. We hope this form of output will spark the users' interest and cause them to explore the space around them.