Interactive 2: Final Presentation Documentation

MEMORIES OF SOUND

Bao Song Yu & Zhou Yang


This is the video documentation for our final project presentation. We were glad that it evoked responses from the people watching the interaction between the installation and its participant. Many people took videos and photos of the participant's actions. This was the ideal scenario we wanted to achieve: the interaction between the installation and the participant, and a second layer of interaction between the bystanders and the participants.

Interactive 2: Final Project Documentation

MEMORIES OF SOUND

Bao Song Yu & Zhou Yang



Memories of Sound is an interactive sound installation that explores the role of sound in triggering human memories and experiences. We hope to understand and explore the connection that humans have with sound.

The users wear colored bands on their hands, which serve as the trigger points of the interaction. The movements of the colored bands are detected by an external camera linked to the Max 7 patch, which prompts changes in the audio delivered to the users through headphones. The changes in the audio allow the users to experience different sound contexts. We hope that this will trigger their personal interpretation of the sounds available to them.

Interactive 2: Documentation and Progress VII

MEMORIES OF SOUND

Bao Song Yu & Zhou Yang


We tried testing our interactive installation at the concrete wall. Initially we used light sticks as the light source. We took screenshots of the light sticks and sampled their color values in Photoshop, then entered the values into Max so that it could detect the colors. However, once we were in the open area, the camera could not detect the light sticks: the ambient light shifted the readings. In the end, we used colored paper instead.
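The color-matching idea behind the patch can be sketched roughly as follows. This is a minimal pure-Python illustration, not the actual Max patch; the target color, pixel values, and tolerance here are hypothetical, chosen only to show why ambient light can push a pixel past the match threshold.

```python
def color_distance(a, b):
    """Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def matches(pixel, target, tolerance=60):
    """True if the pixel is close enough to the target color."""
    return color_distance(pixel, target) <= tolerance

target = (255, 40, 40)           # sampled red of the light stick (hypothetical value)
indoor_pixel = (240, 55, 60)     # close to the sample: detected
outdoor_pixel = (255, 180, 170)  # washed out by ambient light: missed

print(matches(indoor_pixel, target))   # detected indoors
print(matches(outdoor_pixel, target))  # missed outdoors
```

A brightly lit scene lifts and desaturates the camera's pixel values, so the same object can drift far outside the tolerance sampled indoors, which is consistent with what we saw in the open area.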

It was fun testing out the installation ourselves. Through the testing we determined the optimum distance the person had to stand from the camera for the patch to work. We also tested different combinations of sounds to understand how they would affect the user trying out the installation. We just hope it will not rain on the day of the final presentation.

Max Assignment 5: Lozano-Hemmer’s Shadow Boxes + cv.jit.centroids Documentation

For the Lozano-Hemmer's Shadow Boxes patch, I played around with the values to create different effects for the visual output. I combined a cv.jit.centroids patch with it to add sound effects to the overall patch. The following is the documentation for the combined patch. Enjoy! 😀

The cv.jit.centroids patch can be found here:

https://www.youtube.com/watch?v=o4F34FL8BN4
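The core computation inside cv.jit.centroids is finding the center of mass of the "on" pixels in a binary image. A minimal pure-Python sketch of that idea, on a tiny made-up mask rather than a real camera frame:

```python
def centroid(mask):
    """Centroid (x, y) of the 'on' pixels in a binary image,
    given as a list of rows of 0/1 values. Returns None if no
    pixels are on."""
    sum_x, sum_y, count = 0, 0, 0
    for y, row in enumerate(mask):
        for x, value in enumerate(row):
            if value:
                sum_x += x
                sum_y += y
                count += 1
    if count == 0:
        return None
    return (sum_x / count, sum_y / count)

# A small blob in the lower-right of a 4x4 frame:
mask = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(centroid(mask))  # → (2.5, 2.5)
```

In the combined patch, a position like this is what gets mapped onto the sound parameters.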

Max Assignment 4: Face Tracker Video Documentation (With Video Velocity)

For the face tracking patch, I learnt how to map the face of Will Smith on top of my own for the visual output. I added the video velocity effect with cv.jit.HSflow to see how it would work with the face tracking patch. It managed to create some beautiful and interactive effects. The following is the documentation for the combined patch. Enjoy! 😀

The patch for the effect can be found here:

https://www.youtube.com/watch?v=qBOwZVyQG7o
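cv.jit.HSflow estimates per-pixel optical flow (the Horn-Schunck method), which is more involved than can be shown here. The simplest stand-in for "video velocity" is frame differencing: how much the image changed between two frames. A minimal pure-Python sketch on tiny made-up grayscale frames:

```python
def frame_difference(prev, curr):
    """Mean absolute per-pixel difference between two grayscale
    frames (lists of rows of 0-255 values); a crude measure of
    overall motion energy, not true optical flow."""
    total, count = 0, 0
    for row_a, row_b in zip(prev, curr):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            count += 1
    return total / count

still = [[10, 10], [10, 10]]
moved = [[10, 200], [10, 10]]  # one pixel changed a lot

print(frame_difference(still, still))  # → 0.0  (no motion)
print(frame_difference(still, moved))  # → 47.5 (motion detected)
```

A value like this can then drive the intensity of a visual effect, which is roughly the role the velocity signal plays in the combined patch.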

Max Assignment 3: Photo Booth Documentation

For the photo booth tutorial, I learned how to generate sound commands that tell the user to position their face for the webcam to take a screenshot. It was a complicated patch, and I had a hard time getting it to work. Once it worked, I substituted the original sounds with other sounds to create funny effects. I had fun making the patch and getting it to work. The following is the documentation of my patch. Enjoy! 😀

Max Assignment 2: Tracker

The project


This Max assignment aims to let us experience the technical steps involved in creating an eye-tracking setup. A video must be pre-recorded and imported into Max. Using the functions of Max, a patch is created to match the position values of the face received through the webcam with the frames of the video. By matching the values to the frames, the motion of the face triggers the part of the video that corresponds to it.
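The value-to-frame matching described above is essentially a linear range mapping, like Max's [scale] object. A minimal pure-Python sketch; the camera width and frame count are made-up example numbers, not the ones from my patch:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from one range to another,
    in the spirit of Max's [scale] object."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def frame_for_face_x(x, cam_width=320, total_frames=100):
    """Pick the video frame that corresponds to a face
    x-position, clamped to the valid frame range."""
    frame = int(scale(x, 0, cam_width, 0, total_frames - 1))
    return max(0, min(total_frames - 1, frame))

print(frame_for_face_x(0))    # face at left edge  → frame 0
print(frame_for_face_x(160))  # face at center     → frame 49
print(frame_for_face_x(320))  # face at right edge → frame 99
```

Moving the face across the camera's view then scrubs through the corresponding frames of the pre-recorded video.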


The challenges and problems


It took me a while to understand the logic behind matching the values of the face with the frames of the video. However, the patch that I created is still technically lacking in many aspects; it is affected by many conditions that it did not take into account. Very often, the motions of the face did not correspond to the ones in the video. The video also continued to loop when there was no face in front of the screen, and the presence of multiple faces in front of the screen affected the accuracy of the tracking.


The reflection


Through building up my patch for this assignment, I gained much more valuable technical knowledge. It was interesting to figure out the values and their connections with the frames of the video. The whole process of trying to get the setup to work was intriguing and satisfying. I am slowly starting to see the connections between the different commands and functions, though there is still much more to learn. These assignments have served as great exposure for me to understand and appreciate the capabilities of Max.


The documentation video for this assignment will be up soon. I am currently having some issues with Premiere Pro. Such is life.