Particle Rhythm explores the generation of music through movement, using Kinect, Ableton Live and MAX MSP. All sounds were generated live; nothing was pre-recorded.

The visuals generated with the music

The installation involved 3 computers and 2 Kinects. We used a dark room to enhance the atmosphere of the installation.

The setup of the installation

The sound generated and warped varied with the intensity of the user's movement. We had a couple of dancer friends try out the Kinect, and the results were rather experimental and interesting.

My friend trying out the Kinect

Ableton Live

MAX MSP

One of the major challenges was definitely computing power, which often could not keep up with the layers in MAX MSP, so the visuals were not as pleasing as we hoped. Also, the sounds generated through mapping were rather raw and at times inaccurate because of how the coordinates were mapped. The mapping of coordinates will definitely be something to overcome.
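
One direction we could explore for the raw, jittery coordinates (a sketch of an idea, not something we implemented) is to smooth each joint value before mapping it to sound, for example with an exponential moving average. The Python below is only illustrative, and alpha is an assumed tuning knob:

```python
class Smoother:
    """Exponential moving average over a stream of joint coordinates.
    Lower alpha = smoother but laggier response (assumed default)."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = None

    def update(self, raw):
        # First sample seeds the filter; afterwards blend new and old.
        if self.value is None:
            self.value = raw
        else:
            self.value = self.alpha * raw + (1 - self.alpha) * self.value
        return self.value
```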

Overall, we have learnt quite a lot about MAX MSP and Ableton Live through this final project, and we would definitely want to take this further on the artistic level. As we develop this in the future, we would like to experiment with concept and music, maybe even pure soundscapes and no EDM track. Perhaps there could even be a thematic focus, e.g. nature.

Featuring Zack the Corgi and the imposter, via Max face changer

My sister with Zack, this was an amusing moment

Things to improve on:

  • Making the Corgi face bigger and fitting it onto the human face
  • At times the detector glitches when multiple faces are in frame. For future development, it would be interesting to have the face tracker handle multiple faces at once, something like a Snapchat filter.

We had to create a photobooth in Max where the user's face has to be positioned in the middle of the webcam frame, guided by instructions like move left/move right/move up/move down. It was an interesting exercise: when I had my friends try out what I had created, they had amusing reactions. It was great that they had fun (and got confused)!

Initial testing of the interface using sample sounds

When the face is centered, all the sounds stop playing, which is how we know it is time to take the photo.
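
The decision logic is simple to sketch outside of Max. Here is a minimal Python version of the centering check (the function, the tolerance value and the cue strings are my own illustration, not the actual patch; it assumes a face center from some detector such as cv.jit.faces):

```python
def instruction(face_cx, face_cy, frame_w, frame_h, tolerance=0.1):
    """Return a directional cue, or None once the face is centered
    (cue sounds stop and the photo can be taken). tolerance is the
    fraction of the frame the face may drift from the middle."""
    dx = face_cx - frame_w / 2   # +ve: face is right of center in the image
    dy = face_cy - frame_h / 2   # +ve: face is below center
    if abs(dx) <= tolerance * frame_w and abs(dy) <= tolerance * frame_h:
        return None              # centered: stop all sounds
    if abs(dx) > abs(dy):        # correct the larger error first
        # Whether this reads as the user's left depends on mirroring.
        return "move left" if dx > 0 else "move right"
    return "move up" if dy > 0 else "move down"
```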

More finalized version once all the parameters have been set up

The main difficulties in this assignment were gauging the position of the center and tuning its parameters (how far the face can move before an instruction has to come in). Currently I was gauging the center by eye, which was bound to introduce parallax error; calculating it would be a smoother method. Also, the distance of the face relative to the webcam was not heavily accounted for.
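
Since a face box appears smaller the further away it is, one way to compute the center instead of eyeballing it, and to partly account for distance, could look like this (again a sketch, with base_ratio as an assumed value to tune by hand, not what the patch currently does):

```python
def center_and_tolerance(frame_w, frame_h, face_w, base_ratio=0.25):
    """Compute the frame center from its dimensions rather than by eye,
    and scale the pixel tolerance with the apparent face width, so the
    same physical head movement is tolerated at any distance."""
    cx, cy = frame_w / 2, frame_h / 2
    tol = base_ratio * face_w   # smaller (further) face -> tighter tolerance
    return cx, cy, tol
```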

Also, the sound did not repeat smoothly; setting up a better delay would be good. These are areas I would strive to improve on.

Max Assignment 1: Mirror

A fun documentation of using the Max Mirror with friends.

(Side fact: the documentation was taken during a birthday dinner, hence the spontaneity at the ending 😉)

Our task was to create a mirror that dims as we go closer to it. Sensing was done with the webcam, detecting the face (cv.jit.faces) and estimating its distance from the webcam. Effecting was done via the dimming of the screen as the face gets closer.
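
Since a face appears larger the closer it is, the apparent width of the detected face box can stand in for distance. A minimal Python sketch of the dimming curve (the near/far ratios are assumed tuning values, not the ones in the patch):

```python
def brightness(face_w, frame_w, near=0.6, far=0.1):
    """Map apparent face width to screen brightness in [0, 1].
    A face filling 60% of the frame width reads as 'very close'
    (dark), 10% as 'far away' (fully bright)."""
    ratio = face_w / frame_w
    t = (ratio - far) / (near - far)   # 0 when far, 1 when near
    t = max(0.0, min(1.0, t))          # clamp to tame flicker at the extremes
    return 1.0 - t                     # closer face -> dimmer mirror
```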

Screenshot of the Max file

This exercise was my first time trying out MAX MSP. It was quite confusing at times, but it helped me gather a better understanding of the subject. What I understood about Max was the concept of “sending messages” between nodes, which makes the input/output flow more straightforward. There is definitely much still to learn about this.

Example of interacting with the Mirror

There were some limitations that I got stuck on. As seen in my documentation video, the screen would often flash when my friends placed their faces at weird angles, or when the computer detected multiple faces.


Max Assignment 2: Eye Tracker (Progress)

This assignment revolved around determining the video frame from the x-position of the face. The sensing is the camera tracking the x-coordinate of the face; the effecting is the playing of the video at the corresponding frame.

For this assignment, I decided to use my dog, Zack, to add a personal touch.

Initial video for tracking

However, as seen above, it was hard to tell whether the video frames were moving according to the face position, so I switched to this video of Zack.

Original video used for the final version; the movement is more linear

Second video used for tracking; Zack's linear movement makes it easier to see whether the video frames correspond to the face position

Here’s the final output:

This exercise, along with the mirror assignment, helped me grasp a better understanding of the capabilities of cv.jit.faces. My trouble during this assignment was calculating the values that map the face track accurately to the corresponding frame; I am still not fully sure about the calculations. Also, sometimes there were negative values.
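
The mapping I was aiming for is essentially a linear scale from the face's x-position to a frame index. A minimal Python sketch of that idea (the function and the clamp are my own illustration, not the actual patch); clamping would also get rid of the negative values:

```python
def frame_for_x(face_x, frame_w, n_frames):
    """Linearly map the face's x-position (0..frame_w pixels) to a
    video frame index (0..n_frames-1). Clamping keeps faces detected
    partially off-screen from producing negative or out-of-range frames."""
    t = face_x / frame_w                  # normalize to 0..1
    idx = int(t * (n_frames - 1))
    return max(0, min(n_frames - 1, idx))
```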