Particle Rhythm explores generating music through movement, using a Kinect, Ableton Live and MAX MSP. All sounds were generated live; nothing was pre-recorded.

The visuals generated with the music

The installation involved 3 computers and 2 Kinects. We used a dark room to enhance the atmosphere of the installation.

The setup of the installation

The sound being generated and warped varied with the intensity of the user's movement. We had a couple of dancer friends try out the Kinect, and the results were rather experimental and interesting.
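The idea of driving a warp amount from movement intensity can be sketched roughly as below. This is only an illustration of the mapping, not the actual MAX MSP patch: the joint positions, the `max_intensity` scale and the 0..1 warp range are all assumptions.

```python
import math

def movement_intensity(prev, curr, dt):
    """Speed of a tracked joint between two Kinect frames.

    prev/curr are (x, y, z) joint positions in metres; dt is the
    frame interval in seconds. Returns metres per second."""
    dx, dy, dz = (c - p for c, p in zip(curr, prev))
    return math.sqrt(dx * dx + dy * dy + dz * dz) / dt

def intensity_to_warp(intensity, max_intensity=2.0):
    """Map intensity onto a 0..1 warp amount, clamped.

    max_intensity (an assumed calibration value) is the speed that
    produces the maximum warp; faster movement is simply clamped."""
    return max(0.0, min(1.0, intensity / max_intensity))
```

In a patch, the resulting 0..1 value would be sent on (e.g. as a MIDI CC) to a mapped Ableton Live parameter.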

My friend trying out the Kinect

Ableton Live

MAX MSP

One of the major challenges was definitely computing power, which mostly could not keep up with the layers in MAX MSP, so the visuals were not as pleasing as we had hoped. Also, the sounds generated through mapping were rather raw and at times inaccurate because of how the coordinates were mapped. The mapping of coordinates will definitely be something to overcome.
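One common way to tame raw, jittery Kinect coordinates before mapping them to sound is to smooth each coordinate stream with an exponential moving average. A minimal sketch (the `alpha` value is an assumption to be tuned; smaller values smooth more but lag more):

```python
def make_smoother(alpha=0.2):
    """Return a one-value-at-a-time exponential moving average filter.

    Each call feeds in the next raw coordinate and gets back the
    smoothed value: y = alpha * x + (1 - alpha) * y_prev."""
    state = {"y": None}

    def smooth(x):
        if state["y"] is None:
            state["y"] = x  # first sample passes through unchanged
        else:
            state["y"] = alpha * x + (1 - alpha) * state["y"]
        return state["y"]

    return smooth
```

One filter per coordinate (x, y, z of each joint) would be kept between frames, so sudden tracking glitches bend the sound less abruptly.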

Overall, we learnt quite a lot about MAX MSP and Ableton Live through this final project, and we would definitely want to take this further on an artistic level. As we develop this in the future, we would like to experiment with concept and music – maybe even pure soundscapes and no EDM track. Perhaps there could even be a thematic focus, e.g. nature.

We had to create a photobooth in Max whereby the user’s face needs to be positioned in the middle of the webcam frame, guided by instructions like move left/move right/move up/move down. It was an interesting exercise; when I had my friends try it out, they had amusing reactions. It was great that they had fun (and were confused)!

Initial testing of the interface using sample sounds

When the face is centered, all sounds stop playing, which tells us it is time to take the photo.
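The decision logic behind the instructions can be sketched as below. This is a simplified stand-in for the Max patch: the frame size, tolerance, and the left/right direction convention (which flips if the webcam image is mirrored) are all assumptions.

```python
def guidance(face_x, face_y, frame_w=640, frame_h=480, tol=40):
    """Return the instruction to play, or None when the face is centered.

    (face_x, face_y) is the detected face centre in pixels. tol is the
    half-width of the dead zone around the frame centre; None means
    stop all sounds and take the photo. Directions assume a mirrored
    (selfie-style) view."""
    cx, cy = frame_w / 2, frame_h / 2
    if face_x < cx - tol:
        return "move right"
    if face_x > cx + tol:
        return "move left"
    if face_y < cy - tol:
        return "move down"
    if face_y > cy + tol:
        return "move up"
    return None  # centered: silence the cues, ready for the photo
```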

The more finalized interface, after all the parameters had been set up


The main difficulty in this assignment was gauging the position of the center and its parameters (how far the face can drift before an instruction has to come in). I was gauging the center by eye, which was bound to introduce parallax error; calculating it would be a smoother method. Also, the distance of the face relative to the webcam was not heavily accounted for.
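One way to replace the eyeballed dead zone is to derive it from the frame dimensions, and to scale the tolerance by the apparent face size as a rough stand-in for distance. A minimal sketch; the scaling factor `k` and the minimum tolerance are assumed values that would need tuning:

```python
def frame_center(frame_w, frame_h):
    """Exact centre of the webcam frame in pixels (no parallax)."""
    return frame_w / 2, frame_h / 2

def centered_tolerance(face_width, k=0.5, min_tol=10):
    """Dead-zone half-width scaled by apparent face size.

    A closer face appears larger, so the same pixel offset is a
    smaller real-world movement; growing the tolerance with
    face_width crudely compensates for distance."""
    return max(min_tol, k * face_width)
```

The tolerance returned here would feed directly into the comparison against the frame centre that decides when to issue a move instruction.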

Also, the sound did not repeat smoothly; setting up a better delay would help. These are areas I would strive to improve on.