Green Screen Graffiti

So for the final assignment I decided to do a little bit of research before diving straight in, and thought I would just search for “colour tracker Max/MSP” on YouTube. I found a handy tutorial that taught me how colour tracking through the webcam basically works and how the tracked position translates to the lcd object.

I made my own custom “laser pointer” by experimenting with how black could act as a mask for the green pointer I’d be using.
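The idea of picking out the green pointer can be sketched outside Max too. This is just a minimal illustration of the masking concept, assuming an RGB frame and a made-up dominance threshold, not the actual patch:

```python
import numpy as np

def green_mask(frame, threshold=60):
    """Return a boolean mask where green clearly dominates red and blue.

    frame: H x W x 3 uint8 RGB image. The threshold value is illustrative.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # A pixel counts as "pointer" when green exceeds the stronger of R/B.
    return (g - np.maximum(r, b)) > threshold

# Tiny demo frame: one strongly green pixel next to one grey pixel.
frame = np.array([[[10, 200, 10], [100, 100, 100]]], dtype=np.uint8)
print(green_mask(frame))  # [[ True False]]
```

Everything outside the mask stays black, which is effectively what the patch relies on.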

I didn’t like it very much. It only drew rectangles that were cleared immediately (because I put a clear message in front of the draw message), but Margaret suggested using paintoval, which was brilliant, because it made me realize I should’ve just gone over to the lcd reference in the first place.

I also didn’t like how the drawing would just sit there permanently, so I decided to apply what I’d learnt in class: a jit.op @op * @val 0.999, which I hoped would give a slow fading effect as the drawing’s brightness gradually decayed into the black background. But it didn’t work.

From some googling I realized I should use a jit.matrix as memory, and voilà, it magically worked. Lastly, I added a jit.alphablend so I could see where I was drawing, and that wasn’t too difficult. The end result is the video above.
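The fading trail comes from a simple feedback loop: keep last frame in a buffer, multiply it by 0.999, then stamp the new strokes on top. Here is a rough Python sketch of that loop, with the buffer standing in for the jit.matrix memory (array sizes and stroke coordinates are invented for the demo):

```python
import numpy as np

DECAY = 0.999  # same factor as the jit.op @op * @val 0.999 in the patch

# Persistent buffer standing in for the jit.matrix used as memory.
canvas = np.zeros((4, 4), dtype=float)

def step(canvas, strokes=None):
    """One frame: fade the stored image, then stamp any new strokes."""
    canvas *= DECAY                    # jit.op multiply: slow fade to black
    if strokes:
        for (y, x) in strokes:
            canvas[y, x] = 1.0         # freshly drawn pixel at full brightness
    return canvas

step(canvas, strokes=[(1, 1)])         # draw once
for _ in range(1000):                  # then let it fade for 1000 frames
    step(canvas)

print(round(canvas[1, 1], 3))  # ≈ 0.999**1000 ≈ 0.368
```

Without the buffer there is nothing to multiply frame-to-frame, which is why the jit.op alone had no visible effect.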

This exercise was very effective: it helped me stretch my Max muscles by practically applying what I’ve learnt about Max to solve the project without much hand-holding. Thanks LPD!

veejay yay

I took LPD’s Draft 4 and had a good look at it before realizing that, of course, he wouldn’t give out the answers to his assignments. But Draft 4 was good enough as a base for me to understand what was actually going on.

I decided to tackle the motion regions first. The motion regions were important because each region I wanted to target had to take the camera input and output both a picture change and a sound change. The plan was to send a bang to each region’s respective Step to trigger its individual sound and picture, so each Step(n) acts as the input for the outputs that follow.

I created 4 other routes and linked each Step(n) to the trigger that would call for a picture change and a sound trigger. I then moved on to solving the picture’s appearance and the sound’s triggering.
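The region-to-Step routing above can be sketched roughly like this in Python. The region layout, threshold, and frame size are all made up for illustration; the point is just how per-region motion turns into a “bang” for the matching Step(n):

```python
import numpy as np

# Hypothetical layout: four regions as (y0, y1, x0, x1) slices of the frame.
REGIONS = {1: (0, 2, 0, 2), 2: (0, 2, 2, 4),
           3: (2, 4, 0, 2), 4: (2, 4, 2, 4)}
THRESHOLD = 30  # illustrative motion threshold per region

def detect_steps(prev, curr):
    """Return the region numbers whose summed frame difference exceeds the
    threshold -- standing in for the bangs sent to each Step(n)."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    banged = []
    for n, (y0, y1, x0, x1) in REGIONS.items():
        if diff[y0:y1, x0:x1].sum() > THRESHOLD:
            banged.append(n)  # Step(n) fires: trigger its picture + sound
    return banged

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[3, 3] = 200               # motion only in region 4
print(detect_steps(prev, curr))  # [4]
```

Each fired Step then fans out to its own picture change and sound trigger, just like the routes in the patch.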

I decided to tackle the sound next. It was funny: after looking at the reference for playlist, I immediately went and ripped some sounds off the internet and converted them to WAV, thinking WAV files are the most universal audio format, but nope, apparently the audio ONLY plays when it’s an .aiff file, which is sort of strange. I linked each sound to its slot by sending a message with a number, which would cue that sound in the playlist; that number came from the TrigSound(n) sent by the main patch.
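The number-message-to-sound mapping is essentially a lookup table. A minimal sketch of that idea, with hypothetical filenames standing in for the actual clips:

```python
# TrigSound(n) arrives as an integer message, which cues the matching clip
# in the playlist. These filenames are invented for the example.
SOUNDS = {1: "kick.aiff", 2: "snare.aiff", 3: "hat.aiff", 4: "clap.aiff"}

def trig_sound(n):
    """Look up and 'play' the clip cued by message n."""
    clip = SOUNDS.get(n)
    if clip is None:
        return None          # unknown index: ignored, like an unmatched route
    return f"playing {clip}"

print(trig_sound(2))  # playing snare.aiff
```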

Finally, I loaded the pictures and called each one up on receiving a bang from the main patcher, with the result then output to the jit player.

Overall it was a fun experience to recreate something LPD did, with a little twist to it. Can’t wait to see what we’ll learn in the two years to come!