To break down the graffiti laser: jit.grab captures the camera feed, suckah extracts pixel colour information from that feed, and jit.draw uses the extracted pixel coordinates to draw a series of ellipses on the screen.
The swatch tool sets the specific colour for suckah to pick up, and the matching coordinates are then passed down the chain.
The tracking would be better with stronger lighting and a clearer contrast between the tracked colour and the background.
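The pipeline above is specific to Max, but the underlying idea – pick a target colour, find the matching pixels, and draw at their coordinates – can be sketched in plain Python. This is only an illustration of the technique, not the actual patcher; the toy frame, the `track_colour` helper and its tolerance value are all made-up stand-ins for the jit.grab/suckah stage:

```python
import numpy as np

def track_colour(frame, target, tolerance=30):
    """Return the (x, y) centroid of pixels close to `target` (RGB),
    or None when nothing matches -- roughly what suckah + tracking do."""
    diff = np.abs(frame.astype(int) - np.array(target, dtype=int))
    mask = diff.max(axis=-1) <= tolerance            # per-pixel colour match
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())            # draw an ellipse here

# Toy 8x8 "camera frame": black except a green blob at (5, 2)
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2, 5] = (0, 255, 0)
print(track_colour(frame, target=(0, 255, 0)))   # -> (5, 2)
```

The tolerance plays the same role as the threshold in the patcher: too tight and the laser dot drops out under weak lighting, too loose and background pixels get picked up.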
With the recent rise of VR and AR, Mark and I wanted to achieve interaction with a hologram.
Hardware/Software
- Leap Motion – hand gestures
- Max MSP – 3D interaction and sound
- Vvvv – 3D graphics
- Mac monitor & acrylic sheet – hologram
How are we going to achieve this?
The Leap Motion transforms hand gestures into coordinates, which are then fed into Max MSP. From there, the coordinates will influence a 3D model/object's rotation, scale and movement. Sound input will also be coordinated and handled in Max. Last but not least, we plan to explore Vvvv for generating 3D graphics to incorporate into the hologram.
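The coordinates-to-transform mapping can be sketched like this. Everything here is an assumption for illustration: the `hand_to_transform` name, the specific ranges, and the linear mappings are placeholders, not the Leap Motion SDK or the eventual Max patch (Leap reports palm positions in millimetres, but the interaction-box bounds used below are rough guesses):

```python
def hand_to_transform(palm_x, palm_y, palm_z):
    """Map a palm position onto rotation, scale and depth offset for a
    3D object. Assumed ranges: x, z in [-200, 200] mm, y in [100, 500] mm."""
    rotation_deg = (palm_x / 200.0) * 180.0          # move left/right to spin
    scale = max(0.1, (palm_y - 100.0) / 400.0 * 2)   # raise hand to enlarge
    offset_z = palm_z / 200.0                        # push/pull to move it
    return rotation_deg, scale, offset_z

print(hand_to_transform(100, 300, 0))   # -> (90.0, 1.0, 0.0)
```

The same three numbers could just as easily drive sound parameters in Max, e.g. hand height mapped to volume.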
I tried out the sequencer and arranged the motion detection boxes in 2 rows of 6, using coins as the contrasting objects. There are still problems with the third and fourth boxes of the second row: every time a coin passes over them, the sound retriggers extremely fast and sounds like a glitch. I'm not sure yet whether it's a threshold issue or the lighting, but I'll try to figure it out.
I wanted to create a music festival effect in which you take the place of Martin Garrix (the DJ) while lights flash all around. I started with the original template and slowly worked with what I had to produce this effect. As in previous attempts, facial recognition was used to draw the facial points, which were then mapped onto Martin's face.
After that, I wanted a lighting effect which revolves around playing with the hue of the background. The patcher is called jit.hue. Paired with a randomiser, it gives off a colourful, party-ish vibe.
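The effect boils down to rotating every pixel's hue while a randomiser feeds the offset. A minimal sketch of that hue rotation, using Python's standard `colorsys` module on a single colour (the `shift_hue` name is mine, and this is the general technique, not jit.hue's exact internals):

```python
import colorsys

def shift_hue(rgb, amount):
    """Rotate the hue of an (r, g, b) colour, components in [0, 1],
    by `amount` (0-1 = one full trip around the colour wheel)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + amount) % 1.0, s, v)

red = (1.0, 0.0, 0.0)
# A randomised `amount` per frame gives the party-light flicker:
print([round(c, 6) for c in shift_hue(red, 1 / 3)])   # -> [0.0, 1.0, 0.0] (green)
print([round(c, 6) for c in shift_hue(red, 2 / 3)])   # -> [0.0, 0.0, 1.0] (blue)
```

Because only the hue channel changes, brightness and saturation of the background stay intact, which keeps the flashing from washing out the image.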
The soundtrack and lights are added in the form of an AIFF file and a movie, then blended together using the alpha blend patcher.
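Alpha blending itself is just a weighted average of two frames: out = alpha × a + (1 − alpha) × b. A tiny sketch of that formula (illustrative only – the patcher does this per pixel on the movie and camera layers):

```python
import numpy as np

def alpha_blend(a, b, alpha):
    """Blend two uint8 frames: out = alpha*a + (1 - alpha)*b."""
    mixed = alpha * a.astype(float) + (1 - alpha) * b.astype(float)
    return mixed.astype(np.uint8)

white = np.full((2, 2, 3), 255, dtype=np.uint8)
black = np.zeros((2, 2, 3), dtype=np.uint8)
print(alpha_blend(white, black, 0.5)[0, 0])   # -> [127 127 127] (mid-grey)
```

Sweeping alpha between 0 and 1 is what lets the lights layer fade in and out over the video.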
Martin’s face is then added on top of the different layers to map over the user’s face. I used jit.brcosa to control the brightness, contrast and saturation of his face and this helps in blending as well as making the face pop.
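The brightness/contrast part of jit.brcosa can be approximated like this – contrast scales pixel values around mid-grey, brightness is a plain gain. This is a rough sketch of the idea, not the object's exact maths, and saturation is omitted for brevity:

```python
import numpy as np

def brcosa(frame, brightness=1.0, contrast=1.0):
    """Approximate brightness/contrast adjust on a uint8 frame."""
    x = frame.astype(float)
    x = (x - 128.0) * contrast + 128.0   # contrast pivots on mid-grey
    x = x * brightness                   # brightness is a simple gain
    return np.clip(x, 0, 255).astype(np.uint8)

grey = np.full((1, 1, 3), 100, dtype=np.uint8)
print(brcosa(grey, brightness=1.5, contrast=2.0)[0, 0])   # -> [108 108 108]
```

Pushing contrast slightly above 1 is what makes the face pop against the blended layers without blowing out the highlights.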
This is the final blueprint for the patcher. I tried using an alpha mask to soften the edges of the face, and also photoshopped the face (softening the sides), but to no avail – the edges still remained harsh.
I'll try to make the face tracking smoother, as right now it is still jittery and flickers from time to time.
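A standard way to tame that jitter is to low-pass filter the tracked coordinates, so each frame moves the face only part of the way toward the newly detected position. This is a common smoothing technique offered as a suggestion, not something already in the patch; the `Smoother` class and its coefficient are illustrative:

```python
class Smoother:
    """One-pole low-pass on a tracked coordinate: smaller `a` means
    smoother but laggier; larger `a` tracks faster but jitters more."""
    def __init__(self, a=0.3):
        self.a = a
        self.value = None

    def update(self, x):
        if self.value is None:
            self.value = x       # first detection: jump straight there
        else:
            self.value = self.a * x + (1 - self.a) * self.value
        return self.value

s = Smoother(a=0.5)
# Noisy x-coordinates from four frames of face detection:
print([s.update(v) for v in (100, 120, 80, 110)])   # -> [100, 110.0, 95.0, 102.5]
```

Running one smoother per coordinate (x, y, width, height) also helps with flicker, since a single dropped detection no longer snaps the face away.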
Facial detection seems to work fine with this background and specs on.
It also seems to pick up any entity with a *facial* structure. Overall, there are still some issues with the final output, such as fluttering values that cause the brightness to fluctuate haphazardly. I plan to adjust the settings so that the mirror stays brightened even when no faces are detected, which it currently does not. There's still much to learn and explore in Max; conquering little projects like this builds the problem-solving skills that are essential in the programming world.