Green Screen Graffiti

So for the final assignment I decided to do a little bit of research before diving straight in, and figured I would just search for “colour tracker MAX/MSP” on YouTube. I found a handy tutorial that taught me how colour tracking basically works through the webcam and how it translates to the lcd object.
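
For my own notes, here’s the gist of what that tracker is doing, sketched in Python/OpenCV rather than Max (purely illustrative, nothing here comes from the actual patch): threshold a band of green so everything else goes black, then take the centroid of whatever survives.

```python
import cv2

# Minimal colour-tracker sketch: isolate the green "pointer" and find its centre.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Keep a band of green hues; everything else becomes black (the mask).
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] > 0:  # something green is visible
        x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (x, y), 10, (0, 255, 0), 2)  # mark the tracked spot
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```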

I made my own custom laser pointer by playing with how black would act as a mask for the green “pointer” I’d be using.

I didn’t like it very much. It only drew rectangles that would be cleared (because I put a clear in front of the message), but Margaret suggested using paintoval, which was brilliant, and made me realize I should’ve just gone over to the lcd reference in the first place.

I also didn’t like how the drawing would just sit there forever, so I used what I’d learnt in class and applied a jit.op @op * @val 0.999, hoping for a slow fading effect where the drawing gradually dissolves into black. But it didn’t work.

From some googling I realized I should use a jit.matrix as memory, and voilà, it magically worked. Lastly, I added a jit.alphablend so I could see where I was drawing, which wasn’t too difficult. The end result is the video above.
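
In hindsight, the reason the multiply alone did nothing is that there was nothing persistent to fade: the decay only shows up once the drawing lives in a matrix that survives from frame to frame. Here’s that loop sketched in Python/NumPy terms (my own illustration, not the patch):

```python
import numpy as np

H, W = 240, 320
canvas = np.zeros((H, W, 3), dtype=np.float32)  # the jit.matrix "memory"

def graffiti_step(live_frame, pointer):
    """One frame of the loop: fade the memory, paint, blend over the live feed."""
    global canvas
    canvas *= 0.999  # the jit.op @op * @val 0.999 fade, applied to the memory
    if pointer is not None:
        x, y = pointer
        # Paint at the tracked spot (a filled square here, paintoval in the patch).
        canvas[max(0, y - 4):y + 4, max(0, x - 4):x + 4] = (0, 255, 0)
    # Rough stand-in for jit.alphablend: trails composited over the live image.
    alpha = canvas.max(axis=2, keepdims=True) / 255.0
    return (alpha * canvas + (1 - alpha) * live_frame).astype(np.uint8)
```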

This exercise was very effective as it helped me stretch my MAX head, applying what I’ve learnt practically to reach a solution for the project without much hand-holding. Thanks LPD!

veejay yay

I took LPD’s Draft 4 and had a good look at it before realizing that, of course, he wouldn’t give out the answers to his own assignments. Still, Draft 4 was a good enough base for me to understand what was actually going on.

I decided to tackle the motion regions first. They were the important bit: each region I wanted to target had to take the camera input and drive two outputs, a picture change and a sound change. The end product sends a bang to each region’s respective Step to trigger its individual sound and picture, so each Steps(n) acts as the input for the outputs that follow.
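
In spirit, each region’s job is just: “did enough pixels change inside my rectangle this frame? If yes, bang.” A toy Python version of that test, with made-up region coordinates and threshold (the real values live in the patch):

```python
import cv2

# Hypothetical regions (x, y, w, h), one per Step -- placeholders, not the patch's.
REGIONS = [(0, 0, 160, 120), (160, 0, 160, 120),
           (0, 120, 160, 120), (160, 120, 160, 120)]
THRESHOLD = 500_000  # how much frame-to-frame change counts as motion

def motion_bangs(prev_gray, gray):
    """One True/False 'bang' per region, like the Steps(n) in the main patch."""
    diff = cv2.absdiff(prev_gray, gray)
    return [diff[y:y + h, x:x + w].sum() > THRESHOLD
            for (x, y, w, h) in REGIONS]
```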

I created four other routes and linked each respective Steps(n) to the trigger that would call for a picture change and a sound cue. Then I moved on to solving the picture’s appearance and the sound’s triggering.

I decided to tackle the sound next. It was funny: after looking at the reference for playlist, I immediately ripped some sounds off the internet and converted them to WAV, thinking WAV was the most universal audio format. But nope, apparently the audio would ONLY play as an .aiff file, which is sort of strange. I linked each sound to its slot by sending a message with a number, which in turn cued that sound to play within the playlist; the message itself is pinged by the TrigSound(n) that comes from the main patch.
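
Stripped of the patching, the sound routing is just “number in, file out”. Something like this, illustratively (soundfile/sounddevice and the file names are my stand-ins, not what the patch uses):

```python
import soundfile as sf
import sounddevice as sd

# Placeholder file names -- one .aiff per slot, like the playlist entries.
SOUNDS = {1: "step1.aiff", 2: "step2.aiff",
          3: "step3.aiff", 4: "step4.aiff"}

def trig_sound(n):
    """Play slot n, like TrigSound(n) sending its number message to the playlist."""
    data, samplerate = sf.read(SOUNDS[n])
    sd.play(data, samplerate)
```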

Finally, I loaded the pictures and called each one up on receiving a bang from the main patcher, which then sent it out to the jitter player.

Overall it was a fun experience to recreate something LPD did, with a little twist to it. Can’t wait to see what we’ll learn in the two years to come!

Bieber Fever

12/02/18
Monday, 2204

Back at it again with a crazy idea: overlaying a smaller, sharper face on top of a blurrier face (courtesy of the jit.fastblur effect). Hmmm…

Using these two models, it wasn’t a perfect fit, and I found another problem regarding how bang works: two of them cannot be banging at the same time.

I dissected the problem slowly and understood the blur needed to be heavier to create a seamless face. Then came another crazy idea: extracting only the features of JB’s face, because that’s all I really need.
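
The compositing idea, spelled out: blur the whole frame heavily, then lay only the sharp face region back on top through a soft-edged mask so there’s no visible seam. A Python/OpenCV sketch of that idea (the elliptical mask is my own stand-in for extracting JB’s features):

```python
import cv2
import numpy as np

def composite_face(frame, face_box):
    """Heavily blurred frame underneath, sharp face on top, soft edges between."""
    x, y, w, h = face_box
    blurred = cv2.GaussianBlur(frame, (31, 31), 0)  # stand-in for jit.fastblur
    # Soft elliptical mask around the face so the edge blends instead of cutting.
    mask = np.zeros(frame.shape[:2], dtype=np.float32)
    cv2.ellipse(mask, (x + w // 2, y + h // 2), (w // 2, h // 2),
                0, 0, 360, 1.0, -1)
    mask = cv2.GaussianBlur(mask, (51, 51), 0)[..., None]
    return (mask * frame + (1 - mask) * blurred).astype(np.uint8)
```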

And it worked pretty well, I must say.

I decided to add a final light fastblur at the end, before sending it out of the subpatcher, to ensure a smoother result.

Finally, with all of this done, I’m left with just one problem: how can I have multiple bangs coming into the matrix at the same time?


06/02/18
Tuesday, 1540

Beliebers unite! I decided to use Justin Bieber for my face-swap exercise. Below is a small montage of the many faces I played with to get the most accurate Bieber face swap onto my own face.

The results weren’t great at first…

I kinda dig it though, LOL. But after getting the hang of it, the process became quite natural. Eventually I tested it on a friend, and it seemed pretty in sync with his face.

But I was quite unhappy with the sharp edges around the face. I’ll probably want to do something about that at a future date.

Mirror Mirror

RAW VIDEO: https://vimeo.com/253456807/c6cc6acc5a

PS. It’s black because my face is way too near. Scroll down for my process journal!


30/01/18
Tuesday, 1604

Hmm… During the lesson I decided not to look at the model code and to homebrew my own nasty concoction instead!

Firstly, I restarted everything (ok, that’s mainly because I hadn’t started my trial)! I started afresh with the mindset of completing the project with only efficient, clean blocks of code, to keep my thought process tidy.

I started with the simple idea of having one subpatcher to open up the webcam, one to do facial recognition, and one to scale the result for the slider.

This was the easy part. I referenced most of it from the help files and from what I’d done last week. The harder part would be the calculations behind the scaler.
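
For reference, the facial-recognition subpatcher boils down to “give me a bounding box for the face”. The equivalent in Python/OpenCV (using the stock Haar cascade purely as a stand-in for whatever the Max external does):

```python
import cv2

# Stock OpenCV face detector -- standing in for the facial-recognition subpatcher.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the first face as (x, y, w, h), or None when no face is present."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None
```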

This may look simple because it’s the final output, but it went through many iterations. Before arriving at my final code I was looking into the proper usage of scale to find the sweet spot, and also into how I could create the effect of dimming the mirror rather than the other way round (as below).

I knew straight away there was a problem with the scaler, as it wasn’t giving me the output I needed (the percentage of the screen my face takes up, pixel-wise). I also faced a dilemma: I couldn’t put my square’s area on the right, as it would not automatically update while the constant 76800px (from the 320px × 240px resolution) sat on my left.

I went on to check the references on the MAX MSP Jitter website, found out about !/ (reverse division), and decided to give it a go! It was a success and gave me the exact percentage I needed to achieve my result!
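
Written out, the maths the scaler needed is tiny: at 320 × 240 the frame has 76,800 pixels, so the value I want is just the face rectangle’s area over that. (In Max, !/ simply performs the division with its operands swapped, which is what let the constant sit where it needed to.) For example:

```python
FRAME_PIXELS = 320 * 240  # 76,800 px

def face_fraction(face_box):
    """Fraction of the frame my face covers -- the value that drives the mirror."""
    x, y, w, h = face_box
    return (w * h) / FRAME_PIXELS

# e.g. a 120 x 90 px face: 10800 / 76800 = 0.140625, about 14% of the frame
print(face_fraction((100, 60, 120, 90)))  # -> 0.140625
```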

It was delivering exactly what I wanted! But because I was wearing glasses, the facial recognition wasn’t detecting me reliably. I even added a jit.window as a prelude that would lead me into full screen!

Full screen was pretty simple! All I did was a simple key > select 32 (spacebar), which was taught in class. However, I identified a problem: the screen was flashing way too often because the facial recognition wasn’t picking up faces consistently. I could have called it done at this point, but I wasn’t about to give up knowing it wasn’t perfect.
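
For the record, the full-screen trick is the same idea in any environment: watch for key code 32 and flip the window state. An OpenCV equivalent of the key > select 32 pattern, just as a sketch:

```python
import cv2
import numpy as np

cv2.namedWindow("mirror", cv2.WINDOW_NORMAL)
fullscreen = False
frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in for the video feed
while True:
    cv2.imshow("mirror", frame)
    k = cv2.waitKey(30)
    if k == 32:  # spacebar, as in key -> select 32
        fullscreen = not fullscreen
        cv2.setWindowProperty(
            "mirror", cv2.WND_PROP_FULLSCREEN,
            cv2.WINDOW_FULLSCREEN if fullscreen else cv2.WINDOW_NORMAL)
    elif k == 27:  # Esc quits
        break
cv2.destroyAllWindows()
```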

To tackle my problem, I had to stop the unnecessary flickering. I noticed that whenever my face isn’t present, the screen turns dark and the value becomes 0. So I limited my scale patcher to never output 0: I split out the value 0 and remapped it to 1, letting everything else flow through as per normal (refer to my finished patcher at the start).

I searched far and wide through the MAX MSP Jitter references, trying out line, line~ and rampsmooth~ inside the scaler, outside the scaler, and in the main patcher, before finding that all I needed was something to interpolate between values: a simple slide object that receives both my “face not present → brightness 1” value and my “varying brightness from square area” value. And it works like a charm!
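
Putting the whole anti-flicker fix together: remap “no face” from 0 to full brightness, then smooth every change with a slide-style filter that moves a fraction of the way to the target each frame, so jumps become ramps. A sketch, with the mapping and slide amounts as assumptions rather than my patch’s exact numbers:

```python
FRAME_PIXELS = 320 * 240

class Slide:
    """Slide-style smoother: step 1/slide of the way to the target each frame."""
    def __init__(self, slide_up=20.0, slide_down=20.0, start=1.0):
        self.value = start
        self.up, self.down = slide_up, slide_down

    def __call__(self, target):
        slide = self.up if target > self.value else self.down
        self.value += (target - self.value) / slide
        return self.value

smooth = Slide()

def brightness(face_box):
    """Assumed mapping: bigger face -> dimmer mirror; no face -> remapped to 1."""
    if face_box is None:
        raw = 1.0  # the "face not present -> brightness 1" split
    else:
        x, y, w, h = face_box
        raw = max(1.0 - (w * h) / FRAME_PIXELS, 0.0)
    return smooth(raw)
```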


23/01/18
Tuesday, 1618

I was super stoked about facial recognition and had to show my friend how it worked! I basically applied what I’d learnt and integrated the facial recognition, but was unable to draw out the x1, y1, x2, y2 coordinates. Guess I’ll wait for LP to teach it next week!