MAX – Graffiti Research Lab [Interactive 2]

I spent quite a lot of time watching tutorials to figure out what to do.

I think what I’m supposed to do is get a moving line of sorts drawn on a jit.lcd, driven by the camera detecting a moving laser light or object. I don’t really understand the code from the Dropbox, so I tried doing a different one.

Here is what I have so far. Again, I don’t know where I went wrong, since my screens are black.

Okay, I figured out that I hadn’t connected qmetro to my jit.lcd directly; that’s why the visual did not appear. So currently it sort of works, in that I think it detects my red shirt (or a darker area) and draws a giant oval that follows where I move.
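For my own notes: the idea behind this kind of tracking is frame differencing, i.e. compare the current camera frame to the previous one and draw wherever the change happened. This is a hypothetical Python/NumPy sketch of that idea, not the actual Max patch:

```python
import numpy as np

def motion_centroid(prev_frame, curr_frame, threshold=30):
    """Frame differencing: find where movement happened.

    Both frames are greyscale uint8 arrays (H, W). Returns the
    (x, y) centroid of the changed pixels, or None if nothing moved.
    """
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > threshold)   # pixels that changed a lot
    if len(xs) == 0:
        return None                         # no motion this frame
    return int(xs.mean()), int(ys.mean())   # where to draw the oval

# Tiny synthetic test: a bright blob appears between two 8x8 frames.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:4, 5:7] = 255                        # the "laser dot"
print(motion_centroid(prev, curr))          # -> (5, 2)
```

In the patch, the centroid would then be fed to jit.lcd as the position of the oval.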

Above: Using my shirt to test.

Above: Using some black giant pen tool against a white paper background to test.


Above: I found it pretty cool that it can track the multiple times I moved and then stopped for a while (see the differently coloured greens/pinks for each set of movements).

I cleaned up the code some more and took out unnecessary stuff.

I added jit.fastblur to see if it would help. It changes the drawing to black oval “lines”, and it also makes tracking more sensitive, in that it now only detects the red of my shirt.
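From what I understand, blurring before thresholding smooths out single-pixel noise so only big, solid areas (like my shirt) survive the threshold, which would explain why the tracking got pickier. A rough Python sketch of a box blur (jit.fastblur is, as far as I know, a fast approximation of this kind of smoothing):

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur with edge padding: every output pixel
    is the average of its k*k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):                 # sum the k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

# A single noisy pixel gets averaged away (9 -> 1 over a 3x3 window),
# so it no longer crosses a detection threshold of, say, 5.
img = np.zeros((5, 5))
img[2, 2] = 9.0
print(box_blur(img)[2, 2])              # -> 1.0
```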

I realised it worked better with the lights off, against a white background. I used a red marker to achieve these results.

MAX – I Am VeeJee [Interactive 2]

First, I had to make 8 region boxes appear on my screen, since there are supposed to be 8 regions triggering 8 sounds. Under motion region, I added their respective coordinates as well as their “routes”. I found the key to changing these hiding in “p GrabSequence”.

Here, I realised that the line of code that draws the regions reads 1 x y x y 2 x y x y …: the first digit is the box number (the box “name” you’re allocating the coordinates to), and the x and y values are the coordinates of that region box. After making the region boxes, I increased the threshold so it can sense the contrast difference and trigger a sound when something moves over a region.
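To keep this straight for myself, here is how I read the region-list format and the triggering test, sketched in Python (the function names and the hit test are my own reading, not the actual patch):

```python
def parse_regions(msg):
    """Parse a region list like 'index x1 y1 x2 y2 index x1 y1 x2 y2 ...'
    (my reading of the format in p GrabSequence).
    Returns {index: (x1, y1, x2, y2)}."""
    nums = [int(n) for n in msg.split()]
    regions = {}
    for i in range(0, len(nums), 5):
        idx, x1, y1, x2, y2 = nums[i:i + 5]
        regions[idx] = (x1, y1, x2, y2)
    return regions

def region_hit(regions, x, y):
    """Which region (if any) a motion point (x, y) falls inside --
    the test that decides which of the 8 sounds to trigger."""
    for idx, (x1, y1, x2, y2) in regions.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return idx
    return None

regions = parse_regions("1 0 0 80 60 2 80 0 160 60")
print(region_hit(regions, 100, 30))   # -> 2
```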

I managed to make sounds play by triggering the regions. However, I’m unable to figure out how to make the slider trigger the sounds directly.

MAX – Brad Pitt [Interactive 2]

I decided to use Constance Wu’s face because of her role in the comedy Fresh Off The Boat, as well as the new movie Crazy Rich Asians.
Somehow I keep having difficulty making the screens appear. The number inputs don’t run either.

You can see the black screens above. I’ve tried deleting the black screens and copy-pasting new ones from working video boxes, yet it still doesn’t work.


I turned on another computer to try. I decided to go back to Brad Pitt’s face to do all the testing first before carrying on. The screens all appear now (i.e. no black screens); however, the Brad Pitt image refuses to appear over my face in both the bottom-middle and the bottom-right strings of code.

Frustrated, I airdropped everything onto a third computer to try. It didn’t work either. The face refuses to appear.

The second and third computers that I worked on.

I played around with the different modes of the alpha blend. I concluded that changing the mode wasn’t why my Brad Pitt face wasn’t plastering over mine in real time; rather, it just changed the effects. I double-checked that I’d banged everything, and still no luck. Also, the “s zzz” and “r zzz” seem to do nothing, probably just for printing purposes.
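As far as I understand alpha blending, the basic mix behind all those modes is a per-pixel weighted average; a tiny sketch (scalar per-pixel version, and what the modes vary is my assumption):

```python
def alpha_blend(src, dst, alpha):
    """Classic alpha blend: out = alpha*src + (1 - alpha)*dst.

    alpha=1 shows only src (the overlay image), alpha=0 only dst
    (the live camera), and values in between mix the two."""
    return alpha * src + (1 - alpha) * dst

print(alpha_blend(255, 0, 0.5))   # -> 127.5 (50/50 mix of white over black)
print(alpha_blend(255, 0, 1))     # -> 255 (overlay fully opaque)
```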

6.5 hours later, having hogged multiple computers, I conclude that I really don’t know why. I consulted Mark, Margaret and Tisya, who were kind enough to lend their time to help me solve the issue, but it was all in vain.


2 days later, I’m trying it again. It works now!


So now I just had a little fun playing around with the jit.fastblur functions/values.

It kinda looks like a Chinese opera mask now; it seems to map the major similar facial areas of the image.

I doubled the ripple value from the previous image and it turns the image into these more intricate pixel areas of sorts.

Here I made the center value higher; it seems like the center value makes the image blurrier. I’m not too sure though.

MAX – Electronic Mirror [Interactive 2]


For this homework, we were tasked to try to recreate Christian Muller’s Electronic Mirror. The idea is that the camera detects a face: when one is further from the camera, the image is clear, yet as one’s face approaches the camera (i.e. facial features seem to get bigger), the video playback display darkens to black.

LPD’s

Mine.
Changes made:
1. Inverted the video playback on screen so that when I raise my right hand, my hand on the right side of the screen shows up.
Learnt that: RGB2luma helps convert the image to black and white, which makes things less complicated (colour is complicated, whereas B&W is more simplified). This helps the computer detect the features of a face (darker-coloured eye areas and the mouth area).
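The standard greyscale conversion uses the Rec. 601 luma weights; I assume this is roughly what RGB2luma computes (the exact coefficients it uses are my assumption):

```python
def rgb_to_luma(r, g, b):
    """Rec. 601 luma: a weighted average of the colour channels.
    Green dominates because our eyes are most sensitive to it."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(rgb_to_luma(255, 255, 255), 3))  # -> 255.0 (white stays white)
print(round(rgb_to_luma(255, 0, 0), 3))      # -> 76.245 (pure red goes dark grey)
```

This is why dark features like eyes and mouths stand out cleanly in the luma image.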
 LPD’s.

Mine.
Changes made:
1. In the line object, changed the second number from ‘50’ to ‘100’.
2. In the scale object, changed the second number from ‘12345’ to ‘40000’. The latter number was chosen because when I put my face close to the screen, the rough number registered is around 40000. This ensures that when the input reaches 40000, the screen changes to black.
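As I understand it, the scale object does a linear range mapping; a Python sketch with the patch’s numbers (the output range 255 down to 0 is my reading of what makes the screen go black, not a value from the patch):

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear range mapping, like Max's [scale] object:
    maps x from [in_lo, in_hi] onto [out_lo, out_hi]."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Motion amount 0..40000 mapped to brightness 255..0,
# so a close face (big input number) -> black screen.
print(scale(40000, 0, 40000, 255, 0))   # -> 0.0 (fully black)
print(scale(0, 0, 40000, 255, 0))       # -> 255.0 (fully bright)
```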

Challenges:
Figuring out the vocabulary/buzzwords: what each special term means and what it can do.