MAX Assignment 5: Pixels

PIXIE PATCH:

I wanted to add a little more colour into the pixels to make them look more lively, and I chanced upon an object called jit.scalebias.

jit.scalebias is an object that adjusts the colours of the imported movie/camera grab by scaling and offsetting each channel, using the RGB/A attributes rscale, rbias, gscale, gbias, bscale and bbias.
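To get a feel for what jit.scalebias is doing under the hood, here is a minimal Python/NumPy sketch of the same per-channel math (not the actual patch; the channel order and the scale/bias values below are made up for illustration): each channel is multiplied by its scale and then offset by its bias.

import numpy as np

def scalebias(rgba, scale=(1.0, 1.0, 1.0, 1.0), bias=(0.0, 0.0, 0.0, 0.0)):
    # Approximate jit.scalebias: out = in * scale + bias per channel,
    # with values normalised to 0..1 and clipped back into range.
    out = rgba.astype(np.float32) / 255.0
    out = out * np.array(scale) + np.array(bias)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

frame = np.random.randint(0, 256, (240, 320, 4), dtype=np.uint8)  # stand-in for a camera grab
# Channels here are ordered R, G, B, A: boost red a little and add a slight blue bias.
tinted = scalebias(frame, scale=(1.4, 1.0, 1.0, 1.0), bias=(0.0, 0.0, 0.1, 0.0))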

Input different colours into each of the four matrices…

Interactive II: FINAL

Presentation day setup:

Our homemade wands!

Projection on screen – we wanted to split the projection across all three screens initially, but were unable to calibrate all three of them. Hence, we displayed a foresty background on the other two sides!

 

Video documentation:

Interactive II: #3 Milestones

MILESTONE: One week before submission!

Merging our animation into the particle system:

Our 3D animation and 2D particle system rendered in the same space, but on different planes.

After a long while of experimenting and figuring out the patch, we finally managed to render the animation in point mode. We extracted the location matrix of the animation and linked it with the location matrix of the particle patch (and disabled the randomness generated by the noise). However, it resulted in a very glitchy spread of particles, although the general shape of the animation was there.
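The idea, roughly, is to use the animation's vertex positions as the particle positions instead of pure noise. A hypothetical Python sketch of that substitution (the array names and shapes are illustrative, not taken from our patch):

import numpy as np

def particles_from_animation(vertex_xyz, jitter=0.0):
    # vertex_xyz: (N, 3) array of positions taken from the animation's location matrix.
    # Returns particle positions, optionally with a little noise added back in.
    particles = vertex_xyz.copy()
    if jitter > 0:
        particles += np.random.uniform(-jitter, jitter, particles.shape)
    return particles

dog_vertices = np.random.rand(5000, 3)   # stand-in for the dog's location matrix
particles = particles_from_animation(dog_vertices, jitter=0.01)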

(Front view of the dog)

(Side profile of the dog)

After a few days, we finally managed to get a smooth 3D spread of particles for our dog – and animate it!

We also found an object called jit.xfade, which allows us to crossfade between two matrices: 0 shows the left matrix, 1 shows the right matrix, and anything in between displays a mix of the two.

Hence, our left matrix was the animated dog, and our right matrix was noise. We then connected the slider (scaled from 0 to 1) to our presence patch. Therefore, the faster the wand is waved, the more the dog appears!
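For reference, the crossfade itself is just a weighted mix. Here is a small Python sketch of the jit.xfade behaviour plus a hypothetical mapping from wand speed to the 0–1 slider (the max_speed value is made up):

import numpy as np

def xfade(left, right, amount):
    # Approximate jit.xfade: 0 shows the left matrix, 1 shows the right,
    # anything in between shows a weighted mix of the two.
    amount = float(np.clip(amount, 0.0, 1.0))
    return (1.0 - amount) * left + amount * right

def speed_to_mix(speed, max_speed=200.0):
    # Hypothetical mapping of the presence value (wand speed) onto 0..1.
    return min(speed / max_speed, 1.0)

dog = np.random.rand(240, 320, 3)     # left matrix: the animated dog
noise = np.random.rand(240, 320, 3)   # right matrix: noise
wand_speed = 150.0                    # made-up presence value
mix = 1.0 - speed_to_mix(wand_speed)  # faster wand -> mix nearer 0 -> more dog
frame = xfade(dog, noise, mix)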

 

moving on to…

 

Physical setup:
We met up in school on a Friday to build and test our smokescreen system. As we were unable to purchase dry ice on that particular day (it was a public holiday), we decided to experiment using a smoke machine we borrowed from school.

Connecting our hoses and hole-y pipe, and propping them onto a stand.

Unfortunately, the smoke machine turned out to be unfeasible (we kind of expected it, actually), because the smoke was very hot and there was too little of it. It overheated our funnel and melted the glue on the duct tape.

A few days later…

We managed to get the dry ice (3.0 kg) from the store! We built a pail system to channel the smoke from the pail up to the pipes, and dumped hot water onto the pail of dry ice to produce the smoke.

Result? -> Our system did work, but the smoke channelled out of the pipes was too weak and too sparse (short wisps, definitely unable to form a screen). Hence, we decided to increase the amount of dry ice.

On the day of presentation…

We decided to ditch the smokescreen idea because the smoke wasn't dense enough, and we really needed a clear projection to see the shape and change of the particles.

Interactive II: #2 Milestones

WEEK 2 MILESTONES 4/4

What we achieved:
We searched around for patches related to particles and camera blob tracking. Of the two particle patches we found, one had smoother flow and behaviour than the other. However, we had some problems understanding the patch because it involved many unfamiliar objects and attributes (jit.gl.mesh, jit.gl.render, jit.gen, jit.expr).

We used cv.jit.blobs.centroids for tracking the light source, and cv.jit.module.presence for detecting movement.
Sensing: Camera detection
Effecting: Drawing a blob at the brightest spot (depending on the set threshold and the brightness of the surroundings), and outputting the x/y coordinates of that spot

We linked both cv.jit patches with the particles, so the particles can now follow the blob. We also altered the number of particles and the level of attraction to suit our project; however, we failed to change the colour of the particles.
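As a rough analogy for the sensing step (not the cv.jit internals), the LED tracking boils down to finding the brightest spot in a greyscale frame and passing its coordinates to the particle system. A minimal NumPy sketch, with a made-up threshold:

import numpy as np

def brightest_spot(gray, threshold=200):
    # Find the brightest pixel in a greyscale frame; return its (x, y),
    # or None if it is dimmer than the threshold.
    y, x = np.unravel_index(np.argmax(gray), gray.shape)
    if gray[y, x] < threshold:
        return None
    return int(x), int(y)

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # stand-in greyscale grab
spot = brightest_spot(frame)
if spot is not None:
    attractor_xy = spot  # feed the LED position to the particle system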

Animation: Started on the rendering of a 3D deer and a deerhound.

Challenges:

1. Can’t change colour of particles
2. Camera was laggy because there were two face-detection processes running at once

New problems: Need to understand this complicated patch.

 

WEEK 3 MILESTONES 11/4

What we achieved:
With Lpd’s help, we managed to change the colour of the particles. It turns out there was an extra patch cord connecting our jit.gl.mesh to some random object, and none of us had noticed it.

We decided to add some audio effects to our patch – by linking it with the movement tracking (cv.jit.module.presence) and onebang. So, if the player’s movement speed is > 80, the magic wand sound effect will trigger.
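In plain code the trigger is just a threshold plus a one-shot gate, similar to running a [> 80] into [onebang]. A hypothetical Python sketch (the speed values are made up):

class OneBang:
    # Fires once when the condition becomes true, then stays closed
    # until the condition drops again (so the sound isn't spammed).
    def __init__(self):
        self.armed = True

    def update(self, above_threshold):
        if above_threshold and self.armed:
            self.armed = False
            return True          # trigger the magic wand sound here
        if not above_threshold:
            self.armed = True    # re-arm once the movement slows down
        return False

trigger = OneBang()
for speed in [10, 50, 95, 120, 60, 85]:   # made-up presence values
    if trigger.update(speed > 80):
        print("play wand sound at speed", speed)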

We met up for a prop-making session to work on our smokescreen.
Some process pictures of us drilling holes into a PVC pipe that we found:

Animation: Managed to lock down all key poses and get some basic movements for the walk cycle of the deer and deerhound.

Challenges:

1. Time constraint
2. Overcrowded patcher (needs to be split across two computers)
3. How do we put in the animation? (next step)

MAX Assignment 4: Impostor

Task: Superimpose another person’s face onto your own face on camera

  1. Detect the face using the camera and draw its bounding box (sensing)
  2. Place another face: upload a facial .jpg with a black background
  3. Blend the image onto the detected face so that it follows it, and soften its edges (effecting)

First part:
Detecting the face and calculating its bounding box is similar to what we’ve been doing in the previous exercises. (jit.grab + cv.jit.faces)

Second part:
Import your cropped image (just the face, with a black background) using either ‘read’ or ‘importmovie’. Remember to bang it when you run the patch.

Third part:
Define the position of the imported image using destination dimensions on the jit.matrix. The attributes ‘@dstdimstart’ and ‘@dstdimend’ tell the incoming matrix the specific start and end positions it should occupy. This can therefore be used to match the location of the image to the x and y coordinates of the detected face.

Fourth part:
Blend the image and the detected face together using ‘jit.alphablend @mode 1’. Soften the edges using ‘jit.fastblur @mode 4 @range 3’.
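To make the third and fourth parts concrete, here is a rough NumPy sketch of the same two ideas: writing the imported face into a blank matrix at the detected bounding box (what @dstdimstart/@dstdimend do), and mixing it with the camera image weighted by an alpha plane (roughly what jit.alphablend does). All array sizes and coordinates below are made up, and the blur step is left out:

import numpy as np

def place_face(canvas_shape, face_img, x1, y1, x2, y2):
    # Rough equivalent of @dstdimstart/@dstdimend: put the imported face
    # into an otherwise black matrix at the detected face's box.
    canvas = np.zeros(canvas_shape, dtype=np.float32)
    h, w = y2 - y1, x2 - x1
    canvas[y1:y2, x1:x2] = face_img[:h, :w]   # assumes face_img already fits the box
    return canvas

def alphablend(mask_rgb, camera_rgb, alpha):
    # Rough equivalent of jit.alphablend @mode 1: per-pixel mix of the mask
    # and the camera image, weighted by the alpha plane.
    a = alpha[..., None]
    return a * mask_rgb + (1.0 - a) * camera_rgb

cam = np.random.rand(240, 320, 3)    # stand-in camera frame
face = np.random.rand(60, 60, 3)     # stand-in imported face image
placed = place_face(cam.shape, face, 130, 90, 190, 150)
blended = alphablend(placed, cam, placed[..., 0])   # red channel used as the alpha plane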

Notes:

jit.matrix 4 char 320 240 @planemap 1 1 2 3 -> blends using only the red channel because of the leading ‘1 1’, so the blended alpha = red. For black-and-white images, use ‘1 1 1 1’ because you need to blend all the planes.

jit.alphablend @mode 0 -> displays just the Voldemort mask
@mode 1 -> displays both the mask and the camera image (use this)

You can switch between the colour and b/w versions using ‘gswitch2’. However, I think the black-and-white version blends better and looks more natural compared to the coloured one!

Video documentation

MAX Assignment 3: Photobooth

Task: Create a photobooth programme that directs the user to the centre of the screen, and takes a photo 3 seconds after he/she is in the right position.

  1. Detection of the face and drawing its bounding box (sensing)
  2. Different audio tracks triggered based on the position of the user (effecting):
    Right – play “move left!”
    Left – play “move right!”
    Up – play “go down!”
    Down – play “go up!”
  3. Take photo if no audio tracks are triggered for 3 seconds (effecting)

The first part is similar to the previous assignment: use jit.grab and cv.jit.faces to display the camera image and detect the human face, then draw out the face’s bounding box.

Using jit.iter, separate the four x and y values of the detected face. Depending on the dimensions and size of your camera window, determine the x and y values for the photobooth (a box area in the middle of the screen).

If the user is out of position, the x or y value will be larger or smaller than the bounds of the designated middle box, and the corresponding audio track will be triggered. If the value is 0 (no face detected) or within the bounds of the box, nothing happens.


If the user’s position remains within the middle box for more than 3 seconds, a screenshot will be taken and saved onto the desktop!
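Putting the whole flow together, here is a hypothetical Python sketch of the photobooth logic: check which side of the middle box the face is on, play the matching prompt, and only take the photo once the face has stayed inside the box for 3 seconds. The box coordinates are made up:

import time

BOX = {"x_min": 120, "x_max": 200, "y_min": 80, "y_max": 160}   # made-up bounds
inside_since = None

def direction_prompt(x, y):
    if x > BOX["x_max"]: return "move left!"
    if x < BOX["x_min"]: return "move right!"
    if y < BOX["y_min"]: return "go down!"
    if y > BOX["y_max"]: return "go up!"
    return None                       # inside the middle box

def update(x, y):
    global inside_since
    if x == 0 and y == 0:             # no face detected: reset and stay quiet
        inside_since = None
        return None
    prompt = direction_prompt(x, y)
    if prompt:
        inside_since = None
        return prompt
    inside_since = inside_since or time.time()
    if time.time() - inside_since >= 3.0:
        return "take photo"
    return None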

Video documentation

 

Notes:

X – metro 1000 – print turnon, and in parallel – delay 250 – print turnoff (turns on AND off)
clocker 100: something like a stopwatch/timer (do something after 5000 ms)
X – clocker 1000 – (i) – > 10000 – (i) [changes from 0 to 1 when the number is > 10000]
X – metro 10 – O, and in parallel – speedlim 1000 – O (limits the speed to 1 bang/sec)
X – sel 0 1 – print zero / print one (can handle a bigger range of numbers: if 1 do this, if 2 do that)
key – branches out to four keys (e.g. arrow up/down/left/right)
(i) – split 0 10 – (i) [if the number is between 0 and 10, it appears here] / (i) [if the number is > 10, it appears here]
onebang – doesn’t bang repeatedly, only once when triggered
X – metro 1000 – counter 0 1 100 – (i) [counts from 1 to 100, +1 per second]
O – uzi 100 – print [prints 100 bangs in one go]
What’s the difference between pack i i i and pak i i i?
-> pack only triggers when the left input changes
-> pak triggers when any one of the inputs changes

Interactive II: Description & Milestones

Team Pattymore: Anam, Isaac, Jessie, Joan
Project location: Installation @ room downstairs

Our aim is to create the “Patronus Charm” from the Harry Potter series. The Patronus Charm is a magic spell that wizards cast from their wands. It casts a white vapour which can transform into an animal. In our version, when the wand is waved, an animal appears in the vapour and when it is still, only the vapour appears. The spread of the particles in the vapour will also increase with the speed of the wand waving.

We plan to achieve this by using motion tracking via light and blob tracking in Max 7 to trigger the appearance of both the white vapour and the animal. The white vapour will be generated particles and the animal will be an animated video. We will blend the particles and the animal to create a smooth transition based on the speed of the wand. The X and Y coordinates of the particles and the animal will be determined by the wand’s location. The wand itself will have an LED on the end of it so the camera can track the user’s movement.

All of this will be projected onto a fog screen to create the illusion that the patronus is in 3D. The fog screen will be designed based on tutorials such as this: https://prosauce.org/blog/2012/6/10/how-to-diy-improved-inexpensive-fog-screen.html And our aim is to create an effect like this one:

 

Tutorials on generating particles in Max:

 

Project Milestones:
Our plan for the next few weeks! -> https://docs.google.com/spreadsheets/d/1NpC8NnpvjGQyTXnlQejEV3mqLUMhFA-0V5pfKfUKBlI/edit?usp=sharing

MAX Assignment 2: Eye Tracker

Task: Create an eye or movement tracker that follows you as you move left and right

  1. Detection of your face, calculation of the x-coordinate of the midpoint (sensing)
  2. Uploading of eye movement clip, scale the frames of the video to the position of your detected face (effecting)

The first portion of this assignment involves detection of the face, inverting and extracting the coordinates using jit.iter, similar to the magic mirror.

eye1

New objects/messages used in this assignment: jit.movie @autostart 0, frame_true $1, bang, getframecount

eye3eye2

(left) Calculation of the x-coordinate of the midpoint of the detected face: add the top-left and bottom-right x values, then divide by 2.

(right) Upload the video using read and jit.movie, then play a specific range of frames determined by the scale object (which maps the face position onto the frame count).
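In other words, the patch just averages two x values and rescales the result onto the video’s frame range. A tiny Python sketch with made-up numbers:

def midpoint_x(x_top_left, x_bottom_right):
    # Midpoint of the detected face: add the two x values and divide by 2.
    return (x_top_left + x_bottom_right) / 2

def scale(value, in_min, in_max, out_min, out_max):
    # Same idea as Max's [scale] object: map one range onto another.
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

mid = midpoint_x(100, 180)                  # -> 140
frame = int(scale(mid, 0, 320, 0, 60 - 1))  # camera width 320 px, clip of 60 frames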

 

assignmt2

Problems encountered:

The frame jumps abruptly when no face is detected.
Solution: if the value <= 0, play the middle frame.

(To be updated!! Have yet to get this part^ to work)

MAX Assignment 1: Magic Mirror

Task: Using Max, create a virtual magic mirror that fades and brightens depending on the distance between the person and the mirror

  1. Detection of the person’s face and the calculation of the size of the face, marked by the green box (sensing)
  2. Inverting, changing the opacity of the image, rgb2luma and prepend frgb (effecting)

What is Max?
Max is a visual programming language that connects objects with virtual patch cords to create interactive sounds, graphics, and custom effects. Like a mind map, sort of.

This is my first experience with Max, and I find it very different from the coding languages we were exposed to last semester. What I like about Max is that the mind-map structure makes it easier to comprehend the function of the programme as a whole. However, the new terms, commands and flow of the programme were a little challenging to grasp.

untitled
Started off this assignment by learning the basic objects, messages, numbers and how to connect them using patch cords.

untitled3   untitled2
(right) The face detection is done using cv.jit.faces, an object that scans a greyscale image for human faces. Hence, it is necessary to insert jit.rgb2luma before cv.jit.faces.

(left) The objects jit.iter and unpack separate the coordinates of the detected face into four values. To calculate the minimum and maximum area of the detected face (which varies with the distance between the face and the screen: the closer the face, the larger the area), the x and y values are subtracted and then multiplied. The resulting area is then scaled from that minimum–maximum range down to between 0 and 1, before being fed into jit.op, which controls the brightness.
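As a worked example of that area-to-brightness mapping, here is a short Python sketch; the calibration values are made up, not taken from my patch:

def face_area(x1, y1, x2, y2):
    # Area of the detected face box: subtract the coordinates, then multiply.
    return (x2 - x1) * (y2 - y1)

def scale(value, in_min, in_max, out_min, out_max):
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

AREA_MIN, AREA_MAX = 1000, 40000            # smallest/largest areas seen while testing (made up)
area = face_area(120, 90, 220, 210)         # -> 100 * 120 = 12000
brightness = scale(area, AREA_MIN, AREA_MAX, 0.0, 1.0)   # this value drives jit.op in the patch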

ezgif-com-crop

Problems encountered:
I couldn’t get the programme to work for a while because I mixed up the objects (n) and messages (m), and the number (i) and float (f) boxes.

Limitations:
The programme does not work with more than one detected face on screen.

Overall, it was a great learning experience and I look forward to exploring more features and possibilities with Max! 🙂