Task: Create an eye or movement tracker that follows you as you move left and right
- Detection of your face, calculation of the x-coordinate of the midpoint (sensing)
- Uploading of an eye-movement clip, scaling the playback frames of the video to the position of your detected face (effecting)
The first portion of this assignment involves detecting the face, then inverting the image and extracting the coordinates using jit.iter, similar to the magic mirror.
New objects/messages used in this assignment: jit.movie @autostart 0, frame_true $1, bang, getframecount
(left) Calculation of the x-coordinate of the midpoint of the detected face: add the top-left and bottom-right x-values, then divide by 2.
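The midpoint calculation above can be sketched in Python (the rectangle values are made-up examples; cv.jit.faces reports each face as left/top/right/bottom coordinates):

```python
def midpoint_x(left, right):
    """Midpoint x-coordinate of the face box:
    average the top-left and bottom-right x-values."""
    return (left + right) / 2

# Example face box with left = 120 and right = 200
print(midpoint_x(120, 200))  # -> 160.0
```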
(right) Uploading of the video using read and jit.movie; the video is played over a specific range of frames determined by the scale object.
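The mapping done by Max's [scale] object can be sketched as a linear remapping; the 320-pixel frame width and 90-frame clip length below are assumed values for illustration, not taken from the patch:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Rough equivalent of Max's [scale] object: linearly map a value
    from one range onto another (no clamping, matching Max's default)."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Map a face midpoint in a 0-320 pixel frame onto frames 0-89 of the clip
frame = int(scale(160, 0, 320, 0, 89))
```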
Problem: the frame jumps abruptly when no face is detected.
Solution: if the value is <= 0, play the middle frame.
(To be updated!! Have yet to get this part^ to work)
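Since the patch for this part isn't finished, here is only a minimal Python sketch of the intended fallback logic described above (the 320-pixel width is an assumed default):

```python
def choose_frame(midpoint_x, frame_count, width=320):
    """If no face is detected (the reported value is <= 0), hold the
    middle frame instead of jumping; otherwise map position to a frame."""
    if midpoint_x <= 0:              # no face detected
        return frame_count // 2      # fall back to the middle frame
    return int(midpoint_x / width * (frame_count - 1))
```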
Task: Using Max, create a virtual magic mirror that fades and brightens depending on the distance between the person and the mirror
- Detection of the person’s face and the calculation of the size of the face, marked by the green box (sensing)
- Inverting the image and changing its opacity, using jit.rgb2luma and prepend frgb (effecting)
What is Max?
Max is a visual programming language that connects objects with virtual patch cords to create interactive sounds, graphics, and custom effects. Like a mind map, sort of.
This is my first experience with Max, and I find it very different from the coding languages we were exposed to last semester. What I like about Max is that the mind-map structure makes it easier to comprehend the function of the programme as a whole. However, the new terms, commands and flow of the programme were a little challenging to grasp.
Started off this assignment by learning the basic objects, messages, numbers and how to connect them using patch cords.
(right) The face detection is done using cv.jit.faces, an object that scans a greyscale image for human faces. Hence, it is necessary to place jit.rgb2luma before cv.jit.faces.
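The greyscale conversion can be sketched as a weighted sum of the colour channels; the ITU-R BT.601 weights below are a common choice and an assumption here, as jit.rgb2luma's exact coefficients may differ:

```python
def rgb_to_luma(r, g, b):
    """Approximate greyscale (luma) value from RGB channels using
    ITU-R BT.601 weights (assumed; jit.rgb2luma may differ slightly)."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```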
(left) The objects jit.iter and unpack separate the coordinates of the detected face into 4 values. To calculate the area of the detected face (which is inversely proportional to the distance between the face and the screen, since a closer face appears larger), the x and y values are subtracted and multiplied. The resulting minimum and maximum areas are then scaled to a range between 0 and 1 before being input into jit.op, which controls the brightness.
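The area calculation and 0-1 scaling can be sketched in Python; the box coordinates and the calibration range below are illustrative values, not measurements from the patch:

```python
def face_area(left, top, right, bottom):
    """Area of the detected face box: subtract the x values and the
    y values, then multiply the two side lengths."""
    return (right - left) * (bottom - top)

def normalise(area, area_min, area_max):
    """Scale the area into the 0-1 range used to drive brightness
    via jit.op. area_min/area_max are assumed calibration values."""
    return (area - area_min) / (area_max - area_min)

# Example: an 80 x 100 pixel face box, calibrated against a 0-16000 range
brightness = normalise(face_area(120, 80, 200, 180), 0, 16000)
```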
Couldn’t get the programme to work for a while because I mixed up the objects (n) and messages (m), and the number (i) and float (f) boxes.
Limitation: the programme does not work with more than one detected face on screen.
Overall, it was a great learning experience and I look forward to exploring more features and possibilities with Max! 🙂