Featuring Zack the Corgi and the imposter, via Max face changer

My sister with Zack; this was an amusing moment

Things to improve on:

  • Making the Corgi face bigger and fitting it onto the detected human face (one possible approach is sketched after this list)
  • At times the detector glitches when multiple faces are in frame. For future development, it would be interesting to have a face tracker that handles multiple faces at once, something like a Snapchat filter
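
One possible way to do the first point, sketched in Python with OpenCV rather than in Max (the corgi image file name and the face box numbers are made up for illustration), would be to resize the overlay to the detected face's bounding box:

```python
import cv2

def paste_overlay(frame, overlay, face_box):
    """Resize the corgi overlay to the detected face box (x, y, w, h) and paste it in."""
    x, y, w, h = face_box
    fitted = cv2.resize(overlay, (w, h))   # stretch the corgi face to the detected face size
    frame[y:y + h, x:x + w] = fitted       # assumes the box lies fully inside the frame
    return frame

# Hypothetical usage, with an assumed image file and face box:
# frame = paste_overlay(frame, cv2.imread("zack_face.png"), (120, 80, 160, 160))
```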

Max Assignment 1: Mirror

Some fun documentation of using this Max Mirror with friends.

(Side fact: the documentation was filmed during a birthday dinner, hence the spontaneity at the end 😉 )

Our task was to create a mirror that dims as we move closer to it. Sensing was done with the webcam, detecting the face (cv.jit.faces) and estimating its distance from the camera. Effecting was done by dimming the screen as the face gets closer.
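
In plain code, the idea is roughly this (a minimal Python sketch, not the actual Max patch; the face-width range used as a stand-in for distance is an assumption):

```python
MIN_FACE_W = 40    # assumed face width in pixels when standing far from the webcam
MAX_FACE_W = 240   # assumed face width in pixels when right up against it

def mirror_brightness(face_width_px):
    """Map the detected face width to a brightness from 1.0 (far) down to 0.0 (close)."""
    closeness = (face_width_px - MIN_FACE_W) / (MAX_FACE_W - MIN_FACE_W)
    closeness = min(max(closeness, 0.0), 1.0)   # clamp to the 0..1 range
    return 1.0 - closeness                      # the closer the face, the dimmer the mirror

print(mirror_brightness(40))    # far away   -> 1.0, full brightness
print(mirror_brightness(240))   # very close -> 0.0, fully dimmed
```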

Screenshot of the Max file

This exercise was my first time trying out Max MSP. It was quite confusing at times, but it helped me gain a better understanding of the subject. What I understood about Max was the concept of “sending messages” between objects, which makes the flow of inputs and outputs more straightforward to follow. There is definitely still much to learn about this.

Example of interacting with the Mirror

There were some limitations that I got stuck on. As seen in my documentation video, the screen would often flash when my friends held their faces at odd angles, or when the computer detected multiple faces.


Max Assignment 2: Eye Tracker (Progress)

This assignment revolved around determining the video frame based on the x-position of the face. The sensing is the camera tracking the x-coordinate of the face; the effecting is the playing of the corresponding video frame.
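
Roughly, the sensing step can be sketched like this in Python, with OpenCV's stock face detector standing in for cv.jit.faces (the detector parameters here are common defaults, not values from my patch):

```python
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_x_normalised(gray_image):
    """Return the largest face's centre x as a fraction of the image width, or None."""
    faces = face_detector.detectMultiScale(gray_image, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep only the biggest face
    return (x + w / 2) / gray_image.shape[1]              # 0.0 = left edge, 1.0 = right edge
```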

For this assignment, I decided to use my dog, Zack, to add a personal touch.

Initial video for tracking

GIF of tracking with the initial video

However, as seen above, it was hard to tell whether the video frames were moving according to the face position, so I switched to this video of Zack.

Original video used for the final version; the movement is more linear

Second video used for tracking; Zack's linear movement makes it easier to see whether the video frames correspond to the face position

Here’s the final output:

This exercise, along with the mirror assignment, helped me grasp a better understanding of the capabilities of cv.jit.faces. The trouble I faced during this assignment was calculating the values to produce a face track that corresponds accurately to the video frame; I am still not fully certain about the calculations, and sometimes they gave negative values.
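
For reference, the scaling I was aiming for could be written out like this (a Python sketch with assumed numbers for the camera width and clip length, not the actual patch); clamping the result is what would stop the negative values:

```python
CAM_WIDTH = 320     # assumed width of the camera image in pixels
FRAME_COUNT = 100   # assumed number of frames in the Zack clip

def face_x_to_frame(face_x):
    """Scale a face x-position (0..CAM_WIDTH) to a frame index (0..FRAME_COUNT-1)."""
    frame = face_x / CAM_WIDTH * (FRAME_COUNT - 1)
    # A face half out of the image, or detector jitter, can report an x below 0,
    # which would otherwise give a negative frame number.
    return int(min(max(frame, 0), FRAME_COUNT - 1))

print(face_x_to_frame(160))   # centre of the image -> frame 49
print(face_x_to_frame(-15))   # jitter off the left edge -> clamped to frame 0
```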