Sensing: Using the web camera and face location to determine whether one is at the centre point

Effecting: Tracking

Computing: MAX

Assignment: Using Max as the platform, create a photo booth that gives audio commands to guide the person to the centre of the frame and takes a photo when they are in the right position.

Similarly, this assignment makes use of what we learnt about jit.grab and cv.jit.faces for display and face detection. What's new is the audio and programming it to take a photo.

Challenges: Firstly, it took me many rounds of trial and error to figure out the centre position and the correct audio to trigger. After that, it took me a while to programme the three-second countdown that runs once the person is in the right position before the picture is taken. However, multiple audio clips still play whenever triggered, which is a possible area for further development.
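For anyone curious about the logic outside of Max, here is a minimal sketch of the same idea in Python with OpenCV. It is not the Max patch itself; the file name, window name and tolerance value are made up, and the audio commands are replaced with printed prompts. It just shows the flow: detect the face, check whether it is near the centre, prompt a direction, and start a three-second countdown before saving a photo.

# Sketch of the photo-booth logic (Python/OpenCV), not the actual Max patch.
# Directional prompts are printed here; in the Max patch they trigger audio clips.
import time
import cv2

cap = cv2.VideoCapture(0)                      # laptop web camera (like jit.grab)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # like cv.jit.faces

countdown_start = None                          # when the face first became centred
TOLERANCE = 60                                  # pixels around the centre that count as "centred"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    if len(faces) > 0:
        x, y, w, h = faces[0]
        face_cx = x + w // 2
        frame_cx = frame.shape[1] // 2
        offset = face_cx - frame_cx

        # which direction counts as "left" depends on whether the preview is mirrored
        if offset > TOLERANCE:
            print("move left")                  # audio command in the Max version
            countdown_start = None
        elif offset < -TOLERANCE:
            print("move right")
            countdown_start = None
        else:
            if countdown_start is None:
                countdown_start = time.time()   # centred: start the 3-second countdown
            elif time.time() - countdown_start >= 3:
                cv2.imwrite("photobooth.jpg", frame)   # take the photo
                print("photo taken")
                break
    else:
        countdown_start = None                  # lost the face, reset the countdown

    cv2.imshow("photo booth", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()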

password: photobooth

Sensing: Using the camera in my laptop and face location to determine which frame of the video to display

Effecting: Tracking

Computing: MAX

Assignment: Using Max as the platform, create a video that acts as a movement tracker, following you as you move sideways.

This second assignment is almost an extension of the first: we still make use of what we learnt about face detection, but instead of changing opacity, the goal is to locate the face and match it to the corresponding frame of a pre-recorded video. Compared to the first assignment, tackling the first part of this was much easier, since it uses basically the same theory with a few tweaks to fit the task.

Challenges: The main problem was figuring out what values to use for the scale. I had to go through several rounds of trial and error before finally settling on the most suitable ones. Another problem was that when no face was detected, the frame did not freeze where it last was; it always jumped to the frame showing my finger pointing to the extreme left (I have yet to fix this). I also faced the same room-lighting problem when documenting.
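Again as a rough illustration rather than the actual patch, here is a short Python/OpenCV sketch of the tracker logic, with "clip.mov" as a placeholder for the pre-recorded video. It maps the face's x position to a frame index (the job the scale object does in Max) and simply keeps the last index when no face is found, which is one way the freezing problem could be handled.

# Sketch of the tracker logic (Python/OpenCV), not the actual Max patch.
# "clip.mov" is a placeholder name for the pre-recorded sideways-pointing video.
import cv2

cam = cv2.VideoCapture(0)
clip = cv2.VideoCapture("clip.mov")
total_frames = int(clip.get(cv2.CAP_PROP_FRAME_COUNT))

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

last_index = total_frames // 2                  # hold the middle frame until a face appears

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    if len(faces) > 0:
        x, y, w, h = faces[0]
        face_cx = x + w // 2
        # map face x (0..camera width) to a frame index (0..total_frames-1),
        # which is roughly what the [scale] object does in the Max patch
        last_index = int(face_cx / frame.shape[1] * (total_frames - 1))
    # if no face is detected, last_index is left untouched, so the clip freezes
    # on the last frame instead of jumping back to one end

    clip.set(cv2.CAP_PROP_POS_FRAMES, last_index)
    ok, clip_frame = clip.read()
    if ok:
        cv2.imshow("tracker", clip_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
clip.release()
cv2.destroyAllWindows()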

password: maxtracker

Sensing: Using the camera in my laptop and face size to estimate the distance of the person.

Effecting: Reflecting, opacity and transitioning

Computing: MAX

Assignment: Using Max as the platform, create a virtual magic mirror that responds to the distance between a person and the mirror, with the video brightening up (far) or fading out (near).

Programming languages are generally new to me and Max is no different. This is my first time experiencing it, and it feels quite different from the other languages I learnt last semester. However, I find it relatively easier to understand and use; I like how it works like an interconnected mind map that helps one see the full picture more easily.


Challenges: Initially the programme could not run properly, or as expected, because I mixed up "objects" with "messages" and "float" with "number". The connections were also sometimes difficult for me to grasp, as in which wire connects to which box. The main problem I faced was adjusting the values so that I got the opacity and transition I wanted (scale & jit.op). I also found it hard to document my work if the lighting wasn't good enough, as the face detector tends to detect many other things (non-face objects included) aside from my face, which caused the programme to glitch. This also raises the issue that if more than one object is detected in different areas and at different coordinates, the programme may flicker and "glitch".
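To make the idea concrete outside of Max, here is a small Python/OpenCV sketch of the mirror logic (again not the actual patch; the FAR_WIDTH and NEAR_WIDTH values are invented and would need the same trial-and-error tuning as the scale values in Max). It uses the width of the detected face as a distance proxy, darkens the image as the face gets bigger, and keeps only the largest detection, which is one way to reduce the flicker from false positives.

# Sketch of the magic-mirror logic (Python/OpenCV + NumPy), not the actual Max patch.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# rough face widths (in pixels) taken as the "far" and "near" limits; these need
# the same kind of trial-and-error tuning as the [scale] values in the Max patch
FAR_WIDTH, NEAR_WIDTH = 80, 300

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    brightness = 1.0                             # default: fully bright when far or no face
    if len(faces) > 0:
        # keep only the largest detection to reduce flicker from false positives
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        # bigger face = closer = darker; map width to 0..1 brightness,
        # roughly the job of [scale] feeding [jit.op] in the Max patch
        t = (w - FAR_WIDTH) / float(NEAR_WIDTH - FAR_WIDTH)
        brightness = float(np.clip(1.0 - t, 0.0, 1.0))

    mirror = (frame.astype(np.float32) * brightness).astype(np.uint8)
    cv2.imshow("magic mirror", mirror)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()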

password: maxmagicmirror