
For this third assignment, we are using Max and MSP to create a Selfie Instructor. Just to note, I did not add any music or additional voice-overs to the video, so what you hear is only the tracks triggered by MSP, plus my own reactions.

It basically detects where your face is on the screen and gives you instructions. The end result is to get you facing forward, looking at the camera with your face in the center, before it says “OKAY” and allows you to take the picture! Which you can do simply by making a sound. KA-Cha!

Sensing.

  • Coordinate info from the cv.jit.faces matrix is used to calculate the position of the face with respect to the screen
  • That coordinate info is then used to determine whether the face is in the LEFT, RIGHT, UP, or DOWN quadrant of the screen, or whether there is no face at all (a rough sketch of this logic follows the list)
  • MSP can detect whether sounds are made through the microphone
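Since the patch itself is visual, here is a minimal Python/OpenCV sketch of the quadrant logic described above. OpenCV's Haar cascade stands in for cv.jit.faces, and the margin value is an assumption of mine, not what the actual patch uses:

```python
import cv2

def quadrant(face, frame_w, frame_h, margin=0.15):
    """Classify a face box (x, y, w, h) relative to the screen center."""
    x, y, w, h = face
    dx = (x + w / 2) / frame_w - 0.5   # horizontal offset from center, -0.5..0.5
    dy = (y + h / 2) / frame_h - 0.5   # vertical offset from center
    if abs(dx) < margin and abs(dy) < margin:
        return "CENTER"
    if abs(dx) >= abs(dy):             # whichever axis is further off wins
        return "LEFT" if dx < 0 else "RIGHT"
    return "UP" if dy < 0 else "DOWN"

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    label = "NO FACE" if len(faces) == 0 else quadrant(
        faces[0], frame.shape[1], frame.shape[0])
    cv2.putText(frame, label, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("selfie instructor", frame)
    if cv2.waitKey(30) == 27:          # Esc to quit
        break
cap.release()
```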

Effecting.

  • The relative position of the face (LEFT, RIGHT, UP, or DOWN quadrant) from Sensing is then used to:
    • Invoke audio playback of the corresponding tracks on a playlist in MSP
    • Display instructions on the screen matching the audio playback
    • So it basically tells the user to move toward the center of the screen depending on which quadrant the face is in, and also alerts them that the app is running and to look at the screen should they be looking away (a sketch of this mapping follows the list)
  • When the user is in the centered position, the screen displays “OKAY”, and the MSP sound detection through the microphone mentioned earlier is activated. The user can then tap or say something at the screen to have their photo taken.
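Again, as a rough Python stand-in for the patch: a table mapping each quadrant to an instruction and a track (the cue wording and file names here are made up, not the actual tracks), plus a crude amplitude gate like the microphone "shutter" trigger, using the sounddevice library:

```python
import numpy as np
import sounddevice as sd

# Quadrant -> (on-screen instruction, audio cue). Wording and paths are guesses.
CUES = {
    "LEFT":    ("Move right!", "tracks/move_right.wav"),
    "RIGHT":   ("Move left!",  "tracks/move_left.wav"),
    "UP":      ("Move down!",  "tracks/move_down.wav"),
    "DOWN":    ("Move up!",    "tracks/move_up.wav"),
    "NO FACE": ("Look at the screen!", "tracks/look_here.wav"),
    "CENTER":  ("OKAY", None),        # no nagging; arm the shutter instead
}

def sound_detected(threshold=0.2, seconds=0.1, samplerate=44100):
    """Crude mic gate: True if the RMS level of a short recording exceeds
    the threshold -- the 'make a sound to take the picture' trigger."""
    audio = sd.rec(int(seconds * samplerate), samplerate=samplerate, channels=1)
    sd.wait()
    return float(np.sqrt(np.mean(audio ** 2))) > threshold
```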

 

It was great fun doing this project, and I kind of wanted to have a bit of humor in it, as you can see, because I like the idea of machines scolding people and making them do stuff. It was also good to finally attempt combining Max and MSP in the same patcher. Having a proper layout was essential in helping the process along, since it gave me a clear visualization of the dataflow.

Of course, there are still some issues. For example, when displaying the images there is a glitchy effect, because bangs are output at a steady rate whenever the user enters any of the quadrants. Although in this case it kind of worked to the assignment’s advantage, as it creates a sense of urgency to follow the instructions being yelled at the user.

However, it would be good to know if there are more elegant methods of controlling these flows. I know that onebang can be used as a stopper valve, and I actually used it for the center quadrant. For the side quadrants, though, you don’t want too long a delay before resetting the onebang, or it will not reset quickly enough to react to the next position the user moves into. For the center it is fine, because the user will stay still to have their picture taken (a rough sketch of this idea is below). So that is my reflection on this assignment, and I hope to learn more subsequently!
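For what it's worth, here is how that gating idea could be sketched in Python. In Max, onebang passes one bang and then blocks until it is reset; wiring it to reset itself after a delay behaves like the refractory period below. The reset times are guesses, but they capture the trade-off: short for the side quadrants, long for the center.

```python
import time

class OneBang:
    """Pass the first bang, then swallow bangs until `reset_after` seconds
    elapse -- roughly a Max onebang wired to reset itself via a delay."""
    def __init__(self, reset_after):
        self.reset_after = reset_after
        self.last_fired = float("-inf")

    def bang(self):
        now = time.monotonic()
        if now - self.last_fired >= self.reset_after:
            self.last_fired = now
            return True     # bang passes: play the track / flash the text once
        return False        # swallowed: the previous cue is still playing

# Short resets keep the side quadrants responsive as the user moves around;
# a long reset is fine for CENTER, where the user holds still for the photo.
gates = {q: OneBang(0.5) for q in ("LEFT", "RIGHT", "UP", "DOWN")}
gates["CENTER"] = OneBang(2.0)
```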

The process video log is below! Cheers 🙂