Overview – “Beyond What We See”

As presented last week, our group is interested in making an interactive movie game based on mental health in Singapore. The consequences of the storyline will depend on the player's choices over the course of the narrative.

Our story is based on the life of Sha, currently 24 years old, who has battled depression and Borderline Personality Disorder.

Our full presentation:

narratives-for-interaction


Moving ahead from the presentation, we kick-started our project this week by finalizing the planning process. Agenda achieved:

  • Interviewed Sha (the main protagonist is based on her)
  • Storyline planning + branching choices
sha

Interviewing Sha was very pleasant, as she was open to sharing her experiences.

From our interview, we heard her personal story of her battle with mental illness: she felt left out at home, was bullied in secondary school and polytechnic, was not understood by her peers, and at times her visits to the psychiatrist proved ineffective. She is a success story: having overcome her hurdles, she is now an advocate and has built an app geared towards helping those with mental health problems.

After the interview, we had a clearer sense of how to complete the story. We have since finished it, and this is the outline of the branching plot.

21f31dd5-c323-4ff1-8c48-9421a5e14e4e

With the planning process more or less finalized, we are now ready to proceed with the making of this project.

Max Assignment 1: Mirror

Some fun documentation of using the Max mirror with friends.

(Side fact: The documentation was taken during a Birthday dinner and thus the spontaneity at the ending 😉 )

Our task was to create a mirror that dims as we move closer to it. Sensing was done with the webcam, which detects the face (via cv.jit.faces) and estimates its distance from the camera. Effecting was done by dimming the screen as the face gets closer.

screen-shot-2017-02-06-at-12-18-24-pm

Screenshot of the Max file
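Outside of Max, the dimming logic boils down to one mapping: a wider detected face means a closer face, which should mean a darker screen. Here is a conceptual Python sketch of that mapping; the 40/300 px calibration bounds are illustrative assumptions, not the values from my actual patch.

```python
def brightness_from_face_width(face_width, min_w=40, max_w=300):
    """Map a detected face width (pixels) to screen brightness in [0, 1].

    A wider bounding box means the face is closer to the webcam, so
    brightness should fall towards 0. Widths outside the calibration
    range are clamped. The 40/300 px bounds are illustrative only.
    """
    w = max(min_w, min(max_w, face_width))
    closeness = (w - min_w) / (max_w - min_w)  # 0 = far away, 1 = very close
    return 1.0 - closeness                      # dim as the face gets closer

print(brightness_from_face_width(40))   # far away  -> 1.0 (full brightness)
print(brightness_from_face_width(300))  # up close  -> 0.0 (fully dimmed)
```

In the patch itself this is a chain of Jitter objects rather than a function, but the arithmetic is the same.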

This exercise was my first time trying out Max/MSP. It was quite confusing at times, but it helped me gain a better understanding of the subject. What I took away was the concept of “sending messages”: objects pass data between their inlets and outlets, which makes the input/output flow more straightforward. There is definitely still much to learn about this.

sequence-01-00_00_16_06-still002

Example of interacting with the Mirror

There were some limitations that I got stuck on. As seen in my documentation video, the screen would often flash when my friends held their faces at odd angles, or when the computer detected multiple faces.
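One common remedy for this kind of flicker (not something I implemented in the patch) is to smooth the control value over the last few detections, so a single bad frame cannot slam the brightness around. A minimal Python sketch of that idea:

```python
from collections import deque

class Smoother:
    """Moving average over the last n readings to damp detection flicker."""

    def __init__(self, n=5):
        self.window = deque(maxlen=n)  # old readings fall off automatically

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

s = Smoother(n=3)
print(s.update(1.0))  # 1.0
print(s.update(0.0))  # 0.5 -- one dropped detection only halves the value
print(s.update(1.0))  # ~0.67, recovering
```

In Max, a similar effect can be had by interpolating towards new values (e.g. with the `line` object) instead of jumping to them.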


Max Assignment 2: Eye Tracker (Progress)

This assignment revolved around selecting the video frame based on the x-position of the face. The sensing is the camera tracking the x-coordinate of the face; the effecting is the playback of the corresponding video frame.
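The core of it is a single mapping from the face's x-coordinate to a frame number. A hedged Python sketch of that mapping follows; the 320 px camera width, the 100-frame count, and the mirror-flip are illustrative assumptions about the setup, not the exact numbers from my patch.

```python
def frame_for_x(x, cam_width=320, n_frames=100):
    """Scale a face's x-coordinate (0..cam_width) to a video frame index.

    The (cam_width - x) flip assumes we want the video to scrub in the
    same direction the viewer sees themselves move, since the raw camera
    image is mirrored relative to the viewer. Stray readings are clamped
    so the frame index always stays in range.
    """
    x = max(0, min(cam_width, x))        # clamp stray readings
    t = (cam_width - x) / cam_width      # mirror-flip, normalise to 0..1
    return min(n_frames - 1, int(t * n_frames))

print(frame_for_x(320))  # face at one edge -> frame 0
print(frame_for_x(0))    # face at the other edge -> frame 99
```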

For this assignment, I decided to use my dog, Zack, to add a personal touch.

screen-shot-2017-02-06-at-9-55-29-pm

Initial video for tracking

zackky

However, as seen above, it was hard to tell whether the video frames were moving according to the face position. Thus, I switched to this video of Zack instead.

zacky

Video I used for the final version; its movement is more linear

screen-shot-2017-02-06-at-10-10-50-pm

Second video used for tracking; Zack's linear movement makes it easier to see whether the video frames correspond to the face position

Here’s the final output:

This exercise, together with the mirror assignment, helped me grasp a better understanding of the capabilities of cv.jit.faces. The trouble I faced was calculating the values to map the tracked face position accurately to a video frame; I am still not fully certain about the calculations, and sometimes the tracking produced negative values.
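The negative values are most likely readings that fall outside the expected input range, and clamping before rescaling guards against them. A minimal sketch of that guard, with assumed ranges (Max's own `scale` object followed by `clip` achieves the same thing inside a patch):

```python
def scale_clamped(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi],
    clamping first so stray negative or oversized readings can never
    produce an out-of-range frame number."""
    v = max(in_lo, min(in_hi, value))
    t = (v - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

print(scale_clamped(-15, 0, 320, 0, 99))  # negative reading clamped -> 0.0
print(scale_clamped(160, 0, 320, 0, 99))  # midpoint -> 49.5
```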