RM2 – Graffiti Remake

For this project, a few things had to be changed and adapted for the final interaction output. The main things that needed adjusting were:

    1. The colour of the circle drawn around each detected blob
    2. The area range of the blobs detected

I played with the distance from the camera to the projection surface, the distance from which the laser pointer was used, and the area of the blobs detected. Based on these conditions, I fixed the camera position first. The final projection was a little off because the HDMI cable restricted the setup and I could not get the camera and projector to align. After experimenting with the values, I arrived at the final area parameters shown below. These parameters make sure every laser blob is detected while stopping the programme from reading the projected circles, which cluster together into one big blob.
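The blob detection itself happens inside the Max tracking objects, so there is no code for it in the patch, but the area filter works along these lines. Here is a minimal Python/OpenCV sketch of the same idea, assuming a laser dot that is much brighter than the projection; the MIN_AREA and MAX_AREA values are placeholders, not my actual parameters.

```python
import cv2

# Hypothetical thresholds -- my actual parameters are in the
# screenshot above.
MIN_AREA = 20     # ignore specks smaller than a laser dot
MAX_AREA = 400    # ignore big clusters such as the projected circles

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The laser dot is far brighter than the projection, so a high
    # threshold isolates it.
    _, mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if MIN_AREA <= area <= MAX_AREA:  # the area filter
            (x, y), r = cv2.minEnclosingCircle(c)
            # Draw in a dim purple (BGR) so the projected circle is
            # less likely to be re-detected as a blob.
            cv2.circle(frame, (int(x), int(y)), int(r) + 4, (80, 0, 80), 2)
    cv2.imshow("graffiti", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```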

I also changed the circle colour to a slightly dimmer one to minimise the chance of the projected circles being read as blobs. In the end, I settled on a dark purple.

The following is documentation of the interaction. As mentioned, I didn't manage to align the camera and projector, so the final output is not aligned.


RM1 | Mirror ReMake

While researching how to progress and trying to understand what each node in this face-tracking project does, I came across an object called jit.world. It shows the camera feed in a separate window, which allowed me to go into fullscreen mode. With that in mind, I decided to try working with jit.world first for the fullscreen mode.

I added the basic nodes that allow the camera to be turned on and off. I also realized that the camera dimensions were not to scale; to solve that, I played around with the transform option and found that the second setting fits the video to the screen.

I also quickly realized that the camera was capturing a mirrored view. To fix that, jit.dimmap @invert had to be added to flip the video matrix. I also had to downsize the camera frame with jit.matrix for more efficient and more reliable tracking, and I played around with the dimensions to find one that is not too blurry yet still delivers good enough face tracking.
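Since the actual patch is visual, here is a hedged Python/OpenCV sketch of the equivalent capture pipeline: a fullscreen output window (standing in for jit.world), a horizontal flip (standing in for jit.dimmap @invert), and a downsampled copy for tracking (standing in for the smaller jit.matrix). The 320x240 tracking resolution is an assumption, not the value I used.

```python
import cv2

cap = cv2.VideoCapture(0)

# Fullscreen output window, standing in for jit.world's fullscreen mode.
cv2.namedWindow("mirror", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("mirror", cv2.WND_PROP_FULLSCREEN,
                      cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Horizontal flip to undo the webcam's mirrored view,
    # standing in for jit.dimmap @invert.
    frame = cv2.flip(frame, 1)
    # Downsampled copy for the face tracker, standing in for the
    # smaller jit.matrix; 320x240 is a hypothetical resolution.
    tracking_frame = cv2.resize(frame, (320, 240))
    cv2.imshow("mirror", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```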

When getting the size of the tracking bounding box, a reverse subtraction of the smaller coordinate value from the bigger one is needed. The resulting value is what determines the size of the head detected, and thus the brightness of the video. At this point, I added the scale node so I could turn the data into 0-1 values for the brightness. However, the values only jumped between 0 and 1; there was no range in between. I decided to revisit the example patch and realized that instead of using only one of the coordinate differences and leaving the other free, the two numbers have to be multiplied to get the entire area of the detection square, which gives the full range of values.
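Written out as code rather than patch cords, the calculation looks something like this Python sketch. The min_area and max_area calibration values are hypothetical placeholders, and the inverse mapping reflects the fact that a nearer face dims the video:

```python
import cv2

def area_to_brightness(x1, y1, x2, y2,
                       min_area=2000.0, max_area=80000.0):
    """Map a face bounding box to a 0-1 value.

    min_area and max_area are hypothetical calibration values: the
    box areas expected for a face at the far and near ends of the room.
    """
    width = x2 - x1    # the "reverse subtraction" of the coordinates
    height = y2 - y1
    # Using one difference alone gave me only 0 or 1; multiplying
    # both differences into an area gives the continuous range.
    area = width * height
    t = (area - min_area) / (max_area - min_area)
    return max(0.0, min(1.0, t))  # clamp to the 0-1 range

def dim_for_face(frame, x1, y1, x2, y2):
    # A nearer (larger) face dims the video, as described above.
    brightness = 1.0 - area_to_brightness(x1, y1, x2, y2)
    return cv2.convertScaleAbs(frame, alpha=brightness, beta=0)
```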

For quite some time, I could not figure out what went wrong: I could see the values changing, but the brightness of the video was not. I experimented with a variety of connections. In the end, I managed to get it to work, but I am not sure why or how; it was more trial and error than anything.

Only the combination above worked; the other three combinations did not.

With regards to face detection, I found that even with multiple faces detected, the program only reads the values for the nearest face. Hence, even with one face at the back and one face near the camera, the video will still dim itself. When testing with a single user, I also discovered that if your spectacles are somewhat square (like mine), the face detection might sometimes read them as a face too, causing it to be less responsive.
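I don't know exactly how the Max face-tracking object chooses between multiple faces, but "nearest face wins" is consistent with simply taking the largest detection, as in this hypothetical OpenCV sketch:

```python
import cv2

# Stock frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def nearest_face(gray_frame):
    """Return the largest (and so presumably nearest) face box, or None."""
    faces = cascade.detectMultiScale(gray_frame,
                                     scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Pick the (x, y, w, h) box with the biggest area. A square pair
    # of spectacles that gets misdetected as a face can also show up
    # in this list, which matches the glitch I observed.
    return max(faces, key=lambda f: f[2] * f[3])
```

Below is a video of the interaction.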