RM4 – Eyes Remake

This project was by far the most challenging of the 4 remakes. Even with the help of the example code, I still struggled a lot.

At first, I had trouble understanding the code provided, so I tried to use blob detection to detect the head as a whole, with the blob coordinates as the values used to turn the servos. However, the blob detection did not work well: it was difficult to filter out a single blob, and it was unpredictable which part of the face the blob was actually tracking. This method was too unreliable, so I decided to go back to the example code we were provided with.

Testing with ZigSim

I decided to work on the pan axis only for the remake because I was not confident I could make both axes work. So first, I needed to test the servos with ZigSim to make sure that the servos actually worked and that the 2 scripts could communicate. In the example code for the servos, I changed the rpi.ip to the IP address of my Raspberry Pi as follows:

I also connected my wires as red (#4), yellow (#5) and black (#9). These are physical pin numbers, not GPIO numbers. Since the yellow signal wire sits on physical pin 5, which corresponds to GPIO 3, I also had to change the pinPan number to 3:
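Roughly, the servo side ends up looking like the sketch below. This is only a sketch of the idea, not the actual example code: I am assuming the scripts use python-osc and pigpio, and the OSC address "/pan" and port 8000 are placeholders of mine.

    # Sketch of the servo-side script running on the Raspberry Pi.
    import pigpio
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    RPI_IP = "172.20.10.12"   # rpi.ip changed to my Raspberry Pi's address
    PIN_PAN = 3               # physical pin 5 = GPIO 3, hence pinPan = 3

    pi = pigpio.pi()          # needs "sudo pigpiod" to already be running

    def on_pan(address, angle):
        # Map an angle of 0-180 degrees onto a 500-2500 us servo pulse.
        pulse = 500 + (float(angle) / 180.0) * 2000
        pi.set_servo_pulsewidth(PIN_PAN, pulse)

    dispatcher = Dispatcher()
    dispatcher.map("/pan", on_pan)
    BlockingOSCUDPServer((RPI_IP, 8000), dispatcher).serve_forever()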

On the sensing side, there was not much to change except for the rpi.ip, which is “172.20.10.12”, and the IP address of my computer, “172.20.10.11”, which was used to connect ZigSim to my computer. One very important thing to note is to always enter “sudo pigpiod” in the terminal whenever PyCharm is restarted. I kept forgetting this part, resulting in an AttributeError whenever I tried to get the motor and sensing scripts to communicate.
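The sensing side for the ZigSim test is then roughly the sketch below. Again this is only an illustration under assumptions: I am assuming python-osc, the port numbers are placeholders, and which ZigSim value ends up as the pan angle depends on the sensor selected in the app.

    # Sketch of the sensing-side script running on my computer.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    RPI_IP = "172.20.10.12"        # where the servo script is listening
    COMPUTER_IP = "172.20.10.11"   # where ZigSim sends its messages

    client = SimpleUDPClient(RPI_IP, 8000)   # forwards the angle to the Pi

    def on_zigsim(address, *args):
        pan_angle = args[0]                  # 0-180 in this test setup
        client.send_message("/pan", pan_angle)

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(on_zigsim)   # ZigSim addresses include the device ID
    BlockingOSCUDPServer((COMPUTER_IP, 50000), dispatcher).serve_forever()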


The next obstacle was the movement of the servos. I managed to get the communication between ZigSim and the servos going, but the rotation of the servos was unstable and laggy. I thought it was a problem with the code, but it turned out that ZigSim was sending messages to the script at only 1 message per second, which resulted in a slow response on the servo side. I changed the message rate to 60 per second and the movement became smooth. Hence, I could deduce that the code had no problem and I could move on to the facial recognition.

 

Testing with Facial Recognition

On the servo side, I did not have to change the code, so I focused on editing the sensing script. First, I removed all the lines relating to ZigSim because I did not need them anymore. I then added the facial recognition script from RM3 to the sensing script as shown below. I edited the msg.add_arg(pAngle) line from before into the line seen in the screenshot below, which instructs the script to use the coordinates of the facial-recognition point as the values sent to the servo script.
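In outline, the change is that the value packed into the OSC message now comes from face tracking instead of ZigSim. The sketch below shows the shape of this, assuming python-osc as above; the "/pan" address and the send_pan helper are placeholders of mine, not names from the example code.

    from pythonosc.osc_message_builder import OscMessageBuilder

    def send_pan(client, pAngle):
        # pAngle is now derived from the tracked face point, not from ZigSim.
        msg = OscMessageBuilder(address="/pan")
        msg.add_arg(pAngle)
        client.send(msg.build())   # client is the SimpleUDPClient pointed at the Pi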

I compared the facial recognition script from RM3 to the one I changed in RM4:

Comparing the two, I deduced that normx in RM3 plays the same role as pAngle here, so I added the corresponding definition for pAngle to the script as well:

I finally managed to get the facial recognition going. For the purpose of this remake, I only needed the coordinates of one point. Hence, I chose point #30 because it is located in the middle of the face.
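Put together, the sensing value can be derived from that one landmark roughly like this. It is a sketch assuming the RM3 script uses a dlib-style 68-point shape predictor (where point #30 is the nose tip); frame_width is a placeholder for whatever the sensing script already knows about the camera frame.

    def pan_from_landmarks(shape, frame_width):
        x = shape.part(30).x      # point #30, roughly the middle of the face
        normx = x / frame_width   # normalised to 0-1 across the camera frame
        pAngle = normx            # the deduced definition: normx = pAngle
        return pAngle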

Eventually, I managed to get the 2 scripts to communicate, but for some reason the servos were not moving. I compared the values sent when ZigSim was used with the ones being sent over by the facial recognition, and found that while the angles sent by ZigSim were between 0 and 180, the values sent by the facial recognition were only around 0.5. At first I guessed that the values might be in radians, but since normx is a normalised coordinate, they actually sit between 0 and 1. Either way, I searched online and most of the information regarding the rotation of servos was in degrees. Hence, I decided to scale the values up into the degree range by multiplying them, and it worked.
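The fix is essentially one line; the factor of 180 below is my assumption based on the 0-180 range seen in the ZigSim test, not a value taken from the example code.

    def to_degrees(pAngle):
        # pAngle is the normalised 0-1 tracking value; 180 matches the servo's range.
        return pAngle * 180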

Final Output

I thought that the rotation of the servo with respect to our head movements resembled a CCTV camera, so I made one and stuck it on the servo.

RM2 – Graffiti Remake

For this project, there were a few things to change and adapt with regard to the final interaction output. The main things that needed to be adjusted were:

    1. Colour of the circle around the blob detected
    2. Area of the blob detected

I played with the distance of the camera to the projected surface, the distance from where the laser pointer was used, and the area of the blobs detected. Based on the above conditions, I fixed the position of the camera first. The final projection was a little off because there was a restriction with the HDMI cable and I could not get the camera and projector to align. After experimenting with the values, I arrived at the final area parameters as shown below. These parameters make sure every laser blob is detected while also stopping the programme from reading the projected circles, which cluster together into one big blob.
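To illustrate the area filtering idea in Python/OpenCV terms (the thresholds below are placeholders, not the exact values I ended up with, and the remake itself is not necessarily built on OpenCV):

    import cv2

    # Keep only blobs whose area covers a single laser dot but excludes the
    # large clusters formed by the projected circles merging together.
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 20
    params.maxArea = 400
    detector = cv2.SimpleBlobDetector_create(params)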

I also changed the circle colour to one that is slightly dimmer to minimise the chance of the projected circles being read as blobs. In the end, I settled on a dark purple colour.
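In the same illustrative terms, drawing the marker in a dim purple is just a matter of the colour value (the BGR tuple below is an approximation, not the exact colour used):

    import cv2

    def draw_marker(canvas, x, y):
        # Dark purple (BGR), dim enough that the projection is not re-detected as a blob.
        cv2.circle(canvas, (x, y), 25, (80, 0, 60), -1)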

The following is a documentation of the interaction. I didn’t manage to align the camera and projector, so the final output is not aligned.

 

RM1 – Mirror Remake

While researching how to progress and trying to understand what each node does in this face tracking project, I came across an object called jit.world. It allows the camera feed to show in a separate window, which let me go into fullscreen mode. With that in mind, I decided to try working with jit.world first for the fullscreen mode.

I added the basic nodes which allowed the camera to turn on and off. I also realized that the camera dimensions were not to scale; to solve that, I played around with the transform options and found that the second one fits the video to the screen.

I also quickly realized that the camera was capturing a mirrored view. To fix that, jit.dimmap @invert had to be added to flip the video matrix. I also had to downsize the camera frame with jit.matrix for more efficient and better tracking results. I played around with the dimensions to find one that is not too blurry and yet still delivers good enough face tracking.

When getting the size of the tracking bounding box, the smaller coordinate value needs to be subtracted from the larger one. The resulting value is what determines the size of the head detected and thus the brightness of the video. At this point, I added the scale node so I could turn the data into 0-1 values for the brightness. However, the values only jumped between 0 and 1; there was no range in between. I decided to revisit the example patch and realized that instead of using only one of the coordinate differences and leaving the other free, the two have to be multiplied together to get the whole area of the detection square, which gives a proper range of values (see the sketch below).
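To make the arithmetic explicit, the patch logic re-expressed in Python looks roughly like this (an illustration only; the min/max area bounds stand in for whatever range the scale node is given):

    def brightness_from_box(x1, y1, x2, y2, min_area=1000, max_area=80000):
        # Width and height of the tracking box, then their product as the area.
        area = (x2 - x1) * (y2 - y1)
        # Scale the area into 0-1, clamped, to drive the video brightness.
        t = (area - min_area) / (max_area - min_area)
        return max(0.0, min(1.0, t))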

For quite some time, I could not figure out what was wrong, as I could see the values changing but the brightness of the video was not. I experimented with a variety of connections. In the end, I managed to get it to work, but I am not entirely sure why and how; it was more of a trial and error process.

Only the combination above worked; the other 3 combinations did not.

With regards to face detection, I found out that even with multiple faces detected, the program will only read the values for the nearest face. Hence, even with one face at the back and one face near the camera, the video will still dim itself. When testing for a single user, I also discovered that if your spectacles are somewhat square (like mine), the face detection might sometimes read them as a face too, causing it to be less responsive. Below is a video of the interaction.