Semester Project – Ms Manners

For the semester project, I wanted to make use of what I learnt in the previous remakes. Given that the time frame for the semester project is quite short, I also decided to keep things simple. Hence, I came up with this project, which combines the motion tracking translated into servo movements from Remake 4 with Wekinator from Remake 3.

Ms Manners is a project derived from the idea of reciprocating when others are courteous and greet you, inspired by the culture of courtesy in Japan. In this project, a single servo with a doll attached to its arm mirrors a bow whenever the camera senses a bow from the person.

Code Flow

The project flow starts with the sensing side. I used MediaPipe Holistic to sense the whole body's movement because I wanted to register a particular body position changing, not just a single point moving; if I had used only a single point, it would have been the same as Remake 4, and I wanted to try something different. MediaPipe Holistic provides 543 landmarks, covering the general body pose points, the detailed hand points and the detailed facial points. For the purpose of this project, I only needed the body pose, so I took into account just the 33 body points.
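A minimal sketch of the sensing side, assuming the python-osc library and Wekinator listening on its default input port 6448 and address "/wek/inputs" (the port and address in my actual script may have differed):

```python
import cv2
import mediapipe as mp
from pythonosc import udp_client, osc_message_builder

client = udp_client.SimpleUDPClient("127.0.0.1", 6448)   # Wekinator's default input port
holistic = mp.solutions.holistic.Holistic()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # Only the 33 body landmarks are used; the hand and face points are ignored.
        msg = osc_message_builder.OscMessageBuilder(address="/wek/inputs")
        for lm in results.pose_landmarks.landmark:
            msg.add_arg(lm.x)   # normalised x coordinate
            msg.add_arg(lm.y)   # normalised y coordinate
        client.send(msg.build())
```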

The points are processed in Wekinator. I set the algorithm in Wekinator to produce 1 output with 2 values, 0 and 1. When the body is upright, the value is 0; when the body is in a bowing position, the value is 1. The script is programmed so that when 0 is received, a value of 70 degrees is sent to the Raspberry Pi to move the servo motor, and when 1 is received, a value of 160 degrees is sent over. For some reason, when 170 or 180 degrees was sent over, the servo motor would not respond at all.
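A hedged sketch of that bridging logic, assuming Wekinator sends its classifier output to its default port 12000 and address "/wek/outputs", and that the Raspberry Pi runs an OSC listener; the Pi's port and the "/servo" address here are placeholders, not necessarily what my script used:

```python
from pythonosc import udp_client
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

pi = udp_client.SimpleUDPClient("172.20.10.12", 8000)   # placeholder Pi address/port

def on_wek_output(address, value):
    # Class 0 = upright, class 1 = bowing.
    angle = 160 if round(value) == 1 else 70
    pi.send_message("/servo", angle)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wek_output)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```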

Problems Faced

The first problem I faced was the impact of indentation. In the first image, the indentation of the message loop was not done properly. As a result, only 2 inputs were sent to Wekinator when there should have been 64. After fixing the indentation, Wekinator finally received the correct number of inputs from the sensing script and the algorithm was ready to run.
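In miniature, the fix looks like this (reusing the names from the sensing sketch above): the add_arg calls have to sit inside the landmark loop, otherwise only the last landmark's x and y end up in the message and Wekinator sees just 2 inputs.

```python
for lm in results.pose_landmarks.landmark:
    msg.add_arg(lm.x)   # indented inside the loop: every landmark gets added
    msg.add_arg(lm.y)
client.send(msg.build())   # send once, after the whole loop has run
```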

Another problem was the servo motor's degrees, as mentioned. I tried 180 degrees at first, but the servo motor did not seem to recognise this angle and it did not move. The same thing happened with 170 degrees; the motor only started moving when I tried 160. Since the original values were 90 degrees when upright and 180 when bowing, I changed the final values to 70 when upright and 160 when bowing to keep the 90-degree bow effect.
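For reference, this is a hedged sketch of how the angle can be handled on the Pi side with pigpio; the GPIO number and the 500-2500 microsecond mapping are assumptions, not necessarily what my script used:

```python
import pigpio

PAN_GPIO = 3                  # placeholder GPIO number
pi = pigpio.pi()              # needs "sudo pigpiod" running on the Pi

def set_angle(angle):
    angle = max(0, min(180, angle))           # clamp to the servo's range
    pulse = 500 + (angle / 180.0) * 2000      # map 0-180 degrees to 500-2500 us
    pi.set_servo_pulsewidth(PAN_GPIO, pulse)

set_angle(70)    # upright
set_angle(160)   # bowing
```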

After solving that problem, the rest of the steps were not as difficult. Below is the final video for Ms Manners.

RM4 – Eyes Remake

This was by far the most challenging of the 4 remakes. Even with the help of the example code, I still struggled a lot.

At first, I had trouble understanding the code provided, so I tried using blob detection to detect the head as a whole, with the blob coordinates as the values used to turn the servos. However, the blob detection did not work: it was difficult to isolate a single blob, and it was unpredictable which part of the face the blob was tracking. This method was very unreliable, so I decided to go back to the example code we were provided with.

Testing with ZigSim

I decided to work on the pan axis only for this remake because I was not confident I could make 2 axes work. First, I needed to test the servos with ZigSim to make sure that the servos actually worked and that the 2 scripts could communicate. In the "servos" example code, I changed rpi.ip to the IP address of my Raspberry Pi as follows:

I also connected my wires as red (#4), yellow (#5) and black (#9). These are physical pin numbers, not GPIO numbers, and physical pin 5 corresponds to GPIO 3, so I also had to change the pinPan number to 3:
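In case the screenshots are hard to make out, the two edits amount to something like this (the surrounding structure of the example script is assumed):

```python
rpi_ip = "172.20.10.12"   # what the example code refers to as rpi.ip
pinPan = 3                # yellow signal wire on physical pin 5 = GPIO 3
```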

On the sensing side, there was not much to change except for rpi.ip, which is "172.20.10.12", and the IP address of my computer, "172.20.10.11", which was used to connect ZigSim to my computer. One very important thing to note is to always enter "sudo pigpiod" in the terminal whenever PyCharm is restarted. I kept forgetting this part, resulting in an AttributeError whenever I tried to get the motor and sensing scripts to communicate.
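A small guard like the one below (a sketch, not part of the original script) would have caught the forgotten command, since pigpio.pi() reports whether it managed to reach the daemon:

```python
import pigpio

pi = pigpio.pi()
if not pi.connected:
    raise SystemExit("pigpio daemon not running - run 'sudo pigpiod' first")
```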


The next obstacle was the movement of the servos. I managed to get the communication between ZigSim and the servos going, but the rotation of the servos was unstable and laggy. I thought it was a problem with the code, but it turned out that ZigSim was sending messages to the script at 1 message per second, which resulted in a slow response on the servo side. I changed the message rate to 60 per second and the movement became smooth. From this I could deduce that the code had no problem, and I could move on to the facial recognition.

 

Testing with Facial Recognition

On the servo side, I did not have to change the code, so I focused on editing the sensing script. First, I removed all the lines relating to ZigSim because I no longer needed them. I then added the facial recognition script from RM3 to the sensing script as shown below, and edited the msg.add_arg(pAngle) line from before into the line seen in the screenshot below. This instructs the script to use the coordinates of the facial recognition point as the values sent to the servo script.

I compared the facial recognition script from RM3 to the one I changed in RM4:

I deduced that normx corresponds to pAngle, so I added a definition for pAngle to the script as well:

I finally managed to get the facial recognition going. For the purpose of this remake, I only needed the coordinates of one point. Hence, I chose point #30 because it is located in the middle of the face.

Eventually, I managed to get the 2 scripts to communicate, but for some reason the servos were not moving. I compared the values sent when ZigSim was used with the ones sent by the facial recognition and found that while the angles from ZigSim were between 0 and 180, the values from the facial recognition were around 0.5. I guessed that maybe the values sent by the facial recognition were in radians. I searched online and most of the information regarding servo rotation was in degrees, so I decided to convert the values sent over into degrees by multiplying them, and it worked.
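As a rough sketch of that scaling (face_points and frame_width are hypothetical helper names, and the exact multiplier is an assumption; normx, pAngle and msg.add_arg are the names mentioned above):

```python
normx = face_points[30][0] / frame_width   # x of point #30, normalised to 0-1
pAngle = normx * 180                       # scaled up to the servo's 0-180 degree range
msg.add_arg(pAngle)
```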

Final Output

I thought that the rotation of the servo following our head movements resembled a CCTV camera, so I made one and stuck it on the servo.

RM3 – Cheese Remake

For the first step, I had to make some minor changes to the example code in order to get the face recognition to work. One of these was resizing the frame window to match the exact proportions of the screen and prevent the face from looking squashed. I am not sure whether the output seen in the window affects the facial recognition, but I readjusted it just in case. The comparison can be seen below.
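A minimal sketch of the resizing, assuming OpenCV; the target dimensions are placeholders, not the exact values I used:

```python
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (1280, 720))   # match the screen's proportions
    cv2.imshow("Cheese", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```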

The next step was to combine the 2 sample scripts for face detection and output. I managed to check that it was working, but the face detection was not very stable. At times the detection was very smooth, but on other occasions the facial markings had to be very precise to change the value.

Next up, I wanted to add sound, but for some reason I could not seem to install PyAudio. Hence, I found another method, which was to use Pygame to play the sound instead.
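A minimal sketch of the Pygame approach, with a placeholder filename for the sound clip:

```python
import pygame

pygame.mixer.init()
shutter = pygame.mixer.Sound("camera_shutter.wav")   # placeholder filename
channel = shutter.play()
while channel.get_busy():        # keep the script alive until the clip finishes
    pygame.time.wait(50)
```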

The following is the final video:

RM2 – Graffiti Remake

For this project, there were a few things to change and adapt with regards to the final interactive output. The main things that needed to be adjusted were:

    1. Colour of the circle around the blob detected
    2. Area of the blob detected

I played with the distance from the camera to the projected surface, the distance the laser pointer started from, and the area of blobs detected. Based on these conditions, I fixed the position of the camera first. The final projection was a little off because the HDMI cable restricted where things could go and I could not get the camera and projector to align. After experimenting with the values, I arrived at the final area parameters shown below. These parameters make sure every laser blob is detected while stopping the programme from reading the projected circles, which cluster together into one big blob.
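The exact tooling of the original script may differ, but as an illustration, an area filter like the one described could look like this in Python with OpenCV's SimpleBlobDetector (the min/max values are placeholders, not my final numbers):

```python
import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255     # look for bright blobs, i.e. the laser dot
params.filterByArea = True
params.minArea = 20        # placeholder: big enough to skip sensor noise
params.maxArea = 500       # placeholder: small enough to reject the merged projected circles
detector = cv2.SimpleBlobDetector_create(params)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints = detector.detect(gray)
    print(f"{len(keypoints)} blob(s) detected")
```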

I also changed the circle colour to a slightly dimmer one to minimise the chances of the projected circles being read as a blob. In the end, I settled on a dark purple.

The following is a documentation of the interaction. I didn’t manage to align the camera and projector, so the final output is not aligned.

 

RM1 | Mirror ReMake

While researching how to progress and trying to understand what each node does in this face tracking project, I came across a function called jit.world. It lets the camera feed show in a separate window, which can go into fullscreen mode, so I decided to start by working with jit.world for the fullscreen view.

I added the basic nodes that allow the camera to be turned on and off. I also realized that the camera dimensions were not to scale; to solve that, I played around with the transform option and found that the second setting fits the video to the screen.

I also quickly realized that the camera was capturing a mirrored view. To fix that, jit.dimmap @invert had to be added to flip the video matrix. I also had to downsize the camera frame with jit.matrix for more efficient and better tracking results, and played around with the dimensions to find one that is not too blurry yet still delivers on the face tracking well enough.

When getting the radius measurements of the tracking bounding box, the smaller coordinate value has to be subtracted from the bigger one. The resulting value is what determines the size of the head detected and thus the brightness of the camera. At this point, I added the scale node so I could turn the data into 0-1 values for the brightness indicator. However, the values only jumped between 0 and 1 rather than covering a range. I revisited the example patch and realized that instead of using only one value from the coordinate difference and leaving the other free, the width and height have to be multiplied to get the area of the detection square, which gives a proper range of values.
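The patch itself is hard to show as text, but written out in Python for clarity, the math is roughly this (the maximum area used for scaling is an assumption):

```python
def brightness_from_box(left, top, right, bottom, max_area=320 * 240):
    width = right - left            # subtract the smaller coordinate from the bigger
    height = bottom - top
    area = width * height           # area of the detection square
    return min(area / max_area, 1.0)   # scaled into a 0-1 value for the brightness
```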

For quite some time, I could not figure out what was wrong, as I could see the values changing but the brightness of the video was not. I experimented with a variety of connections. In the end, I managed to get it to work, but I am not sure why or how; it was mostly trial and error.

Only the combination above worked; the other 3 combinations did not.

With regards to face detection, I found that even with multiple faces detected, the program only reads the values for the nearest face. Hence, with one face at the back and one face near the camera, the video will still dim itself. When testing with a single user, I also discovered that if your spectacles are somewhat squarish (like mine), the face detection might sometimes read them as a face too, causing it to be less responsive. Below is a video of the interaction.


To Bauhaus and Beyond Reflection

In this week’s lecture, I really enjoyed the works of Piet Mondrian, and more specifically the art style he uses. Even though the motifs are simple, straightforward and geometric, the contrast of the lines and primary colours gives them a very vibrant and attractive look. I like the sense of rhythm the work presents viewers with.

Personally, I really like the use of primary colours because the contrast produces harmony and is pleasing to the eye. I also really like the products inspired by Piet Mondrian and his works; they give off a cheery yet not tacky look, with a modern touch to it. This design language can also be incorporated into fashion, permutating each clothing design by changing the colour block sizes, line placement and number of lines. The possibilities of this design are endless.

Reflection – Industrial Reflection and Graphic Reactions

What interests me the most about this lecture is the topic of Ukiyo-e. Also known as “The Floating World”, Ukiyo-e is essentially a style of Japanese woodblock prints and paintings from the Edo period. It depicts everyday life in Japan, courtesans, romantic landscapes and erotica.

I enjoy looking at Ukiyo-e because its subject matter is very fascinating. Looking at the artworks, it is almost as if fantasy tales of geishas, ninjas and samurais come to life. The artworks depict them in action, almost like a scene in a storybook. I also really like the sceneries depicted: a majestic wave, Mt Fuji and a Japanese village in winter can be spotted easily. Looking at these light-hearted depictions of Japanese life takes one on a journey through ancient Japan; these prints and paintings are like an illustrated documentary of ancient Japanese life. Every new artwork has its own charm and allure, and the colours used in Ukiyo-e can be very calming and pleasant to look at.

Reading Assignment Reflection

I find this particular article very interesting. To many, games are just games, a mere form of entertainment to pass the time. Children find them fun, while parents think games are a bad influence. But beyond all that, games are more than what we see on the surface. Different games have different appeals, each creating its own little world where players get lost in an immersive experience.

 

Background information from article:

Video games became popular in the early 1980s and became a way in which children, teenagers and adults encountered the computer. How we interact with computers influences our view of the world and our perspective on ourselves. Many deem games a bad influence on children because, during and after playing, children act up and seem addicted to the game. By 1982, people spent more money on video games than on movies and records combined. Sherry writes that video games are windows into a new kind of intimacy with machines, a factor of the newly emerging computer culture.

 

Reflection:

I agree that video games have a kind of hypnotic fascination, a kind of “computer holding power” as the author puts it. Video games can be attractive for many reasons depending on the nature of the game, but one main idea which ties all video games together is that they are interactive computer microworlds. Each world has its own rules governing the game, and these rules shape the nature of the game and its appeal to players.

Take, for example, the game “Pac-Man”, the first game to be acknowledged as part of the national culture. It is such a simple game, yet there is so much more to it. Playing it requires quick reflexes and good hand-eye coordination, but what is more important is for players to figure out the rules that govern the way Pac-Man and the pursuing monsters behave. The game seems very easy to understand and play, yet mastering it requires a certain level of skill from the player. A good Pac-Man player knows to constantly alternate between offence (eat energy cookies to eat monsters) and defence (avoid monsters and eat the dots) to advance in the game. Although to many people Pac-Man might be just a video game, it may actually be more similar to an intellectual game like chess than we realise. I feel that more than just playing the game, the key to winning and advancing is observing the patterns in the game and finding the set of rules which governs it.

There are no limits to how a game can appear: the objects (items, characters, settings) can fly, accelerate, turn, change shape and colour; they can do or be anything. The only limitation is the designer’s imagination and capabilities (which can always be increased through learning).

Video games are not limited by the rules which govern the real world, such as gravity, and this allows games to become a more accurate expression of the designer’s intentions and the player’s actions.

One game which is a good example of how video games are not restricted by the real world is pinball. In the physical game, the levers rust and the machine tilts at a particular slant because of the floor and other varying factors that differ every time we play; the video game version omits all these varying factors, and the surroundings do not affect the game. As the article says, “It is always the same, reacting almost spontaneously.” Another advantage of the video game counterpart is that it records everything; most importantly, it records the high scores and initials of those who played, something the pinball machine could not do. In my opinion, this is a hugely appealing aspect of video games, as it allows players to leave a digital footprint behind and gain the admiration and recognition of those who play the same game. Personally, I think it is a form of motivation to want to be better at the game, and it also helps foster a community with healthy competition among players. And because a video game can be programmed with a variety of designs, it is possible for the game to respond to the level of the player’s skill, increasing in difficulty as one progresses to the next level. A regular pinball machine cannot do this.

Video games provide imaginative worlds into which people enter as participants. Technological advancements have allowed game designers to run wild with their imagination, and they can do so much more than before. New graphics allow objects to be more realistic, and a game like “poker” can simulate real players with different personalities, like an actual game, instead of just robots.

Another popular kind of video game, and one which I really enjoy, is the interactive novel. Instead of just watching a story unfold as in a book or movie, players can make their own decisions: choosing the route, meeting different characters and encountering different situations. This kind of interaction creates an immersive experience, letting players choose their own fate. Games like “Choices” and “Episode” are examples of popular interactive novels where players play as the main characters in a story.

One of the most attractive parts of video games today is that they give us worlds we could once only fantasise about. Things like magic and potions, hunting monsters and going on wild adventures used to be fantasy, but now, through video games, people can live that dream. One of my favourite video games which offers this kind of escape from dull ordinary reality is Maplestory. Not constrained by the rules of the physical world, players can teleport, use magic, tame ancient beasts, encounter weird items that do not exist in real life and go on wild quests. All of these are close to impossible in the real world, but because Maplestory offers such an experience, people like me who look for a life outside of this physical world turn to video games.

However enticing video games are, we have to recognise their danger: it is easy for players to get lost in a simulated world, especially one that feels so real, to the point where some people cannot separate reality from simulation. Games offer the possibility of creating and working within an artificial world, simulating the behaviour of economies, political systems and society. A game like “Second Life” can simulate reality so well that some people get lost in the make-believe world. In Second Life, players can go to school, find a job, settle down and form a family, commit crimes and basically do the things they would in real life. The appeal of such games is their touch of realness, yet players need to take responsibility for their actions in the real world, because as real as the game gets, it is after all “just a game”.

All in all, I feel that video games are not all that bad; they are not as bad an influence as some people say. We can partake in gaming, but we need to know the difference between a game and the real world. Too much of anything is never good, and we have to know how to moderate the amount of time we spend on games. Still, it is good that games offer us a route of escape once in a while, a chance to break free from the rules of the real world.