
[Interactive II] Milestone – 16 Apr

“Cattitude”
by Putri Dina and Hannah Kwah

Milestones
Project Proposal  |  28th Mar  |  4th Apr  |  11th Apr  |  12th Apr  |  15th Apr  |  16th Apr  |  17th Apr  |  Final Documentation

Building the cat – part 3
The next step was to place the Touch sensors at specific locations that act as the cat's sensitive spots.


After placing the Touch sensors, we decided to add a fur-like texture to invite people to interact with the cat. The challenge was to make sure that the fabric lying above the Touch sensors would not trigger the sounds by itself.
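
The fix we were working towards boils down to a simple threshold: ignore the small readings caused by the resting fabric and only react to a firm press. Here is a minimal sketch of that idea in Python (the actual logic lives in our Max patch, and the numbers here are only illustrative):

```python
TOUCH_THRESHOLD = 40  # hypothetical value, above the fabric's resting reading

def is_real_touch(raw_value: int) -> bool:
    # The fabric alone produces only small readings; a deliberate press
    # pushes the value past the threshold before a sound is triggered.
    return raw_value > TOUCH_THRESHOLD

for raw in [5, 12, 60, 8]:  # example sensor readings
    print(raw, "->", "play sound" if is_real_touch(raw) else "ignore")
```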

Almost finalised look of the cat.

Moving forward
– Debugging the patch to trigger one sound at a time

– Finishing up the cat

Assignment 5: Pixels

The problem I faced was getting the videos to play. I tried 'importvideo' and 'read' to load the videos, but they gave me errors. I decided to use the movie file source, and finally the videos I recorded appeared. I had fun creating my own assets and also moving around to see different videos of myself playing.

Through this I learned about pixels and how to detect different greyscale values to trigger the videos.
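
To illustrate the idea outside of Max (the band edges and clip names below are made up, not from my actual patch), mapping a sampled greyscale value to a video could look like this in Python:

```python
def pick_video(grey: float) -> str:
    """Map an average greyscale value (0-255) to one of the recorded clips."""
    if grey < 85:
        return "video_dark.mov"    # hypothetical clip names
    elif grey < 170:
        return "video_mid.mov"
    return "video_bright.mov"

for sample in [30, 120, 240]:  # example sampled values
    print(sample, "->", pick_video(sample))
```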

Here is the link to the assignment:

 

Assignment 4: Face Recognition with alphablending (I am Brad Pitt)

I attempted to blend the other face onto mine, but it gave me a black screen at first. After playing around with the patch, I got it to blend, but it followed the screen size rather than my face size.

I tried blending the other face together with mine, and it blended quite nicely.

There was a lot of trial and error, as the faces did not blend nicely at first. Through this exercise I learned about the 's' (send) and 'r' (receive) objects: I could use 's' to send a message and 'r' to receive it elsewhere in the patch, instead of having multiple connections leading to confusion.

I learned the alphablend function and found it very useful for compositing two or more elements into a single image. Playing around with the @planemap numbers changed how the other face was mapped, and the different @mode values gave the video different effects too. I had fun playing around with the assignment.
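
For anyone reading along outside of Max, a rough Python/OpenCV equivalent of this kind of alpha blend (not the jit.alphablend object itself; the file names are placeholders) might look like:

```python
import cv2

frame = cv2.imread("my_face_frame.png")  # placeholder webcam frame
face = cv2.imread("brad_pitt.png")       # placeholder overlay face

# Resize the overlay to the full frame -- this reproduces the mismatch I hit,
# where the blend followed the screen size rather than my face size.
face = cv2.resize(face, (frame.shape[1], frame.shape[0]))

blended = cv2.addWeighted(frame, 0.6, face, 0.4, 0)  # 60/40 weighted blend
cv2.imwrite("blended.png", blended)
```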

Here’s the link to the assignment:

[Interactive II] Milestone – 11 Apr

“Cattitude”
by Putri Dina and Hannah Kwah

Milestones
Project Proposal  |  28th Mar  |  4th Apr  |  11th Apr  |  12th Apr  |  15th Apr  |  16th Apr  |  17th Apr  |  Final Documentation

Our group was still struggling to make the patch play only one sound when a person touches two Touch sensors at the same time. Meanwhile, we decided to start building our cat so that we could combine the current technology with the physical build.
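
The rule we want can be stated simply: when several sensors read above a threshold at once, only the strongest one triggers its sound. A minimal Python sketch of that rule (the real version has to be built in Max; the threshold and file names are illustrative):

```python
def choose_sound(readings: dict, threshold: int = 40):
    """Pick one sound when several Touch sensors fire at the same time."""
    active = {part: v for part, v in readings.items() if v > threshold}
    if not active:
        return None
    strongest = max(active, key=active.get)
    return strongest + ".wav"  # hypothetical one-sound-file-per-body-part

print(choose_sound({"chin": 80, "head": 55, "back": 10}))  # -> chin.wav
```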

Mock ups
We started off by making a small prototype to test out. Here are some of the cat's parts.

We then pieced the parts together to get the rough structure of the cat.

After trying out the paper structure, we moved on to the finalised structure.
Building the cat – part 1
We decided to build the cat out of cardboard as it is sturdy enough to withstand the pressure from people interacting with it. As there were many parts of the cat we wanted to place the Touch sensors on, we also needed to consider the cat's structure. Cardboard is not a flexible material, so we struggled to bend it into shape and glue the pieces together.



Building the cat – part 2
We had to combine the different parts of the cat to form the structure. It was not an easy process, as we had quite a number of hiccups along the way.


 

[Interactive II] Milestone – 28 Mar

“Cattitude”
by Putri Dina and Hannah Kwah

Milestones
Project Proposal  |  28th Mar  |  4th Apr  |  11th Apr  |  12th Apr  |  15th Apr  |  16th Apr  |  17th Apr  |  Final Documentation

Particle Videos
Neutral State  |  Human Detected State  |  Chin  |  Head  |  Back  |  Tailbone  |  Stomach

 

Resources
EditorX: http://infusionsystems.com/catalog/product_info.php/products_id/403

Overview

Select the USB port

Switch to the sensor input that we are using. When the Touch sensor is pressed, a graph is shown in the form of a wave to show the touch response. Multiple sensors can be used at the same time (up to 8 sensors).

Click the edit button to manage a specific input

After clicking the edit button, a drop-down appears, where we can optionally edit the values.

Touch v1.5: http://infusionsystems.com/catalog/product_info.php/products_id/135

 

Max Patch

Experimenting with sending the values from EditorX into MAX.
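
For reference, the same values can also be read outside of Max, assuming the I-CubeX digitizer appears as a MIDI input that sends control-change messages (the port name below is a placeholder for whatever EditorX lists):

```python
import mido

with mido.open_input("I-CubeX") as port:  # placeholder port name
    for msg in port:
        if msg.type == "control_change":
            # msg.control identifies the sensor input, msg.value its reading
            print("sensor", msg.control, ":", msg.value)
```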

 

Feedback from LPD

  • Make the interaction richer
  • What happens when 2 sensors are touched at the same time
  • Currently, the project is at entry level; make it more complex (see the sketch after this list)
    E.g.: when the user strokes both cheeks, the cat purrs. However, after a long period of time, it gets annoyed and responds differently.
  • The video can be more dynamic since it is currently focusing on levels of emotions. Add elements to surprise the user.
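
The cheek-stroking example can be sketched as a tiny state machine; this is only an illustration of the suggested behaviour, with made-up timings:

```python
import time

ANNOYED_AFTER = 5.0  # hypothetical seconds of continuous stroking

class Cat:
    def __init__(self):
        self.stroke_started = None

    def update(self, both_cheeks_touched: bool) -> str:
        if not both_cheeks_touched:
            self.stroke_started = None   # stroke ended, calm down
            return "neutral"
        if self.stroke_started is None:
            self.stroke_started = time.monotonic()
        if time.monotonic() - self.stroke_started > ANNOYED_AFTER:
            return "annoyed"             # e.g. hiss instead of purring
        return "purring"
```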

 

Moving forward
– Finalise the concept – what will happen when multiple parts are triggered
– Get the patches to work with multiple Touch sensors
– Get values for each sensor
– Prototype & buy materials

Assignment 3: Selfie

Video documentation:

 

Learning Outcomes:
I learned to use playlist~ to play back the recorded sounds, and I used my previous knowledge of face detection to determine where the photo would be captured. I had fun trying to take a selfie based on face detection.

Difficulties faced:
I had many difficulties detecting the photo area, as it changed every time I moved or my distance from the camera changed. The other problem was playing the sounds based on the face being detected in a specific area, and triggering a countdown sound to indicate that a picture was about to be taken.
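
The core of the logic can be sketched like this, swapping Max's face detection for OpenCV's stock Haar cascade (the target-area coordinates are made up):

```python
import cv2

TARGET = (200, 100, 240, 240)  # hypothetical x, y, w, h of the photo area
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_in_target(grey_frame) -> bool:
    """True when a detected face's centre sits inside the photo area."""
    for (x, y, w, h) in cascade.detectMultiScale(grey_frame, 1.3, 5):
        tx, ty, tw, th = TARGET
        cx, cy = x + w // 2, y + h // 2
        if tx <= cx <= tx + tw and ty <= cy <= ty + th:
            return True  # start the countdown sound here
    return False
```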

 

Tryout – asking my brother to try out my selfie assignment and his reaction to it:

 

Further/future development:
I hope to make the face detection and photo area smoother so that the user will not have a hard time finding the specific spot. The countdown sounds also play back too quickly when the user is not in the specific area; I would like to improve that too, so that the user does not feel irritated when multiple sounds play at once. I also hope to add a playback cue when the person is too close to or too far from the screen.

Assignment 2: Eye Detection


Objective: when your eyes move left and right, the playback video follows the eye direction.

Sensing: Computer camera, detecting the movement of the eyes.
Effecting: Tracking and Video playback
Computing: MAX

Assignment 2 was quite similar to Assignment 1, Mirror, except that I had to figure out the movement of the eyes. The first step was to video myself turning from left to right. It was quite awkward videoing myself doing it, but it was fun! The next step was to plan out the behaviour: when eye movement is detected, the video plays back accordingly.

I worked on the frame-by-frame video playback first. I learned about jit.movie for the first time and discovered that it has a frame-by-frame mode, which was exactly what I needed. I struggled with it, as I did not understand what 'face_true $1 bang' did or how 'scale' worked in this context. After getting the hang of it, I could get the video to show the playback.

The other part was getting the face detection to work with the video. I had problems with the playback, as the video would only move to the left. When I managed to make it move to the right, it would no longer move to the left. I decided to record the video again and check the values coming from the face detection. I realised it was producing negative numbers; after fixing the numbers from the face detection and jit.iter, the playback moved both right and left, but it was not smooth. I changed the numbers on 'scale' and the movement became much smoother. I am satisfied with the work and will look at adding sounds when moving left and right.
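
What 'scale' was doing for me can be sketched as a clamped linear mapping from the detected horizontal position to a frame index (the ranges here are illustrative, not the values from my patch):

```python
def position_to_frame(x, x_min=0.0, x_max=320.0, frames=60):
    """Map a horizontal eye/face position to a playback frame index."""
    x = min(max(x, x_min), x_max)        # clamp stray negative readings
    t = (x - x_min) / (x_max - x_min)    # normalise to 0..1
    return round(t * (frames - 1))       # pick the matching frame

for x in [-20, 0, 160, 320]:  # example positions, including a negative one
    print(x, "->", position_to_frame(x))
```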

Documentation:

Assignment 1: Mirror

Sensing: Computer camera and the face size to detect the distance of the object or person.
Effecting: Reflect, Opacity and Transition of the video/image.
Computing: MAX

Assignment 1 was quite a challenge since it was my first time using MAX. It was a fun assignment for learning MAX through plenty of trial and error. I had problems getting the box to appear on the face, finding out how to invert the image, making the video brighter (it was quite dark previously) and getting the video to show colours instead of greyscale.

Process

Process of working out the opacity and what needed to be done.

Through Assignment 1, I learned about face detection, changing opacity and transitions. The remaining problem is that the transition is not very smooth, but I am satisfied with it.
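
One simple way to smooth it, which I may try, is to ease the opacity towards its target a little each frame instead of jumping straight to it (the smoothing factor here is just an example):

```python
def smooth(current, target, factor=0.1):
    """Move part of the way from the current opacity to the target."""
    return current + (target - current) * factor

opacity = 0.0
for target in [1.0, 1.0, 1.0, 0.2, 0.2]:  # example per-frame targets
    opacity = smooth(opacity, target)
    print(round(opacity, 3))
```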

Documentation: