
Final Project: Still a work in progress!

Daily Drink Dilemma is still in progress, but here’s more of what we’ve done!

We’ve churned out the images and linked them to buttons for viewers to click, using send/receive objects to pass the images around.

After that, we linked the playback of our first video (the yawn challenge) to the start of the yawning test (via FaceOSC).

We also set it to stop the video if the viewer yawns, with a delay of 10 seconds. Following this, we’ll link whichever outcome occurs to the temperature sensor.

One issue with the current FaceOSC detection: the “viewer didn’t yawn” outcome is triggered as soon as the yawn challenge video plays, instead of after the viewer has had a chance to yawn. If you notice in the video above, as soon as the video plays, an image of the cafe girl with an empty speech bubble appears (that image is set as the trigger for when the viewer doesn’t yawn).

Currently, one option would be to first bang the same image as the previous one (so it seems as if nothing was triggered), start a timer, and only bang the “did not yawn” image after the set timing has passed; this wait-then-decide logic is sketched below.
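
To make the idea concrete, here’s a minimal sketch of that logic in plain C++ (standing in for the Max patch; the threshold, polling rate, and the readMouthHeight stub are all assumptions):

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Placeholder for the per-frame mouth height from FaceOSC;
// in the real patch this arrives as an OSC message.
double readMouthHeight() { return 0.0; }

int main() {
    using clock = std::chrono::steady_clock;
    const double yawnThreshold = 5.0;                 // assumed mouth-height units
    const auto decisionWindow = std::chrono::seconds(10);

    std::cout << "bang: neutral image (same as before, so nothing seems triggered)\n";

    auto start = clock::now();
    bool yawned = false;
    while (clock::now() - start < decisionWindow) {
        if (readMouthHeight() > yawnThreshold) { yawned = true; break; }
        std::this_thread::sleep_for(std::chrono::milliseconds(33));  // ~30 fps poll
    }

    // Only decide once the window has elapsed (or a yawn was seen earlier).
    std::cout << (yawned ? "bang: yawn outcome image\n"
                         : "bang: did-not-yawn outcome image\n");
}
```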

We’ll also have to set a timer for the temperature sensor so that it collects a value within a fixed window once touched (otherwise it may sense the temperature from before or after the touch instead); something like the Arduino sketch below.
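
A rough Arduino-side sketch (C++) of that timed collection; the pins, window length, and averaging are placeholder choices, not our actual wiring:

```cpp
const int TEMP_PIN = A0;               // assumed analog temperature sensor
const int TOUCH_PIN = 2;               // assumed digital touch trigger
const unsigned long WINDOW_MS = 3000;  // only sample for 3 s after the touch

void setup() {
  pinMode(TOUCH_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(TOUCH_PIN) == HIGH) {
    unsigned long start = millis();
    long sum = 0;
    int count = 0;
    while (millis() - start < WINDOW_MS) {  // collect only inside the window
      sum += analogRead(TEMP_PIN);
      count++;
      delay(50);
    }
    Serial.println(sum / count);            // one averaged value, nothing outside the window
  }
}
```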


Final Project: Process

For our Daily Drink Dilemma, we have decided to tweak it a little to help our audience decide on what they should drink based on different sensors.

1. Yawning!

Using FaceOSC + MaxMSP, we’ll be tracking the user’s mouth; every time the person yawns (i.e. opens his/her mouth wider than a certain value), it will trigger something, and if they don’t, it triggers something else.

For now, we’ve used songs as the triggers, and programmed it so that if one plays, the other stops (roughly the logic sketched below).

(Sped up the vid & audio he he)
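
Roughly, the trigger logic looks like this (a plain C++ sketch standing in for the patch; the threshold value is an assumption):

```cpp
#include <iostream>

enum class Song { None, YawnSong, OtherSong };
Song playing = Song::None;

// Called once per FaceOSC frame with the tracked mouth height.
void onMouthHeight(double mouthHeight) {
    const double yawnThreshold = 5.0;  // assumed value; tune to the tracker
    Song wanted = (mouthHeight > yawnThreshold) ? Song::YawnSong : Song::OtherSong;
    if (wanted != playing) {           // starting one song stops the other
        playing = wanted;
        std::cout << (wanted == Song::YawnSong
                          ? "play yawn song, stop the other\n"
                          : "play other song, stop the yawn song\n");
    }
}

int main() {
    onMouthHeight(2.0);  // mouth closed: other song plays
    onMouthHeight(6.5);  // wide open (a yawn): yawn song plays, other stops
    onMouthHeight(6.8);  // still open: nothing retriggers
}
```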

2. Temperature

We also added in the temperature sensor (but we don’t have a video of it yet). For now, if the temperature goes above the baseline temp, it turns a red LED on; if it goes below the baseline, it turns a blue LED on!
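
A minimal Arduino sketch (C++) of that LED behaviour; the pin numbers and the start-up baseline reading are placeholder assumptions:

```cpp
const int TEMP_PIN = A0;   // assumed analog temperature sensor
const int RED_LED = 8;     // placeholder pin
const int BLUE_LED = 9;    // placeholder pin
int baseline = 0;

void setup() {
  pinMode(RED_LED, OUTPUT);
  pinMode(BLUE_LED, OUTPUT);
  baseline = analogRead(TEMP_PIN);   // take the baseline reading at start-up
}

void loop() {
  int temp = analogRead(TEMP_PIN);
  digitalWrite(RED_LED, temp > baseline ? HIGH : LOW);   // warmer than baseline
  digitalWrite(BLUE_LED, temp < baseline ? HIGH : LOW);  // cooler than baseline
  delay(100);
}
```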

Now we need to figure out how to connect MaxMSP to Arduino and get the triggering sequence right! We’ve also got to figure out how we’re gonna present this :-)
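
One common route (just a sketch of the idea, not our final setup): the Arduino streams its reading over USB serial, and Max reads it with the [serial] object at the matching baud rate.

```cpp
const int TEMP_PIN = A0;   // assumed sensor pin

void setup() {
  Serial.begin(9600);      // match this baud rate in Max's [serial] object
}

void loop() {
  Serial.println(analogRead(TEMP_PIN));  // one reading per line for Max to parse
  delay(100);
}
```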

Currently, an idea we have would be: a character illustration fronting the experience, with the instructions/videos placed within her speech bubble. Finally, the end product (drink) will also be presented by her!

(Not gonna be using cookin’ mama.. but some other illustration haha) Once we get the connections done, we’ll be thinking of other stuff to add on!

What should I drink (Final Project Idea Proposal) – Valerie & Chloe

“Story”-based interactive project

The aim is to let people know our thought process when deciding to get specific drinks.

Source of Idea

I think the idea came about when we realized that every time we attend a class, have a meal, or do work, we always get a drink. And our drink choices really reflect us and how we feel at that point in time.

Design

We want to make it look minimalistic, and the different kinds of drinks would probably be in the form of cards, something like a menu!

Programming on Max

Using jit.grab (webcam input) together with colour tracking, the drinks will be differentiated by their colours.

Once the drinks are placed in front of the webcam, the tracker will sense the different colours, and those colours will act as triggers.

We feel like you’d get different drinks based on your mood: whether you’re thirsty, want something sweeter, or need a coffee to wake you up! So the drinks as triggers will place the viewer into the situation of that mood (a toy version of the colour-to-drink mapping is sketched below).
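
As a toy illustration (plain C++ rather than the actual jit.grab patch; the colour rules and drink names are invented), the mapping could look like:

```cpp
#include <iostream>
#include <string>

// Classify an averaged webcam colour into a drink. The rules and
// drink names here are made up for illustration.
std::string drinkForColour(int r, int g, int b) {
    if (r > g && r > b) return "strawberry tea";  // reddish card
    if (g > r && g > b) return "green tea";       // greenish card
    if (b > r && b > g) return "blue soda";       // bluish card
    return "water";                               // no dominant colour
}

int main() {
    // Pretend these came from averaging the pixels under each card.
    std::cout << drinkForColour(200, 80, 60) << "\n";  // strawberry tea
    std::cout << drinkForColour(70, 180, 90) << "\n";  // green tea
}
```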

  • Projection of colours picked onto a glass (filled with water?)
  • Sensor: “Double tap” to like.
  • Swipe right/left – Sugar level
  • Use of sound: e.g. music, dialogue.
  • Colour filters or image filters too. This will be further explored.

EM2/IAmBradPitt

For EM2 (Brad Pitt), I decided to use Vanessa Hudgens (HSM!!!) instead hahaha.

I think the first 2 steps were pretty okay. Step 1 was mostly about reading the image and replacing it according to the different dimensions from the face tracker, and Step 2 was face tracking and inverting the video.

STEP 3…

(Didn’t change the send/receive patcher names from brad to vanessa because I don’t think it would affect anything!)

Changed the jit.fastblur range to make the image clearer, because the lower values felt too blurred to blend into my face.

I think the funniest part about HSM that I remember was when Vanessa Hudgens stood out from the entire crowd and just shouted “TROYYYYY”, so I decided to add a trigger to play a sound (adapted from the mirror patcher), using brightness.

Set a condition so that if the brightness value went below 0.5, the audio would play. It took me a while to get it; I didn’t know that just typing “< 0.5” in an object would set up the condition. However, the problem with this is that the value jumps above and below 0.5 a lot, and the audio restarts every time the value changes, instead of playing through as a whole once the value is < 0.5. Still have yet to figure this out. :(

(Will not be adding the actual audio because you can’t really hear it, but you can see the play/stop button triggered every time the value falls below 0.5.)
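
One possible fix, sketched in plain C++: only trigger at the moment the value crosses below 0.5, not on every new value (in Max terms, something like a [change] object after the comparison, so only the transitions get through):

```cpp
#include <iostream>

bool wasBelow = false;

// Called every time a new brightness value arrives.
void onBrightness(double brightness) {
    bool isBelow = brightness < 0.5;
    if (isBelow && !wasBelow) {   // fire only on the crossing itself
        std::cout << "start audio (plays through once)\n";
    }
    wasBelow = isBelow;
}

int main() {
    onBrightness(0.7);  // above 0.5: nothing
    onBrightness(0.4);  // crossed below: audio starts
    onBrightness(0.3);  // still below: no restart
    onBrightness(0.6);  // back above: resets for the next crossing
}
```

This way the audio would play through once per crossing instead of restarting on every jumpy frame.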


EM3 IAmVEJEE

Issues faced:
Initially, I couldn’t get each different “button” to play a different tune. Whenever I hovered over the area, it would just play the same one track.

So I decided to open the pPix patcher and realized that I could change the outlet square to r Step(number), which then connects to the various tracks (roughly the routing sketched at the end of this post).

Another issue is that each track plays only once; it doesn’t go on repeat until I remove the toggle. :-( Also, the areas are quite sensitive, so other tracks may be triggered unintentionally.
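
Roughly the behaviour I’m after, sketched in plain C++ (the track names are invented, and the loop-while-toggled part is what still needs to be built in the patch):

```cpp
#include <iostream>
#include <string>

// Invented track names standing in for the real files.
const std::string tracks[3] = {"track0.wav", "track1.wav", "track2.wav"};

// Called when a hover region's toggle changes state.
void onRegion(int region, bool toggleOn) {
    if (toggleOn) {
        // Each region number routes to its own track (the r Step(number)
        // receivers in the patch); looping keeps it going while toggled on.
        std::cout << "loop " << tracks[region] << "\n";
    } else {
        std::cout << "stop " << tracks[region] << "\n";
    }
}

int main() {
    onRegion(1, true);   // hovering region 1: track1 loops
    onRegion(1, false);  // toggle off: playback stops
}
```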