What should I drink (Final Project Idea Proposal) – Valerie & Chloe

“Story” based interactive project

To let people into our thought process when deciding to get a specific drink.

Source of Idea

I think the idea came about when we realized that whenever we attend a class, have a meal, or do work, we always get a drink. And we realized that our drink choices really reflect us, and also how we feel at that point in time.


We would want to make it look minimalistic, and the different kinds of drinks would probably be in the form of cards, something like a menu!

Programming in Max

We will use jit.grab (webcam input) together with colour tracking, so the drinks can be differentiated by their colours.

Once a drink is placed in front of the webcam, the tracker will sense its colour, and each colour will act as a trigger.
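The actual colour tracking will live in the Max patch, but the "colour as trigger" idea can be sketched in plain Python: sample a pixel colour, find the nearest known drink colour, and only fire a trigger if it's close enough. The drink names and RGB values below are made-up placeholders, not values from our patch.

```python
# Sketch of the colour-as-trigger idea (the real version lives in a Max
# patch using jit.grab; drink colours here are made-up placeholders).

# Reference colours for each drink card, as (R, G, B) tuples.
DRINK_COLOURS = {
    "coffee": (101, 67, 33),
    "green tea": (120, 190, 80),
    "strawberry soda": (230, 70, 110),
}

def closest_drink(pixel, threshold=60):
    """Return the drink whose reference colour is nearest to the sampled
    pixel, or None if nothing is close enough to count as a trigger."""
    best, best_dist = None, float("inf")
    for drink, (r, g, b) in DRINK_COLOURS.items():
        dist = ((pixel[0] - r) ** 2 + (pixel[1] - g) ** 2
                + (pixel[2] - b) ** 2) ** 0.5
        if dist < best_dist:
            best, best_dist = drink, dist
    return best if best_dist <= threshold else None

print(closest_drink((105, 70, 40)))    # near the coffee colour: "coffee"
print(closest_drink((255, 255, 255)))  # white background: None, no trigger
```

The threshold is what keeps the background or random objects from triggering a drink by accident, which is the same tuning problem the colour tracker in Max would have.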

We feel that you get different drinks based on your mood: whether you feel thirsty, want something sweeter, or need a coffee to wake you up! So each drink, acting as a trigger, will place the viewer into the mood behind that choice.

  • Projection of colours picked onto a glass (filled with water?)
  • Sensor: “Double tap” to like.
  • Swipe right/left – Sugar level
  • Use of sound: e.g. music, dialogue.
  • Colour filters or image filters. This will be further explored.


For EM2 (Brad Pitt), I decided to use Vanessa Hudgens (HSM!!!) instead hahaha.

I think the first 2 steps were pretty okay. Step 1 was mostly about reading the image, and replacing the image according to the different dimensions of the face tracker, and step 2 was face tracking and inverting of the video.


(Didn’t change the send/receive patcher names from brad to vanessa because I don’t think it would affect anything!)

Changed the jit.fastblur range to make the image clearer, because the lower values felt too blurry to blend into my face(?).

I think the funniest part of HSM that I remember was when Vanessa Hudgens stood out from the entire crowd and just shouted “TROYYYY”, so I decided to add a trigger to play a sound (adapted from the mirror patcher), using brightness.

I set a condition that if the brightness value went below 0.5, the audio would play. It took me a while to get it, and I didn’t know that just typing “<0.5” in an object would create the condition. However, the problem with this is that the value jumps around 0.5 a lot, and the audio restarts every time the value changes, instead of playing through as a whole once the value is <0.5. Still have yet to figure this out. :(
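One common fix for this restart problem is to react only to the moment the value first crosses below the threshold, not to every frame that happens to be below it. In Max this is roughly what putting a change object after the comparison gives you (an assumption about how I'd patch it, not what's in the patch now). Here's a rough Python sketch of that edge-trigger idea:

```python
# Sketch of an edge-triggered playback condition: fire "play" only when
# the brightness first crosses below the threshold, not on every frame
# that happens to be below it.

def make_trigger(threshold=0.5):
    state = {"below": False}  # remember whether we were already below

    def on_brightness(value):
        """Return True exactly once per downward crossing."""
        was_below = state["below"]
        state["below"] = value < threshold
        return state["below"] and not was_below

    return on_brightness

trigger = make_trigger()
frames = [0.8, 0.6, 0.4, 0.3, 0.45, 0.7, 0.2]
print([trigger(v) for v in frames])
# [False, False, True, False, False, False, True]
# only the first dip below 0.5, and the dip after recovering, fire
```

With this, the jittery brightness values below 0.5 no longer matter: the audio is told to play once per crossing, and plays through whole.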

(Will not be adding in the actual audio because you can’t really hear it but you can see the play/stop button triggered every time the value falls below 0.5.)



Issues faced:
Initially, I couldn’t get each different “button” to play a different tune. Whenever I hovered over an area, it would just play the same single track.

So I decided to open the pPix patcher and realized that I could change the outlet square to r Step(number), which then connects to the various tracks.
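The fix is basically a routing problem: each hover region sends its own number, and each number has to reach its own track (which is what the separate r Step receives do). A tiny Python sketch of that mapping, with made-up track names:

```python
# Sketch of routing each hover region to its own track (in the patch,
# the separate r Step receives do this job; track names are made up).

TRACKS = {1: "piano.wav", 2: "guitar.wav", 3: "drums.wav"}

def on_step(step_number):
    """Pick the track wired to this region's receive."""
    return TRACKS.get(step_number)  # unknown regions play nothing

print(on_step(2))  # "guitar.wav"
print(on_step(9))  # None: no track wired to this region
```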

Another issue is that the tracks play only once; they don’t go on repeat until I remove the toggle. :-( Also, the areas are quite sensitive, so other tracks may be triggered unintentionally.
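The looping behaviour I want could be described as: when a track finishes, check the toggle, and restart instead of stopping while it's still on. (In Max, one way might be to gate the player's end-of-file bang back into its own play message through the toggle; that's an assumption about a possible patch, not what's there now.) A small Python sketch of that loop-while-toggled logic:

```python
# Sketch of "loop while the toggle is on": each time the track finishes,
# check the toggle and restart instead of stopping.

def play_session(track, toggle_states):
    """Simulate playback of `track`: each entry in toggle_states is the
    toggle's value at the moment the track finishes; keep restarting
    while it stays on."""
    plays = 0
    for toggle_on in toggle_states:
        plays += 1          # the track plays through once
        if not toggle_on:   # toggle removed: stop looping
            break
    return plays

print(play_session("troy.wav", [True, True, False]))  # plays 3 times
```

The track name is just a placeholder. The same gate idea would also help with the over-sensitive areas: requiring the toggle (or a short hold) before a region counts would stop accidental triggers.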