Sketch: LED Room

Final Video Presentation:

I created the video story with Ideas 2, 3, 5 and 6.

Idea 2 – used in two instances: the first scene, when I picked up my phone, and the second scene, when I was hiding my phone from my brother. (Privacy)

Idea 3 – used in the last scene, when my mum keeps nagging at me to eat and I just can't watch my show in peace. (Warning sign to others)

Idea 5 – used when I was going to start watching my Netflix show. (Ambient Lighting)

Idea 6 – used when I was replying to a text message. (Signal busyness?)

The main goal I set for myself when prototyping with ZIGSIM and OSC communication was to get all the various feedback seamlessly, in one continuous run.
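For context, everything starts with a small Processing sketch listening for the OSC messages ZIGSIM sends out. This is only a minimal sketch of that first step, assuming ZIGSIM is pointed at this computer on port 50000 (the port, and the address patterns it prints, are whatever you configure in the app):

```
// Minimal listener sketch using the oscP5 library.
// Assumes ZIGSIM is set to send OSC to this machine on port 50000.
import oscP5.*;
import netP5.*;

OscP5 oscP5;

void setup() {
  size(200, 200);
  // Listen on the port ZIGSIM is transmitting to
  oscP5 = new OscP5(this, 50000);
}

void draw() {
  background(0);
}

// Every incoming OSC message lands here; printing the address pattern
// is the quickest way to see exactly what the phone is sending.
void oscEvent(OscMessage msg) {
  println(msg.addrPattern() + "  " + msg.typetag());
}
```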

The first problem I encountered was that if I wanted to use two different sensors with Wekinator, I would not get the full set of input readings. More is explained in this video: https://youtu.be/8VaB7EYs04k
The solution, after consultation, was to transmit the data to more than one port, so that each Wekinator project feeds from just one sensor and the readings don't get mixed up.
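Here is a rough sketch of what that routing looks like in Processing, assuming two Wekinator projects are running locally, one listening on the default port 6448 and the other changed to 6449. The ZIGSIM address patterns below are assumptions too; substitute whatever println(msg.addrPattern()) shows for your device:

```
// "One port per sensor" routing sketch (oscP5).
// Assumes: ZIGSIM sends to port 50000, Wekinator project A listens on 6448,
// Wekinator project B listens on 6449, both expecting /wek/inputs messages.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress wekQuaternion;   // Wekinator project trained on rotation
NetAddress wekAcceleration; // Wekinator project trained on acceleration

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 50000);                      // port ZIGSIM sends to
  wekQuaternion   = new NetAddress("127.0.0.1", 6448);
  wekAcceleration = new NetAddress("127.0.0.1", 6449);
}

void draw() {
  background(0);
}

void oscEvent(OscMessage msg) {
  String addr = msg.addrPattern();

  // Quaternion (4 floats) goes only to the first Wekinator project
  if (addr.endsWith("/quaternion")) {
    OscMessage out = new OscMessage("/wek/inputs");
    for (int i = 0; i < 4; i++) out.add(msg.get(i).floatValue());
    oscP5.send(out, wekQuaternion);
  }

  // Acceleration (3 floats) goes only to the second one
  if (addr.endsWith("/accel")) {
    OscMessage out = new OscMessage("/wek/inputs");
    for (int i = 0; i < 3; i++) out.add(msg.get(i).floatValue());
    oscP5.send(out, wekAcceleration);
  }
}
```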

The second problem was that, in combining the various ideas, I had to take note that some of my sensors don't work well together. This was a big problem.

If you take a look at my FIRST DRAFT for my storyline, there were so many sensors that couldn't be applied to the purpose I wanted them for. The first example would be using the compass to cue the ambient lighting. When I first wanted to use the compass, it was very specific, from 90 degrees to 180 degrees, and in my head it should have worked every time. BUT unaware (/dumb) me forgot that the Earth's magnetic north is constantly shifting, so there were many times I found myself coming back to code that was once working, only to find it no longer working. So I initially wanted to use acceleration instead, but this might overlap with picking up my phone, which could be read as a gesture if I was too aggressive with rotating it or if I didn't have a reliable set of examples for accurate machine learning. I ended up using the quaternion for rotation sensing.

And in order for it not to interfere with throwing the phone (which also uses the quaternion), my solution was to combine sensor readings in Processing itself instead of in Wekinator. So I also had to incorporate 3D Touch to trigger the flashes, while the hue of red is taken from the x coordinate of the quaternion.
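Roughly, that combination in Processing looks like the sketch below: a hard press triggers the flash, and the quaternion's x value decides the tint of red. The port, the touch address and which value holds the force reading are all assumptions for illustration:

```
// Combining 3D Touch force and quaternion x in Processing (oscP5),
// instead of sending both to Wekinator.
// Assumes ZIGSIM sends to port 50000; the /touch0 address and the
// position of the force value in that message are guesses for this example.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
float quatX = 0;        // latest x component of the quaternion
float touchForce = 0;   // latest 3D Touch force reading
boolean flashing = false;

void setup() {
  size(640, 360);
  oscP5 = new OscP5(this, 50000);   // port ZIGSIM sends to
}

void oscEvent(OscMessage msg) {
  String addr = msg.addrPattern();
  if (addr.endsWith("/quaternion")) {
    quatX = msg.get(0).floatValue();        // assuming x is the first value
  }
  if (addr.endsWith("/touch0")) {           // assumed 3D Touch address
    touchForce = msg.get(2).floatValue();   // assuming force is the third value
  }
}

void draw() {
  // Only flash while the phone is being pressed hard
  flashing = touchForce > 0.5;

  if (flashing) {
    // Map quaternion x (-1..1) to a tint of red for the projected flash
    float redTint = map(quatX, -1, 1, 60, 255);
    background(redTint, 0, 0);
  } else {
    background(0);
  }
}
```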

In the end, I chose to present my main prototype as a projection because with the LED strip I couldn't achieve certain lighting, such as black lighting (for privacy), and the various tints of red were not obvious when there were many pixels going off at one time.

All in all, there was A LOT of trial and error, but I genuinely enjoyed learning ZIGSIM and Wekinator. No doubt there are some ideas that I still have a hard time executing, such as Idea 1, but I still see so much potential in this application. Very intriguing to know that my phone could essentially be my modern-day mood ring!