IDev | LED Room Sketch

Brief
To alter and interact with the lighting of a physical room using gestures on a smartphone.

Ideation

The ideation discussion was initiated during an online class session on Microsoft Teams.
Concept directions were worked on as a pair and presented by Jasmine and Yi Dan.

I selected Idea 5 to be developed using Processing and ZIGSIM.

Of all the ideas proposed, I felt Idea 5 had an antagonistic, taunting quality to it, which I found interesting compared to the subtler, performs-as-intended ideas. However, as I explored and developed my project, I decided to incorporate both antagonising and soothing after-effects just to test the possibilities.

Modes of Interaction
Typing on the smartphone screen rapidly flashes different colours of lighting.
Rotating the phone into landscape orientation activates a gradual change in lighting hues.
Swiping up on the screen, mimicking the closing of apps before bedtime, dims the light.
Swiping left on the screen, mimicking rapid anxiety, displays cooling hues.
Circular gestures on the screen signal playfulness, displaying vivid colours.

Objectives of Interaction
To discourage texting/digital conversations by coordinating typing on smartphones with rapid flashing of different coloured lights in a room.

To discourage passive watching of shows by changing the colour of room lighting to a bright and overbearing colour.

To encourage and support users’ room activities based on muscle-memory gestures performed on the device.

Code Prototype

Processing Output

Initial phase of the Sketch completed by 6th September 2020.
(Animated GIF of the initial sketch output)

Processing to Arduino Input/Output

Attempted and updated on 10 September 2020, using the base code provided.
Imported the Serial library into the existing code.

Initially I had trouble connecting Processing to Arduino and couldn’t figure out why the port was not connecting. I realised that my computer had picked up a few other available serial ports and was connected to the wrong one. Counting through the port list and revising the index in the code fixed it.
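In rough terms, the port check came down to something like the sketch below; the index used here is only a placeholder, and the correct one comes from counting through the list that Serial.list() prints.

```
import processing.serial.*;

Serial arduinoPort;

void setup() {
  size(400, 400);
  // Print every serial port the computer has picked up,
  // so the correct one can be counted off the list.
  printArray(Serial.list());
  // Open the port at the counted index (placeholder; adjust to match the printed list).
  arduinoPort = new Serial(this, Serial.list()[2], 9600);
}

void draw() {
}
```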

From the highlighted line onwards: consulted LP on how to set a threshold for the quaternion rotation, as my current code allowed the light intensity to continue increasing past the intended orientation. I retrieved the z-axis range by running the code, added a defined scale range, and remapped it to fit 0.0 (darkest) to 1.0 (brightest) intensity.
In the draw() function, I attempted to add messaging to the Arduino to affect the LED strip.
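A rough sketch of the remap and messaging step is below, assuming the quaternion arrives via oscP5 from ZIGSIM. The OSC port, the address check, the argument order, the observed z range, and the serial port index are all placeholders rather than the exact values used.

```
import oscP5.*;
import processing.serial.*;

OscP5 oscIn;
Serial arduinoPort;
float quatZ = 0.0;

void setup() {
  size(400, 400);
  oscIn = new OscP5(this, 50000);                           // port set in ZIGSIM (placeholder)
  arduinoPort = new Serial(this, Serial.list()[2], 9600);   // placeholder port index
}

void oscEvent(OscMessage msg) {
  // ZIGSIM sends the quaternion as four floats; the z component is assumed to sit at index 2.
  if (msg.addrPattern().indexOf("quaternion") != -1) {
    quatZ = msg.get(2).floatValue();
  }
}

void draw() {
  // Clamp z to the range observed at runtime, then remap it to
  // 0.0 (darkest) through 1.0 (brightest) intensity.
  float clamped = constrain(quatZ, -0.7, 0.7);              // placeholder range
  float intensity = map(clamped, -0.7, 0.7, 0.0, 1.0);
  background(255 * intensity);
  // Send the intensity to the Arduino as a single byte (0-255) for the LED strip.
  arduinoPort.write(int(255 * intensity));
}
```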

Processing + LED Strip Outcome (1st try-out)

Refinements in Processing to Arduino Input/Output

As seen in the demonstration video above, the x and y inputs were still being detected at the top of the mobile phone screen. As I wanted the outcome to follow my intended concept closely (only typing on the mobile keyboard should activate the interaction and result in flashing lights), I set a threshold limit on the detection range of the 2DTouch input in ZIGSIM.

(Above, Left): Code snippet of my learning and refinements to the parameters of 2DTouch on the mobile device; (Above, Right): Sketch planning and an attempt at understanding the logic between the X/Y mapping and my intended outcome.
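The threshold worked along roughly these lines, assuming a single 2DTouch point arrives via oscP5; the OSC port, the address check, and the 0.4 cutoff are placeholders, and the actual cutoff depends on how ZIGSIM normalises the y coordinate.

```
import oscP5.*;

OscP5 oscIn;
float touchY = 0.0;
boolean typing = false;

void setup() {
  size(400, 400);
  oscIn = new OscP5(this, 50000);   // port set in ZIGSIM (placeholder)
}

void oscEvent(OscMessage msg) {
  if (msg.addrPattern().indexOf("touch") != -1) {
    touchY = msg.get(1).floatValue();
    // Only register the touch as typing when it lands in the lower part
    // of the screen, roughly where the keyboard sits (placeholder cutoff).
    typing = (touchY > 0.4);
  }
}

void draw() {
  if (typing) {
    // Rapidly flash a different colour each frame while typing is detected.
    background(random(255), random(255), random(255));
  } else {
    background(0);
  }
}
```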

Additional try-outs and learning (not in final outcome)

Wekinator to Processing: 2DTouch, Dynamic Time Warp
I attempted to incorporate Wekinator to communicate with Processing and the Arduino LED strip. After spending a few days reading the documentation on the official Wekinator website, YouTube, and forums, I was still unable to retrieve a match reading from Wekinator in Processing (Processing to Wekinator was detected, but no match result was received to perform an action).

Attempted the sample code for ‘Continuous Color Control’, but was unable to achieve colour control.

(Above): Tried to study and understand the Wekinator sample code for Simple Color Control. Inputs were received seamlessly, but outputs could not be retrieved to change the colour of the test window.

Attempted to study and understand ‘Mouse X, Y Position for Training Dynamic Time Warping’, but still could not decipher where the connection was lost.

I compiled sample code from multiple Processing examples for Wekinator and attempted to run a sketch connecting ZIGSIM to Wekinator with the 2DTouch input. The 2DTouch was active, and Wekinator received and validated gesture matches, but it was somehow unable to send data back to Processing to perform any task.

(Above): Tried compiling Wekinator sample code into my completed code for Processing/Arduino: inputs are received but outputs are not. The test window was supposed to change colour when a 2DTouch movement was detected and matched.
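The forwarding side looked roughly like the sketch below, using Wekinator's default input port (6448) and input message (/wek/inputs); the local listening port and the touch values here are placeholders.

```
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress wekinator;
float touchX = 0.5;   // placeholder input values (normally updated from ZIGSIM)
float touchY = 0.5;

void setup() {
  size(400, 400);
  oscP5 = new OscP5(this, 9000);                    // local listening port (placeholder)
  wekinator = new NetAddress("127.0.0.1", 6448);    // Wekinator's default input port
}

void draw() {
  // Forward the two 2DTouch values to Wekinator as its two inputs.
  OscMessage msg = new OscMessage("/wek/inputs");
  msg.add(touchX);
  msg.add(touchY);
  oscP5.send(msg, wekinator);
}
```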

After a consultation, I managed to establish the connection between Wekinator and Processing. I then proceeded to add a descriptor to each recognised gesture to affect the LED strip.

(Above): Success! Gestures matched on Wekinator are finally acknowledged and deciphered in Processing.

(Above): Sample of Wekinator programme detecting a match in gesture 3 (circles)
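On the receiving side, a minimal sketch along these lines listens on Wekinator's default output port and maps the matched gesture to a lighting response. The /wek/outputs address is Wekinator's default for classifier outputs; the Dynamic Time Warping variant may send a differently named message, so the OSC settings in Wekinator need checking. The gesture-to-colour mapping here is only a placeholder.

```
import oscP5.*;

OscP5 oscIn;
int gesture = 0;   // most recently matched gesture class

void setup() {
  size(400, 400);
  oscIn = new OscP5(this, 12000);   // Wekinator's default output port
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/wek/outputs")) {
    gesture = int(msg.get(0).floatValue());
  }
}

void draw() {
  // Placeholder mapping from the matched gesture to a lighting response.
  if (gesture == 1) {
    background(random(255), random(255), random(255));   // typing: rapid flashing
  } else if (gesture == 2) {
    background(60, 120, 200);                             // swipe: cooling hue
  } else if (gesture == 3) {
    background(random(255), 0, random(255));              // circles: vivid colours
  } else {
    background(0);
  }
}
```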

ZIGSIM to Processing: 3DTouch
Instead, I tried to find an alternative and explored adding another gesture via ZIGSIM to increase the interaction between gestures and lighting. I used 3DTouch with the idea of needing three fingers to apply long-press pressure on the phone screen to stop the lights from annoying the user. With the fingers on the screen to stop the light, the user’s mobility and phone screen are also rendered useless.

However, the 3DTouch function seemed to conflict with the final draw() output, so I decided to remove it from the final outcome. This was in any case a trial-and-error experiment to better understand the relationship between the software and the data input/output.

(Left): Added and defined 3DTouch parameters and functions; (Right): Three blocks of code representing my three revised attempts to programme an output that detects a three-finger pressure of 2.0 to turn off all the LED lights and turn the Processing background black. The outcome was still not achieved.
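For reference, the intended check might look roughly like the sketch below. The OSC port, the address names, and the way multiple touches are counted are all assumptions about ZIGSIM's message format; only the 2.0 force value comes from the attempts above.

```
import oscP5.*;

OscP5 oscIn;
int touchCount = 0;
float force = 0.0;

void setup() {
  size(400, 400);
  oscIn = new OscP5(this, 50000);   // port set in ZIGSIM (placeholder)
}

void oscEvent(OscMessage msg) {
  // Address patterns and argument layout are placeholders: the exact names
  // depend on how ZIGSIM labels its multi-touch and 3DTouch force messages.
  if (msg.addrPattern().indexOf("touch") != -1) {
    touchCount = msg.arguments().length / 2;   // assumes x/y pairs per touch
  }
  if (msg.addrPattern().indexOf("force") != -1) {
    force = msg.get(0).floatValue();
  }
}

void draw() {
  // Three fingers pressing at force 2.0 or more should kill the lights.
  if (touchCount >= 3 && force >= 2.0) {
    background(0);
  } else {
    background(random(255), random(255), random(255));
  }
}
```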

Final Outcome
An interactive relationship that either antagonises mobile users in a room into abandoning their devices or soothes them through muscle-memory gestures on devices.

Final Thoughts/Takeaways
I believe the process and learning journey is just as important as the outcome, if not more so. I am fully satisfied that I was able to produce an outcome that is true to my concept sketch, but I am even more thrilled that I managed to learn Processing just a little better than when I first started out.

I took on this Sketch as an individual submission, fully aware of my capabilities in code and programming, because I wanted to depend completely on myself to learn and solve issues rather than only do a delegated portion of the work in a group setting. After all, that’s the reason I’m in university and in this class: to learn something new. I would like to thank LP for the advice and patient guidance I’ve received, and I acknowledge there is always room for improvement and refinement in all project work. I’m hopeful to see what the next project has in store.
