[Interactive 2] Milestones – 15 Apr

by Putri Dina and Hannah Kwah


Main Objectives
– Connect all 8 Touch sensors
– Duplicate playlist for each of them
– Implement an audio distortion when Touch 5 is combined with all other Touches
– Get a visual to react to sound

Max Patch
First, we had to assign a unique value to each Touch sensor, each Touch combo, and each Touch + Distort combo. We made the table below to make our lives easier.

Sensor No.        Location            Playlist No.   Unique Value/Integer
Touch 1           Left Cheek          Playlist 1     1
Touch 2           Right Cheek         Playlist 2     2
Touch 1 + 2       Both Cheeks         Playlist 3     3
Touch 3           Head                Playlist 4     4
Touch 4           Back                Playlist 5     5
Touch 3 + 4       Head + Back         Playlist 6     9
Touch 5           Tailbone            Playlist 7     10
Touch 6           Stomach             Playlist 8     20
Touch 5 + Others  Tailbone + Others   Distortion     11, 12, 14, 15, 30, 26, 27
Touch 7           Foot                Playlist 9     16
Touch 8           Tail                Playlist 10    17
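The scheme behind the table is that each sensor's base integer was chosen so the sum of any valid combination is itself unique (e.g. Touch 1 + Touch 2 = 1 + 2 = 3, and every Touch 5 combo is 10 plus the other sensor's value). A minimal Python sketch of that mapping (the function and dictionary names are ours, not part of the Max patch):

```python
# Base integer per sensor, taken from the table above.
BASE_VALUE = {
    1: 1,   # Left Cheek
    2: 2,   # Right Cheek
    3: 4,   # Head
    4: 5,   # Back
    5: 10,  # Tailbone
    6: 20,  # Stomach
    7: 16,  # Foot
    8: 17,  # Tail
}

def combo_value(touched):
    """Sum the base values of all currently touched sensors.

    Because the base values were picked carefully, every combination
    in the table produces a distinct integer.
    """
    return sum(BASE_VALUE[s] for s in touched)
```

For example, `combo_value({3, 4})` gives 9 (the Head + Back playlist) and `combo_value({5, 6})` gives 30 (one of the distortion triggers).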

At this point, each ‘playlist’ had its own sfplay~ and speaker output, so the sounds would overlap if two sensors were touched simultaneously. Another problem we encountered was that it was impossible to touch two sensors at exactly the same moment to trigger a combo: our fingers would always land on one of the two sensors first. As a result, Value 3 and either Value 1 or Value 2 would play (and overlap) at the same time.
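One common way to work around the second problem (not implemented in our patch at this stage; a hypothetical sketch with names of our own choosing) is to wait a short grace window after the first touch before committing to a value, so a second finger landing slightly later still counts as a combo:

```python
import time

def read_combo(read_sensors, grace=0.15):
    """Debounce combo detection with a short grace window.

    read_sensors() is assumed to return the set of sensor IDs
    currently being touched. After the first touch is seen, we
    wait `grace` seconds and re-read, so a near-simultaneous
    second touch is merged into one combo instead of firing the
    single-sensor value first.
    """
    first = read_sensors()
    if not first:
        return frozenset()          # nothing touched
    time.sleep(grace)               # give the second finger time to land
    return frozenset(first | read_sensors())
```

With this approach, touching the left cheek and then the right cheek a few milliseconds later would report both sensors together, instead of Value 1 followed by Value 3.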

Audio Distortion

Putting aside the two problems listed before, we decided to proceed further and return to them at a later time.

The input (2) came from any of the audio triggered in the main patch. It was connected to tapin~ with a buffer size of 20 seconds to create a delay line. We did not specify a time for tapout~ (effectively setting it to zero), which means there would be no delay at all by default.

A multiply object was created and set to zero, as we did not want any sound to pass through by default. Two line~ objects were placed to control two things: one controlled the feedback amount, while the other controlled the delay time (within the 20-second buffer).

Lastly, the messages. The first message ramps the feedback to 1 in 1 ms, holds it at 1 for 5 seconds, then returns it to 0 in 1 ms. The second message ramps the delay time to 250 ms in 1 ms, holds it there for 5 seconds, then returns it to 0 in 1 ms.
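The behaviour of those line~ envelopes and the feedback delay can be sketched outside Max. This is a plain Python stand-in (assuming a 1 kHz control rate and integer-sample delays; the function names are ours), not the actual patch:

```python
def line_env(points, sr=1000):
    """Piecewise-linear envelope, like Max's line~.

    points is a list of (target, ramp_ms) pairs; each segment ramps
    linearly from the current value to `target` over `ramp_ms`.
    """
    out, cur = [], 0.0
    for target, ramp_ms in points:
        n = max(1, int(ramp_ms * sr / 1000))
        step = (target - cur) / n
        for _ in range(n):
            cur += step
            out.append(cur)
    return out

def feedback_delay(signal, delay_samples, feedback):
    """Delay line with feedback: y[n] = x[n] + feedback * y[n - d],
    the basic structure of the tapin~/tapout~/*~ loop."""
    y = [0.0] * len(signal)
    for n, x in enumerate(signal):
        d = n - delay_samples
        y[n] = x + (feedback * y[d] if d >= 0 else 0.0)
    return y

# First message: feedback -> 1 in 1 ms, hold 5 s, back to 0 in 1 ms.
feedback_env = line_env([(1, 1), (1, 5000), (0, 1)])
```

The same `line_env` call with targets 250 instead of 1 would model the delay-time message.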

Visual Reaction
The final part of the patch was a visual response. We referenced Naoto Fushimi’s Reactive Music patch and tweaked it to fit our project’s concept. His patch was very complex, but we studied it and used whatever was necessary. Our goal was to create a sphere representing the cat’s core. Depending on how aggressively the audio was played, the sphere would enlarge, symbolizing the cat’s level of temperament when triggered.

We noticed that the visual would not appear unless audio was playing, so we created a neutral sound wave and muted it. Our inputs came from the audio output received from the main patch shown above.

jit.noise was implemented to create the grain effect on our visual. It was blended with jit.gl.gridshape (we used a sphere) and jit.gl.mesh (a geometric surface with spatial coordinates).
