A Collection of Consciousness / End of Semester Project

Project Brief: A Collection of Consciousness utilises a question-and-answer format to probe one’s subconscious, psychological and emotional state alike. It then shows the participant a visual and auditory representation of their subconscious.

The setup: a total of 2 screens – one large projector screen, and a smaller computer screen that displays the questions. There is also a 3-step ‘dance pad’ for participants to select their answers.

2016-04-21 17.58.29

 

An example of a computer graphic shown on screen:

2016-04-21 17.58.42

 

The ‘dance pad’ was situated right in the centre, with a clear view of the projector screen. However, participants had to keep turning their heads sideways to read the questions.

2016-04-21 17.57.20

 

Initial idea:

2016-04-24 03.18.50

Initially, I planned to place the laptop right in front of the screen, but decided against it as it would block the visuals. I also planned to use motion sensing to trigger the selection of choices, but decided against that too: past experience has taught me that different lighting conditions affect the sensing, and that motion sensing as a whole may not be very accurate. Hence, I switched to a touch sensor to trigger the feedback.

 

Questions

I also planned to have a total of 20 questions at the start, but later scaled it down to 7 as the patch was becoming too large. I based the style of my questions on the ISFG Personality test. I also researched the different determinants of one’s consciousness, or more accurately psychological state, and came up with the following:

2016-04-24 03.18.30

Thus, I decided to base my questions on these 3 factors:
1. Mental Wellness
2. Emotional Wellness
3. Intellectual Wellness

I understand that such a limited set of questions may not yield an accurate determination of one’s psychological state, but at present it does cover all 3 factors.

Here are my initially planned questions:

  1. How are you feeling today? Optimistic (faster), no opinion (slower), disengaged (more jazzy)
  2. You do not mind being the centre of attention. No, maybe, yes (change of beat)
  3. You get tensed up and worried that you cannot fulfil expectations (1-3) (change of colour and size, and zamp)
  4. You cannot finish your work on time; what do you do? Continue working hard past the deadline, give up, blame and berate yourself (ADD ON NEW SOUNDTRACK, faster, slower, becomes white noise etc)
  5. Choose one word. I, me, them (goes down to nil, pale, sound slowly goes down to a steady thump)
  6. You often feel as though you need to justify yourself. Disagree, no opinion, agree
  7. You are _____ that you will be able to achieve your dreams.
    Optimistic, objective, critical
  8. Your travel plans are usually well thought-out. Disagree, neutral, agree
  9. Do you know what you want over the next 5 years? Yes, it’s all planned out, gave it some thought, will see how it goes
  10. You tend to go with the crowd, rather than striking out on your own. Disagree, mix, agree
  11. Do you like what you see? Yes, No but I want to continue improving, no I wish I could restart
  12. Only you yourself know what you want. Do you see yourself? -> restart

I planned for the user to play through the entire round of questions, with each answer adding a different, unique layer onto the graphic, creating a different shape and colour for each player by the end of the game.

Technically, I used jit.gl.text and banged the questions (stored as messages) at intervals.

Screen Shot 2016-04-25 at 7.39.11 AM

Here is a small portion of my subpatch for the questions, which were shown on the computer screen during the actual artwork.
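Since the Max patch itself is visual, here is a rough Python sketch of the same question-sequencing idea, purely as an illustration: the question list, the interval and the read_answer callback below are placeholders, not values from the actual patch.

```python
import time

# Placeholder questions; the real patch stores these in message objects
# and displays them through jit.gl.text.
QUESTIONS = [
    ("How are you feeling today?", ["optimistic", "no opinion", "disengaged"]),
    ("You do not mind being the centre of attention.", ["no", "maybe", "yes"]),
]

def run_quiz(read_answer, interval=2.0):
    """Show each question, wait for a dance-pad press, then move on after an interval."""
    answers = []
    for text, options in QUESTIONS:
        print(text, "(", " / ".join(options), ")")
        answers.append(read_answer())  # blocks until a pad step is registered
        time.sleep(interval)           # roughly the 'bang at intervals' behaviour
    return answers
```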

 

Graphics

The changes in shape were made by sending messages (e.g. prepend shape, then torus) to each individual attrui. In brief, the following were altered: shape, colour, how fast or slow the graphics ‘vibrate’, the range they expand to, and how quickly or slowly they move. The graphics were also rendered and altered in real time.
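As a hedged illustration of that mapping (not the actual patch), each answer can be thought of as a small dictionary of attribute changes; the attribute names and values here are hypothetical stand-ins for the messages sent to the attrui objects.

```python
# Hypothetical answer-to-attribute mapping. In the patch these are messages
# such as "shape torus" prepended and sent to attrui / jit.gl.gridshape.
ANSWER_EFFECTS = {
    0: {"shape": "torus",  "color": (1.0, 0.2, 0.2, 1.0), "vibrate_rate": 1.5},
    1: {"shape": "sphere", "color": (0.8, 0.8, 0.8, 1.0), "vibrate_rate": 1.0},
    2: {"shape": "plane",  "color": (0.2, 0.2, 1.0, 1.0), "vibrate_rate": 0.5},
}

def apply_answer(graphics_state, answer):
    """Layer the chosen answer's effects onto the running graphics state."""
    graphics_state.update(ANSWER_EFFECTS[answer])
    return graphics_state
```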

To create 3D graphics, jit.gen, jit.gl.gridshape, jit.gl.render, jit.gl.mesh and jit.gl.anim were used. I initially planned to use jit.gl.mesh to plot my own points and animate them (creating a more abstract, random shape), but after 5 days of trying I could not get it to work, so the final project turned out differently from what I expected. However, I am still pleased with the final outcome.

Screen Shot 2016-04-25 at 7.29.13 AM Screen Shot 2016-04-25 at 7.29.49 AM Screen Shot 2016-04-25 at 7.29.54 AM

Here is my patch for reference.

The graphics were the most challenging part of the entire project, making or breaking it. Perhaps it was not the most brilliant idea to foray into the unfamiliar territory of OpenGL, but as I have always been interested in 3D graphics, it turned out to be an interesting experience.

 

Sensors/Teabox

Screen Shot 2016-04-25 at 7.43.59 AM

3 dance pads were used, allowing participants to send their chosen responses to the questions into Max/MSP. Only 3 options were offered per question, as this seemed more intuitive for the average user.

During the actual tryout on Friday 22nd April, I realised that the trigger was far too sensitive and the questions went by far too quickly. That afternoon, I added the objects below to slow down the sensing:

Screen Shot 2016-04-25 at 7.43.41 AM

The counter value can be changed depending on how fast or slow you prefer the trigger to be.
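Conceptually this is just a debounce. A minimal Python sketch of the idea follows; the 500 ms lock-out is an illustrative figure, not the counter value used in the patch.

```python
import time

class DebouncedTrigger:
    """Ignore repeat triggers that arrive within `lockout` seconds of the last one."""

    def __init__(self, lockout=0.5):
        self.lockout = lockout
        self.last = 0.0

    def fire(self):
        now = time.monotonic()
        if now - self.last >= self.lockout:
            self.last = now
            return True   # pass the trigger through (advance to the next question)
        return False      # too soon after the last step: swallow the trigger
```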

 

Sounds

As the graphic rendering and the sounds were tied to each other, I decided to alter the sounds firstly to give a sense of being enveloped in the whole experience, and secondly to strengthen the connection between sound and graphics. As mentioned above, the vibration of the graphics follows the loudness (in decibels) of the sound itself.

A total of 3 sounds were used. The longer the participant plays, the ‘richer’ the sound becomes: it grows more shrill (higher pitch) and the soundtrack slows down, to name a few changes. The longer you play, the more inorganic and synthetic the sounds become. This ties in with the visuals, which start off as a red, beating object (reminiscent of a heart) but gradually take on inorganic, abstract forms. It is symbolic of the birth of a lifeform: you start off as a beating heart, and life’s experiences gradually shape you into a unique being.
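A rough sketch of that progression, with assumed numbers (the actual curves in the patch differ), might look like this:

```python
def sound_params(questions_answered, total=7):
    """Map progress through the quiz to a pitch shift and playback rate.

    Illustrative only: pitch rises and playback slows as the piece goes on,
    mirroring the shift from 'organic' to 'synthetic'.
    """
    progress = questions_answered / total      # 0.0 at the start, 1.0 at the end
    pitch_shift = 1.0 + 0.7 * progress         # up to ~70% higher pitch
    playback_rate = 1.0 - 0.4 * progress       # down to ~60% of the original speed
    return pitch_shift, playback_rate
```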

Here are the 3 sounds used:

 

 

See the final artwork in action:

Cat in the box! / Phidgets Servo-motor controller

image1

My little cat! Created out of recycled cardboard, with a mishmash of different sticky tapes. As the outer side of the cardboard had graphics printed on it, I decided to invert it, cutting the box up and taping the pieces back together to create my very own box.

IMG_3323 copy

Strings pull the lid closed, and a small piece of cardboard pushes the lid up. The cat itself is a makeshift cardboard cut-out of a cat.

IMG_3324 copy

As seen above, I used 2 servo motors in total: one to control the cat, the other to control the lid. Perhaps, with better crafting skills, I could have cut the number of servo motors down to one. To do this, however, I would need a larger box (the current box is palm-sized) and more attachments to connect the cat to the handle/gear that pushes the lid open. To simplify things, I decided to do without that.

IMG_3325 copy

IMG_3326 copy

Peekaboo! The cat comes out a few milliseconds after the lid opens.

IMG_3327 copy

IMG_3330 copy

Attaching the cat box to Max/MSP via a cable.

IMG_3341 copy

The inner workings of the cat box: one servo motor is raised in the air, the other is attached to the base.

When one speaks to the box, the lid opens, but there is no cat in sight, for this is one shy cat. Speak louder (what you say does not matter) and, if the threshold volume has been reached, the cat will appear for a short while before disappearing again. Otherwise, only the lid opens.
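A minimal sketch of that behaviour, assuming the microphone level arrives as a 0–1 amplitude and the servos take angles of 0 (closed/hidden) and 90 (open/out); the thresholds and servo interface are illustrative, not the values in the actual patch.

```python
SPEAK_THRESHOLD = 0.1   # any speech: the lid opens
SHOUT_THRESHOLD = 0.4   # loud enough: the cat pops out

def update(mic_level, lid_servo, cat_servo):
    """One control step of the cat box (hypothetical servo interface)."""
    lid_servo.set_angle(90 if mic_level > SPEAK_THRESHOLD else 0)
    if mic_level > SHOUT_THRESHOLD:
        cat_servo.set_angle(90)   # peekaboo!
    else:
        cat_servo.set_angle(0)    # the shy cat stays hidden
```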

 

See it in action:

In hindsight, I would have included additional feedback: the cat mewing back at the user via an extra soundtrack in the Max patch. As for the documentation, I would also have recorded human interactions (which I forgot to capture earlier), perhaps of the user drumming on the lid of the box or attempting to catch the cat.

 

Patch used:
Finalised Patch Cat in the box

Lights, action! / Gyroscope, lights and sound

This prototype is a further development of the previous ‘Swish the sound’, with the addition of Chauvet lights.

When the gyroscope is tilted at an angle, there are two responses:

1. Sound is played at the angle the gyroscope is tilted at, and

2. Red light intensifies at the corner the gyroscope is tilted at, washing out the green

Sound is spatialised using ambisonics, while the lights are controlled by scaling the x and y coordinates of the gyroscope.
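The scaling itself is simple; here is a hedged Python sketch of the idea, assuming tilt values between -1 and 1 on each axis and DMX-style 0–255 intensities (the actual ranges in the patch differ).

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear scaling, analogous to Max's [scale] object."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def red_levels(x, y):
    """Map gyroscope tilt to a red intensity for each corner light.

    The more the tilt points towards a corner, the brighter its red channel,
    washing out the green base colour.
    """
    corners = {
        "top_right":    max(x, 0) * max(y, 0),
        "top_left":     max(-x, 0) * max(y, 0),
        "bottom_right": max(x, 0) * max(-y, 0),
        "bottom_left":  max(-x, 0) * max(-y, 0),
    }
    return {name: int(scale(v, 0, 1, 0, 255)) for name, v in corners.items()}
```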

Screen Shot 2016-03-28 at 9.23.14 PM

Screen Shot 2016-03-28 at 9.23.39 PM

Scaling of the gyroscope was slightly different (i.e. improved) from the previous sound/graphics/gyroscope patch. Now, the greater the tilt angle, the greater the intensity of the red lights. However, several improvements could be made:

  • Like the ambisonics, which transition smoothly when the gyroscope tilt changes, the transitions between the different Chauvet lights could be smoothed out.
  • Perhaps the intensity of the ‘chosen’ Chauvet light could also be dimmed. I tried this, but could not manipulate the lighting so that it stopped blinking (i.e. setting a minimum threshold).

Idea Development (Final Project)

Just putting it here for recording purposes! Will work further on the project idea.

Idea:
Labyrinth
My idea comprises a physical maze onto which an image of the user running is projected. The aim of the game is to catch the doll, which is physically present in the labyrinth and made mobile via motors and sensor-automated responses. Once the user achieves the end goal of catching the doll, the doll stops moving.

Projection mapping will be applied to the doll in addition to the user’s running figure, to give the doll a living feel (projecting her face and her various emotions) and to increase the feedback given to the user. The longer the user takes to catch the doll, the more the doll’s face morphs beyond recognition.

The doll’s facial features will be inspired by the Japanese wooden doll.

12516261_1084485504934929_708975986_n12443093_1084485581601588_29574177_n
Projection mapping of the user will use stored footage of a human stick figure (the avatar). To trigger movement of the avatar, the user steps on 4 square pads, similar to the dance sectors of arcade dance games.

Feedback:
– find a reason why the doll moves
– Reason why you want to mix physical & projection together
– Maybe buy a remote-control car and put the doll on it (easier than programming it)
– Controlling the maze will be an issue
– Maybe the maze will be projected
– Sphero ball to replace the doll. SDK. SPRK edition, figure out how to access the code
– Feedback when touching walls etc

Swish the sound! / Documentation, Process

A week ago, we had our first experience matching the gyroscope’s movement with the amplification of 4 different speakers – one at each corner of the room.

Here is the previous patch I did, which matched the gyroscope’s pointed direction (top right, bottom right, top left, bottom left) to the speakers in the corresponding corners of the room. For example, pointing to the top right triggers the top-right speaker. When triggered, a speaker switches on; when not triggered, the sound from that particular speaker switches wholly off.

Screen Shot 2016-02-29 at 9.38.26 PM

Comments from the floor suggested that scaling the volume, instead of directly switching each speaker on/off, would allow for a more ‘flow-y’ effect when moving between speakers. At present, the speakers were discrete: individually separate and distinct.

In addition, the randomised coloured rectangle was indeed distracting. Below is a sneak peek at how it looked:

One perplexing issue with the (x, y) values was that they were not stable enough, so the distinction between the third and fourth speakers was unclear. Hence, switching between speakers was not accurate for those 2 corners.

Perhaps the values followed a log curve instead of a linear function, so simply isolating particular sections of the x or y range and mapping them to the speakers remained inaccurate.

From here, I decided to try converting the log curve into a linear one by using angles. I used this equation:

‘If tan θ = b/a, then θ = tan⁻¹(b/a)’

where b is the side of the triangle opposite the angle, and a is the side adjacent to the unknown angle. However, I fixed the vertex of the unknown angle at a given reference point on the x-axis, so that angles can be differentiated across all 4 quadrants.

locate820160229061157

 

Meanwhile, please refer to the below patch:

Screen Shot 2016-02-29 at 9.24.29 PM

I used ‘atan’ to find the angle in radians, then converted it to degrees by multiplying by 57.2958. Using ‘split’, I matched each angle range to the ‘gain’, or volume, of each soundtrack. I also tried assigning 4 different soundtracks to the 4 speakers (which makes it easier to identify which speaker is playing), but ultimately decided to stick to 1 soundtrack. Each sound, however, was individually recorded from real life.
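For reference, the same angle-to-gain logic written out in Python (a sketch only, using atan2 to cover all 4 quadrants rather than the fixed reference point used in the patch):

```python
import math

def angle_degrees(x, y):
    """Angle of the gyroscope vector in degrees, 0-360 (atan2 handles all 4 quadrants)."""
    return math.degrees(math.atan2(y, x)) % 360   # 57.2958 is simply 180/pi

def speaker_gains(angle):
    """Give the full gain to whichever 90-degree quadrant the angle falls into.

    This mirrors splitting the angle into ranges with [split]; a smoother version
    would crossfade between adjacent speakers instead of switching hard.
    """
    gains = {"top_right": 0.0, "top_left": 0.0, "bottom_left": 0.0, "bottom_right": 0.0}
    if angle < 90:
        gains["top_right"] = 1.0
    elif angle < 180:
        gains["top_left"] = 1.0
    elif angle < 270:
        gains["bottom_left"] = 1.0
    else:
        gains["bottom_right"] = 1.0
    return gains
```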

However, the angles, while calculated correctly, still tended to jump around, making the change in volume for all speakers jittery. For recording purposes, I decided to stick to my initial patch, where each speaker is turned on individually, and will continue troubleshooting the angles at a later date. Potential reasons for the jumping include the scaling of the angles being too small or too large, or the ‘boundaries’ of the mapping being too wide or too narrow, making the change in angles too steep or too quick.

As for my graphics, I decided to play with jit.gl.gridshape to create 3D shapes. My intention was to have a sphere pivoting in 3D space. However, when playing with the z-axis, it was difficult to alter the x, y coordinates specifically to move along the z-axis, so I decided to focus on a 2D visualisation of the sphere instead. Initially it worked perfectly, with the sphere moving in the direction of the gyroscope, but despite that early success, an unknown error cropped up the next day and I could not get the sphere to change its position. I also played around with jit.gl.lua, Lua being a scripting language that can be used inside Max/MSP. I wanted to use the x, y coordinates to replace the mouse click that activated the graphics within the jit.window, but was unable to figure out the mouse-click function, which seemed to differ from the one in jit.lcd.

Therefore, I decided to stick to what I did initially: using jit.lcd to draw a moving rectangle. This time round, however, I fixed the size and colour of the rectangle so that the graphics would not be too flashy.

“Music Instrument” [the Tun-tun]: Final Product / Assignment 2

thumb_IMG_0992_1024

thumb_IMG_0993_1024

thumb_IMG_0994_1024

thumb_IMG_0995_1024

thumb_IMG_0998_1024

Depicted above is the final prototype of the tun-tun.

Functions are as follows (from top down):

  1. Tapping the top of the head – triggers a beat once
  2. Pulling the left ear – controls the volume and triggers the melody once
  3. Pulling out/pressing hard on the tip of the tongue – triggers a voice on repeat; pull it again to switch it off
  4. Sliding the “voice box” up and down – changes the pitch of the sound effect

All sounds were recorded from real life; however, they do not sound melodious when mixed together. New sounds can be loaded in to replace the current ones.

Physically, the product stands at my chest level.

thumb_IMG_1002_1024

thumb_IMG_1003_1024

The Max/MSP patch is depicted below:

Screen Shot 2016-02-15 at 6.56.41 PM

See it in action:

 

“Music Instrument” [the Tun-tun]: Prototype / Assignment 2

A singing head!

20160201090933

tuntun

Image of tun tun taken from here

Does it not remind you of a tun-tun (pig-stick used by the Iban people in Borneo/Malaysia to lure pigs into traps)?

There is much physical resemblance between the sketch draft and the actual object, yet the inspiration was not drawn from the tuntun; the sole commonality is the name.

My project, aptly named “Tuntun”, features a sphere-shaped, human-like head with controls placed around it (e.g. mouth, top of head, ears) to mimic a human making sounds with their own facial features.

Below is a sketch of the areas with sensors:

Sketch sensors

At present, the ‘head’ is not yet placed in the right order and position. Further improvements to the patch are still needed.

20160201090951 20160201091051 20160201091103

The patch is currently incomplete, but here is a quick insight into some parts of it:

I used Gizmo~, Buffer~ and Groove~ in place of playlist. Certain tweaks are still required; for instance, the song stops abruptly when the trigger switches the toggle off. I am trying to include a timer, or delay, to allow the entire soundtrack to play before it switches off.
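The idea of the fix, sketched in Python purely as an illustration (in the patch this would be a delay or timer object, and the track length would come from the buffer):

```python
import threading

def stop_after_full_play(stop_playback, track_seconds):
    """Delay the stop message until the soundtrack has played through once.

    `stop_playback` and `track_seconds` are placeholders for the patch's
    toggle-off message and the buffer's length in seconds.
    """
    threading.Timer(track_seconds, stop_playback).start()
```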

“Radio” [the Singing Jacket] / Assignment 1

Finalised Patch (the singing jacket)

The controls are used to ‘play a radio’ from a playlist. Background music (the Soincidence soundtrack) plays throughout the entire duration, while toggling the various sensors on the jacket triggers an additional sound from the playlist above.

Sensors/Feedbacks:

  1. Pressure Sensor (large) placed in pocket:
    Switch it on and off, pause, resume

    whatsapp 3

  2. Bending Sensor placed at wrist:
    Bend Wrist to change tracks

    whatsapp 2

  3. Gyroscope placed at nape of neck:
    Function 1 – [Front/back] Bend back and forth to change volume (The more you bend, the louder it is)
    Function 2 – [Right/left] Bend to your right to change pitch (high pitch with more obvious bend)

    whatsapp 1

Combine all actions and create your own unique song.
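As a rough illustration of the gyroscope mapping in point 3 above (tilt values assumed to lie between -1 and 1; the scaling in the actual patch may differ):

```python
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def jacket_controls(front_back_tilt, left_right_tilt):
    """Sketch of the nape-mounted gyroscope mapping.

    Bending further forward/back raises the volume; bending to the right
    raises the pitch. The values and curves here are assumptions.
    """
    volume = clamp(abs(front_back_tilt), 0.0, 1.0)           # more bend = louder
    pitch = 1.0 + 0.5 * clamp(left_right_tilt, 0.0, 1.0)     # right bend = higher pitch
    return volume, pitch
```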

 

Comments, and reflections:

  • Prof Demers commented that not having the actions ‘fixed’ by physical boundaries would allow for greater freedom and, subsequently, more ‘fun’ in playing with the jacket. However, it is risky, as the feedback is harder to control
  • Creating 2 different feedbacks for the gyroscope is not ideal: while activating one function, the other function is inadvertently activated as well (unwanted feedback)
  • It is difficult to control 2 feedbacks that require similar actions to activate (the gyroscope changing both pitch and volume)
  • Swinging the hands to activate the bend sensor felt unnatural
  • On reflection, perhaps limiting the range, or narrowing the activation threshold, would help control the feedback and simplify how it is activated
  • I could incorporate the zip or hood of the jacket, e.g. zipping the zipper or wearing the hood
  • Perhaps some sensors could be placed on the user’s body, instead of solely on the jacket as at present

See it in action:

 

Assignment 2: Opto-Isolator ◔_◔

Process:

Webcam tracks face > x and y coordinates of face > link x coordinates to video frame
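The same pipeline can be sketched outside Max, for example with OpenCV in Python; the file name and the simple x-to-frame mapping below are placeholders, not the actual patch.

```python
import cv2

# Placeholder video file; the real patch used Max's face-tracking and playback objects.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
video = cv2.VideoCapture("eye_clip.avi")
total_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    faces = face_cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Map the face's horizontal position to a frame index in the video
        frame_idx = int(x / frame.shape[1] * (total_frames - 1))
        video.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, eye_frame = video.read()
        if ok:
            cv2.imshow("roving eye", eye_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cam.release()
video.release()
cv2.destroyAllWindows()
```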

This patch hit numerous issues when run on my laptop. The video file could not be read unless it was in AVI format, but converting it to AVI made Max crash. In the end, I had to use the school’s Macs to run my patch.

Patch the roving eye

Figure 1: My patch

Issues with patch:

  • Lag in the output video
  • Face tracking not accurate enough
  • Patch possibly too complex, hence the lag?