A Collection of Consciousness / End of Semester Project

Project Brief: A Collection of Consciousness utilises a question-and-answer format to probe a participant’s subconscious – psychological and emotional state alike. It then shows the participant a visual and auditory representation of this subconscious.

The setup: a total of 2 screens – one large projector screen, and a smaller computer screen for reading the questions. There is also a 3-step ‘dance pad’ for participants to select their answers.

2016-04-21 17.58.29

 

An example of a computer graphic shown on screen:

2016-04-21 17.58.42

 

The ‘dance pad’ was situated right in the centre, with a clear view of the projector screen. However, participants had to keep turning their heads sideways to read the questions on the computer screen.

2016-04-21 17.57.20

 

Initial idea:

2016-04-24 03.18.50

Initially, I planned to have the laptop right in front of the screen, but decided against it as it would block the visuals. I also planned to use motion sensing to trigger the selection of choices, but decided against that too – past experience has taught me that changes in lighting affect the sensing, and that motion sensing as a whole is not very accurate. Hence, I switched to a touch sensor as my means of triggering the responses.

 

Questions

I also planned to have a total of 20 questions at the start. Later, I scaled this down to 7 questions, as the patch was becoming too large. I based the style of my questions on the ISFG Personality test. I also researched the different determinants of one’s consciousness – or, more accurately, psychological state – and came up with the following:

2016-04-24 03.18.30

Thus, I decided to base my questions on these 3 factors:
1. Mental Wellness
2. Emotional Wellness
3. Intellectual Wellness

I understand that the limited number of questions might not produce an accurate determination of one’s psychological state, but at present it covers all 3 factors as a whole.

Here are my initially planned question ideas:

  1. How are you feeling today? Optimistic (faster), no opinion (slower), disengaged (more jazzy)
  2. You do not mind being the centre of attention. No, maybe, yes (change of beat)
  3. You get tensed up and worried that you cannot fulfil expectations (1-3) (change of colour and size, and zamp)
  4. You cannot finish your work on time – what do you do? Continue working hard past the deadline, give up, blame and berate yourself (ADD ON NEW SOUNDTRACK, faster, slower, becomes white noise etc)
  5. Choose one word. I, me, them (goes down to nil, pale, sound slowly goes down to a steady thump)
  6. You often feel as though you need to justify yourself. Disagree, no opinion, agree
  7. You are _____ that you will be able to achieve your dreams.
    Optimistic, objective, critical
  8. Your travel plans are usually well thought-out. Disagree, neutral, agree
  9. Do you know what you want over the next 5 years? Yes, it’s all planned out, gave it some thought, will see how it goes
  10. You tend to go with the crowd, rather than striking out on your own. Disagree, mix, agree
  11. Do you like what you see? Yes; no, but I want to continue improving; no, I wish I could restart
  12. Only you yourself know what you want. Do you see yourself? -> restart

I planned for the user to play through the entire round of questions, with each answer adding a different layer onto the graphic – creating a unique shape and colour for each player by the end of the game.

Technically, I used jit.gl.text and banged the questions (stored in message objects) at intervals.
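Although the patch itself is built visually in Max, the sequencing logic behind it is simple: step through the stored questions one at a time, advancing whenever an answer comes in. Below is a minimal, illustrative Python sketch of that logic – the class and the abbreviated question texts are stand-ins I wrote for clarity, not part of the actual patch:

```python
# Illustrative sketch of the question-sequencing logic in the patch.
# In Max this is done with message objects banged into jit.gl.text at
# intervals; the class and question texts below are stand-ins.

QUESTIONS = [
    "How are you feeling today?",
    "You do not mind being the centre of attention.",
    "You get tensed up and worried that you cannot fulfil expectations.",
    # ... remaining questions ...
]

class QuestionSequencer:
    def __init__(self, questions):
        self.questions = questions
        self.index = 0

    def current(self):
        """Return the question currently shown on the computer screen."""
        return self.questions[self.index]

    def answer(self, choice):
        """Record an answer (0, 1 or 2 from the dance pad) and advance."""
        print(f"Q{self.index + 1}: answered {choice}")
        self.index += 1
        return self.index < len(self.questions)  # False once the round is over

seq = QuestionSequencer(QUESTIONS)
print(seq.current())   # show the first question
seq.answer(1)          # middle pad pressed
print(seq.current())   # show the next question
```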

Screen Shot 2016-04-25 at 7.39.11 AM

Here is a small portion of my subpatch for the questions, which were shown on the computer screen during the actual presentation of the artwork.

 

Graphics

The changes in shape were made by sending messages (e.g. prepend shape, followed by torus) to each individual attrui. In brief, the following were altered: the shapes, the colours, how fast or slowly the graphics ‘vibrate’, the range over which they expand, and how quickly they move. The graphics were also rendered and altered in real time.
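To make the layering idea concrete, here is a rough Python sketch of the kind of answer-to-attribute mapping those messages perform. The attribute names loosely follow jit.gl.gridshape, but the specific values and the per-question branches are illustrative assumptions, not values taken from my patch:

```python
# Rough sketch of the answer-to-attribute mapping performed by the
# "prepend <attribute>" messages sent to each attrui in the patch.
# Attribute names loosely follow jit.gl.gridshape; the values and
# per-question branches are illustrative, not taken from my patch.

state = {
    "shape": "sphere",            # starting form: a red, beating sphere
    "color": (1.0, 0.1, 0.1, 1.0),
    "vibration_rate": 1.0,        # how fast the form pulses
    "expand_range": 0.5,          # how far it expands
    "speed": 1.0,                 # how quickly it moves
}

def apply_answer(question_index, choice):
    """Layer one answer's changes onto the running graphic state."""
    if question_index == 0:       # e.g. "How are you feeling today?"
        state["speed"] = [1.5, 0.8, 1.0][choice]
    elif question_index == 1:     # e.g. the "centre of attention" question
        state["shape"] = ["torus", "sphere", "opencylinder"][choice]
    elif question_index == 2:
        state["expand_range"] += 0.2 * choice
        state["color"] = (1.0, 0.1 + 0.3 * choice, 0.1, 1.0)
    # ... one branch per question, each adding its own unique layer ...
    return state
```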

To create the 3D graphics, jit.gen, jit.gl.gridshape, jit.gl.render, jit.gl.mesh and jit.gl.anim were used. I initially planned to use jit.gl.mesh and plot my own points to animate (to create a more abstract, random shape), but after spending a total of 5 days trying to figure it out I failed to, so the final project turned out differently from what I expected. However, I am still pleased with the final outcome.

Screen Shot 2016-04-25 at 7.29.13 AM Screen Shot 2016-04-25 at 7.29.49 AM Screen Shot 2016-04-25 at 7.29.54 AM

Here is my patch for reference.

The graphics were the most challenging portion of the entire project – the part that would make or break it. Perhaps it was not the most brilliant idea to foray into the unfamiliar territory of OpenGL, but as I have always been interested in 3D graphics as a whole, it became an interesting experience for me.

 

Sensors/Teabox

Screen Shot 2016-04-25 at 7.43.59 AM

3 dance pads were used, allowing participants to send their chosen responses to the questions into Max/MSP. Only 3 options were offered, as this seemed more intuitive for the average user.

During the actual tryout on Friday 22nd April, I realised that the trigger was far too sensitive, and the questions were being skipped through far too quickly. Hence, that afternoon, I added the objects below to slow down the sensing:

Screen Shot 2016-04-25 at 7.43.41 AM

The counter value can be changed depending on how fast or slow you prefer the trigger to be.
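The screenshot above shows the Max objects; the underlying idea is a simple debounce – ignore repeated pad presses until a set interval has elapsed. Below is a minimal Python sketch of the same idea, with an illustrative time threshold standing in for the counter value (the figure is an example, not the value I actually used):

```python
import time

# Minimal debounce sketch mirroring the counter added to the patch:
# a new pad press only registers once enough time has passed since
# the previous one. DEBOUNCE_SECONDS plays the role of the counter
# value and is an illustrative figure.
DEBOUNCE_SECONDS = 1.5

_last_trigger = 0.0

def pad_pressed(choice):
    """Return the choice if the press is accepted, or None if ignored."""
    global _last_trigger
    now = time.monotonic()
    if now - _last_trigger < DEBOUNCE_SECONDS:
        return None               # too soon after the previous press
    _last_trigger = now
    return choice
```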

 

Sounds

As the graphic rendering and the sounds were tied to each other, I decided to alter the sounds firstly to give the sense of being enveloped in the whole experience, and secondly to further strengthen the connection between sound and graphics. As mentioned above, the graphics vibrate in accordance with how loud, in decibels, the sound itself is.

A total of 3 sounds were used. The longer the participants played, the ‘richer’ the sound became: it grew more shrill (higher in pitch) and the speed of the soundtrack decreased, to name a few of the changes. The longer you play, the more inorganic and synthetic the sounds become. This ties in with the visuals, which start off as a red, beating object (reminiscent of a heart) but gradually take on inorganic, abstract forms. This was symbolic of the birth of a lifeform – you start off as a beating heart, and life’s experiences gradually shape you into a unique being.
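As a rough illustration of this coupling, the sketch below maps the soundtrack’s current loudness to the vibration amount of the graphic, and makes the sound progressively more shrill and slower the longer the round goes on. The function names, scaling factors and parameter keys are my own illustrative choices; in the patch this is done with Max objects rather than code:

```python
# Illustrative sketch of the sound/graphics coupling described above.
# Function names, scaling factors and parameter keys are assumptions;
# in the patch the audio level is fed into the rendering attributes.

def vibration_from_amplitude(amplitude_db, base=0.2, gain=0.02):
    """Map the soundtrack's current loudness (dB) to a vibration amount."""
    return base + gain * max(amplitude_db, 0.0)

def evolve_sound(params, questions_answered):
    """The longer the participant plays, the shriller and slower the sound."""
    params["pitch_shift"] = 1.0 + 0.1 * questions_answered              # more shrill
    params["playback_rate"] = max(0.5, 1.0 - 0.05 * questions_answered)  # slower
    return params
```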

Here are the 3 sounds used:

 

 

See the final artwork in action:

ArtScience Museum Reflections

It was truly an eye-opener to see interactive works in real life, and to connect the examples shown in class with the works experienced in teamLab’s exhibition. The works were all very engaging – user-friendly, family-friendly, and with media used strongly and integrally throughout. The exhibits also incorporated elements of gameplay, a compelling way to induce interactivity, especially as the museum is geared towards families and the general public.

 

The Creation of Space

All the artworks utilised light and sound as a medium for narration. The use of stunning visuals, especially in Flowers and People, Cannot be Controlled but Live Together – A Whole Year per Hour, was particularly strong and truly brought the space to life. The reliance on space, visuals and the accompanying background sound highly engaged the users’ sight, hearing and sense of time. While the interactivity was slow, and not obvious unless one took a closer look (at how quickly the flowers wither upon touch), the content (the flowers, nature) tied in seamlessly with the narrative, and it was a pleasant experience as a user.

 

Visualising Computer Data

In many interactive media art pieces, I realised that visualising computer data is commonly utilised. teamLab’s exhibit Universe of Water Particles is a classic example of computer graphics being used to create a visually appealing artwork, though it has a lower degree of interactivity compared with the other artworks in the permanent exhibition.
The juxtaposition of nature and the unnatural (i.e. computerised work) is clearly seen in this piece: a waterfall, part of the natural landscape, is rendered using computer graphics. It poses the question of whether nature is no longer what it seems, and whether our natural landscape has gradually been overtaken and deemed replaceable by artificial data. The tension between nature and the unnatural will always be present, and at times interactive media artists bring realistic items from our real surroundings into their works, edging into this debatable topic.

The Garden of the Forking Paths Reflection

The Garden of Forking Paths is an interesting read that, despite its subject matter, follows a linear narration. In the novel, Ts’ui Pen alleges that time is relative, and that at any one point or period there is an infinite series of possibilities, all interlinked – “diverging, converging and parallel times”. He states that time exists like a web, and that at any one moment every possibility of action is possible. His proposed theory remains abstract, yet it shows how the unpredictability of man interplays with the curious linearity, or non-linearity, of time.

Within the text itself, the motif of time – frequently announced through Richard Madden’s hot pursuit of the narrator – holds a central place. Time is very much regulated by the narrator through his meticulous note-taking. In the text itself, the word ‘time’ is used rarely, except in the latter half where Albert gives his explanation. The main story follows an overall linearity: the narrator escapes from Richard Madden, who follows in hot pursuit; he then finds Albert, discusses the novel with him, and kills him when Richard Madden catches up. However, within this content there are many different ‘paths’ of possibilities for the different characters, and it can almost be seen as a game.

For instance, Richard Madden could have chosen not to pursue the narrator, or could have formulated a different method rather than the cat-and-mouse chase. Albert could have chosen not to discuss the novel with the narrator, had he sensed his murderous intent. And if the narrator had not made a conscious plan to murder Albert, Richard Madden might not have arrested him. In every scenario there is always an option, and the availability of these options creates a symbolic labyrinth, with some choices actively available and others conspicuously absent at the same time. Thus, it can be argued that time, like these various possibilities, does not follow a single line; rather, its interweaving and connectivity, to some extent, opens up a conceptual space.
Lastly, the text can be said to have an interactive, generative style. Interactivity comes through the choice of options, from which possibilities, or routes, are then generated. It is this way of being generated that goes against what we have been trained to recognise – a linear sense of progression in which the narrative is somewhat fixed. Without a fixed route, the choices might even bring you back to the past; hence the text can have a linear style and a non-linear style concurrently.

Gone Home Reflections

Gone Home is a first-person, adventure-style exploration game that utilises new media to focus on a strong narrative plot. A divergence from traditional video games despite its story being set in the 1990s, Gone Home remains very much at home on its new media platform – a distinctive crossbreed that uses the best characteristics of its precedents as part of its narrativity and interactivity.

 

The reader as an active and passive participant.

The reader in Gone Home takes on numerous roles – viewer, reader and participant. He participates and is able to make choices about the paths he takes, but ultimately the guided voice narration of Sam’s journal shapes the entire experience. Depending on his sequence of choices, the player can create his own version of the narration. His participation is important, yet he remains a passive user, unable to change the gist of the whole story. It is this interactivity, however, that draws the player closer to Sam’s story and guides its revelation to conform to his likes and wants. There are multiple storylines – Sam getting to know Lonnie, or going to camp, and so on – clearly interlinked within this seemingly linear narrative, and ultimately these multiple stories add layers of emotion to Sam and Kaitlin. Indeed, they made the story more emotionally satisfying and engaging to me as a player.

 

Realism in Gone Home

Characteristic of today’s sophisticated graphics systems, Gone Home employs a good amount of realism – to the extent of being photo-realistic – which draws players closer to the plausibility of the narration and, at the same time, makes the story much more engaging. One interesting feature of the surrounding clickable objects is that, while they draw the player in, the narrative remains one-sided: the player’s ability to respond beyond the given, voiced narrative is almost nil, so the game treads a thin line in terms of narrativity. However, the sheer amount of information helps to offset this, since the player has to sift through it and, in doing so, indirectly feeds back into the narrative itself.

 

By fostering active participation while utilising realism in its visual narrative, Gone Home seemingly follows a linear narration, yet the fragmentation of narratives produced by the player’s sequence of choices sets it apart from the traditional video game. It is a breakthrough against traditional video games, yet it remains the medium of a traditional story, with some semblance of realism.

Cat in the box! / Phidgets Servo-motor controller

image1

My little cat! Created out of recycled cardboard, with a mishmash of different scotch tapes. As the outer side of the cardboard had graphics printed on it, I decided to invert it – cutting the box up and taping the pieces back together to create my very own box.

IMG_3323 copy

Strings pull the lid closed, and a small piece of cardboard pushes the lid up. The cat itself is a makeshift cardboard cat shape.

IMG_3324 copy

As seen above, I used a total of 2 servo motors – one to control the cat, the other to control the lid. Perhaps, with better crafting skills, I would be able to cut the number of servo motors down to one. To do this, however, I would need a larger box – currently, the box is palm-sized – and more ‘attachments’ to connect the cat to the arm that pushes the lid open. To simplify things, I decided to do without this.

IMG_3325 copy

IMG_3326 copy

Peekaboo! The cat comes out a few milliseconds after the lid opens.

IMG_3327 copy

IMG_3330 copy

Attaching the cat box to Max/MSP via a cable.

IMG_3341 copy

The inner workings of the cat box – one servo motor is raised in the air, the other attached to the base of the box.

When one speaks to the box, the lid opens – but there is no cat in sight, for this is one shy cat! Speak louder – what you say does not matter. If the threshold volume has been reached, the cat will appear for a short while and then disappear. Otherwise, only the lid will open.
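In outline, the behaviour boils down to two volume thresholds driving two servos. The sketch below is an illustrative Python outline of that logic; move_lid_servo and move_cat_servo are hypothetical stand-ins for the Phidgets servo-controller messages sent from the Max patch, and the decibel values and angles are made-up examples rather than the ones actually used:

```python
import time

# Illustrative outline of the cat-box behaviour. move_lid_servo and
# move_cat_servo are hypothetical stand-ins for the Phidgets servo
# messages sent from the Max patch; the thresholds and angles are
# made-up examples.
LID_OPEN_DB = 40.0    # any speech opens the lid
CAT_OUT_DB = 65.0     # only a loud voice coaxes the shy cat out

def on_audio_level(level_db, move_lid_servo, move_cat_servo):
    if level_db < LID_OPEN_DB:
        return                    # too quiet: nothing happens
    move_lid_servo(80)            # open the lid
    if level_db >= CAT_OUT_DB:
        time.sleep(0.05)          # the cat pops out a few milliseconds later
        move_cat_servo(90)        # push the cat up
        time.sleep(1.0)           # it appears for a short while
        move_cat_servo(0)         # then hides again
    move_lid_servo(0)             # close the lid
```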

 

See it in action:

In hindsight, I would have included an additional piece of feedback – the cat mewing back at the user, via a soundtrack added to the Max patch. As for the documentation, I would also have recorded some human interaction (which I forgot to capture earlier) – perhaps the user drumming on the lid of the box, or attempting to catch the cat.

 

Patch used:
Finalised Patch Cat in the box