Lights, action! / Gyroscope, lights and sound

This prototype is a further development of the previous ‘Swish the sound!’ prototype, with the addition of Chauvet lights.

When the gyroscope is tilted at an angle, there are two responses:

1. Sound plays from the direction in which the gyroscope is tilted, and

2. The red light intensifies at the corner towards which the gyroscope is tilted, washing out the green.

Sound is spatialised using ambisonics, while the lights are controlled by scaling the x and y coordinates of the gyroscope.
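To illustrate the scaling, here is a minimal sketch of the mapping logic in Python. The actual control lives in the Max MSP patch shown in the screenshots below; the gyroscope range of −1 to 1 per axis and the DMX value range of 0–255 are assumptions, not values taken from the patch.

```python
def corner_red_levels(x, y):
    """Map a gyroscope tilt (x, y), each assumed in [-1, 1], to red-channel
    DMX values (0-255) for the four corner lights."""
    x = max(-1.0, min(1.0, x))
    y = max(-1.0, min(1.0, y))
    # Split each axis into its positive and negative halves so that only the
    # corner being leaned towards receives a non-zero weight.
    right, left = max(0.0, x), max(0.0, -x)
    top, bottom = max(0.0, y), max(0.0, -y)
    weights = {
        "top_left": top * left,
        "top_right": top * right,
        "bottom_left": bottom * left,
        "bottom_right": bottom * right,
    }
    # The steeper the tilt, the brighter the red; the green channel could be
    # set to (255 - red) to get the "washing out" effect described above.
    return {corner: int(round(255 * weight)) for corner, weight in weights.items()}


print(corner_red_levels(0.8, 0.6))  # leaning top-right -> top_right is brightest
```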

Screen Shot 2016-03-28 at 9.23.14 PM

Screen Shot 2016-03-28 at 9.23.39 PM

The scaling of the gyroscope values is slightly different from (i.e. improved on) the previous sound/graphics/gyroscope patch: the greater the tilt angle, the greater the intensity of the red light. However, several improvements could still be made:

  • Like the ambisonics, which transition smoothly as the gyroscope tilt changes, the transition between the different Chauvet lights could be smoothed out (see the sketch after this list).
  • Perhaps the intensity of the ‘chosen’ Chauvet light could also be dimmed. I tried this, but could not manipulate the lighting so that it stopped blinking (i.e. by setting a minimum threshold).
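As a rough sketch of both improvements (in Python, outside Max MSP): glide towards the target level rather than jumping to it, and clamp the output to a floor so the light dims without blinking off. The smoothing factor and minimum level below are arbitrary, not values from the patch.

```python
class SmoothedLight:
    """One light channel that glides towards its target level instead of jumping."""

    def __init__(self, smoothing=0.1, minimum=20):
        self.level = 0.0
        self.smoothing = smoothing  # 0..1; smaller values = slower, smoother transitions
        self.minimum = minimum      # floor (in DMX units) that stops the light dropping out

    def update(self, target):
        # One-pole low-pass filter: move a fraction of the way towards the target
        # on every update, mirroring the smooth transitions of the ambisonic panning.
        self.level += self.smoothing * (target - self.level)
        # Clamp to the minimum so the 'chosen' light dims without blinking off.
        return max(self.minimum, int(round(self.level)))


light = SmoothedLight()
for target in (255, 255, 255, 0, 0, 0):
    print(light.update(target))  # climbs gradually, then eases back down to the floor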

Idea Development (Final Project)

Just putting this here for record-keeping purposes! I will work further on the project idea.

Idea:
Labyrinth
My idea comprises a physical maze, onto which a projected image of the user running will be cast. The aim of the game is to catch the doll, which will be physically present in the labyrinth and made mobile with motors and sensor-automated responses. Upon reaching the end goal of catching the doll, the doll will stop moving.

Projection mapping will also be applied to the doll, in addition to the user’s running figure, to give the doll a living feel (as her face and her various emotions are projected) and to increase the feedback given to the user. The longer the user takes to catch the doll, the more the doll’s face will morph beyond recognition.

The doll’s facial features will be inspired by the Japanese wooden doll.

12516261_1084485504934929_708975986_n
12443093_1084485581601588_29574177_n
Projection mapping of the user will consist of stored footage of a human stick figure (the avatar). To trigger movement of the avatar, the user has to step on 4 square pads, similar to the dance pads of arcade dance games.

Feedback:
– Find a reason why the doll moves
– Find a reason for mixing the physical and the projection together
– Maybe buy a remote-control car and put the doll on it (easier to do than programming it)
– Controlling the maze will be an issue
– Maybe the maze could be projected
– A Sphero ball could replace the doll (SDK, SPRK edition); figure out how to access the code
– Feedback when touching walls, etc.

Swish the sound! / Documentation, Process

A week ago, we had our first experience matching the gyroscope’s movement with the amplification of 4 different speakers – one at each corner of the room.

Here is the previous patch I did, which matched the gyroscope’s pointed direction (top right, bottom right, top left, bottom left) to the speaker in the corresponding corner of the room. For example, pointing top right triggers the top-right speaker. When triggered, a speaker switches on; when not triggered, the sound from that particular speaker switches wholly off.
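In plain terms, the behaviour of that earlier patch boils down to a quadrant check on the gyroscope’s (x, y) values. The sketch below is illustrative only, not the patch itself, and assumes x and y are centred on zero.

```python
def active_speaker(x, y):
    """Return which of the four corner speakers to switch on for a gyroscope
    reading (x, y), mirroring the discrete on/off behaviour of the earlier patch."""
    if x >= 0 and y >= 0:
        return "top_right"
    if x < 0 and y >= 0:
        return "top_left"
    if x < 0 and y < 0:
        return "bottom_left"
    return "bottom_right"


print(active_speaker(0.3, -0.7))  # -> "bottom_right": only that speaker plays
```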

Screen Shot 2016-02-29 at 9.38.26 PM

Comments from the floor suggested that scaling the volume, instead of directly switching each speaker on/off, would allow for a more ‘flowy’ effect when moving between speakers. At present, the speakers were discrete: individually separate and distinct.

In addition, the randomised colours of the rectangle were indeed distracting. Below is a sneak peek at how it looked:

One perplexing issue was that the (x, y) values were not stable enough, so the distinction between the third and fourth speakers was unclear; switching between speakers was therefore not accurate for two of the corners.

Perhaps the values followed a logarithmic curve rather than a linear function, so simply isolating particular sections of x or y and mapping them to the speakers remained inaccurate.

From here, I decided to try converting the log curve into a linear one by using angles. I used this equation:

‘If tan θ = b/a, then θ = tan⁻¹(b/a)’

Here b is the side of the triangle opposite the angle, and a is the side adjacent to the unknown angle. I also fixed the vertex of the unknown angle at a given point on the x axis, so that angles in all 4 quadrants can be differentiated.
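The same idea can be sketched with atan2, which takes the signs of both coordinates into account and therefore distinguishes all four quadrants directly. This is a Python illustration, not the Max patch:

```python
import math

def gyro_angle_degrees(x, y):
    """Angle of the tilt vector (x, y) in degrees, in the range [0, 360).

    atan2 handles the signs of both coordinates, so the four quadrants are
    distinguished without manually offsetting the vertex of the angle."""
    radians = math.atan2(y, x)       # result in -pi..pi
    degrees = math.degrees(radians)  # equivalent to multiplying by ~57.2958
    return degrees % 360             # wrap into 0..360 for easier splitting


print(gyro_angle_degrees(1, 1))    # 45.0  -> pointing top right
print(gyro_angle_degrees(-1, -1))  # 225.0 -> pointing bottom left
```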

locate820160229061157

 

Meanwhile, please refer to the patch below:

Screen Shot 2016-02-29 at 9.24.29 PM

I used ‘atan’ to find the angle in radians, then converted it to degrees by multiplying by 57.2958. Using ‘split’, I tried to match each range of angles to the ‘gain’, i.e. the volume, of each soundtrack. I also attempted to use 4 different soundtracks to correspond with the 4 speakers (which also makes it easier to identify which speaker is playing), but ultimately decided to stick to 1 soundtrack. Each sound, however, was individually recorded from real life.
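For reference, here is a hedged sketch (again in Python, outside Max) of how the angle could drive four continuously varying gains rather than hard splits, which is the ‘flowy’ behaviour suggested earlier. The speaker angles are assumed placements, and the crossfade shape is my own choice, not something from the patch.

```python
import math

# Assumed placement of the four corner speakers, in degrees.
SPEAKER_ANGLES = {"top_right": 45, "top_left": 135, "bottom_left": 225, "bottom_right": 315}

def speaker_gains(angle_degrees):
    """Continuous gains (0..1) per speaker, instead of hard on/off splits.

    A speaker is loudest when the tilt angle points straight at it and fades to
    zero by the time the angle reaches the neighbouring corner (90 degrees away)."""
    gains = {}
    for name, speaker_angle in SPEAKER_ANGLES.items():
        # Shortest angular distance between the tilt and this speaker.
        diff = abs((angle_degrees - speaker_angle + 180) % 360 - 180)
        # Equal-power style crossfade between adjacent corners.
        gains[name] = math.cos(math.radians(diff)) if diff < 90 else 0.0
    return gains


print(speaker_gains(45))  # only top_right at full gain
print(speaker_gains(90))  # top_right and top_left share the sound equally
```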

However, the angles, while calculated correctly, still tended to jump around, making the change in volume across all the speakers jittery. Hence, for recording purposes, I decided to stick to my initial patch where each speaker is switched on individually, and will continue troubleshooting the angles at a later date. Potential reasons for the jumpiness: the scaling of the angles may have been too small or too large, or the ‘boundaries’ of the graph may have been too wide or too narrow, making the change in angle too steep or too quick.
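One way to tame the jitter, sketched below, would be to low-pass filter the angle itself; smoothing the sine and cosine components separately avoids glitches at the 0°/360° wrap. This is an untested idea for later troubleshooting, not something in the current patch.

```python
import math

class AngleSmoother:
    """Low-pass filter for a jittery angle (in degrees).

    Smoothing the sine and cosine components separately avoids sudden jumps
    when the angle wraps around from 359 back to 0."""

    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing  # 0..1; smaller = heavier smoothing
        self.sin, self.cos = 0.0, 1.0

    def update(self, angle_degrees):
        rad = math.radians(angle_degrees)
        self.sin += self.smoothing * (math.sin(rad) - self.sin)
        self.cos += self.smoothing * (math.cos(rad) - self.cos)
        return math.degrees(math.atan2(self.sin, self.cos)) % 360


smoother = AngleSmoother()
for raw in (350, 10, 355, 5):  # noisy readings straddling the wrap point
    print(round(smoother.update(raw), 1))
```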

As for my graphics, I decided to play with jit.gl.gridshape to create 3D shapes. My intention was to have a sphere pivoting in 3D space. However, while playing around with the z-axis, it was difficult to alter the x, y coordinates specifically to move along the z-axis, so I decided to focus on a 2D visualisation of the sphere instead. Initially it worked perfectly, with the sphere moving in the direction of the gyroscope. Despite this initial success, an unknown error cropped up the next day, and I could not get the sphere to change its position. I also played around with jit.gl.lua, Lua being a scripting language that can be used inside Max MSP. I wanted the x, y coordinates to replace the mouse click that activated the graphics within the jit.window, but was unable to figure out the mouse-click function, which seemed to differ from the one in jit.lcd.

Therefore, I decided to stick to what I did initially: use jit.lcd to draw a moving rectangle. This time round, I would fix the parameters and colour of the rectangle, so that the graphics would not be too flashy.

“Music Instrument” [the Tun-tun]: Final Product / Assignment 2

thumb_IMG_0992_1024

thumb_IMG_0993_1024

thumb_IMG_0994_1024

thumb_IMG_0995_1024

thumb_IMG_0998_1024

Depicted above is the final prototype of the tun-tun.

Functions are as stated (from top to bottom); a rough sketch of this mapping follows the list:

  1. Tapping the top of the head – triggers a beat once
  2. Pulling the left ear – controls the volume and triggers the melody once
  3. Pulling out/pressing hard on the tip of the tongue – triggers a voice on repeat; pull it again to switch it off
  4. Sliding the “voice box” up and down – changes the pitch of the sound effect
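Purely for illustration, here is the four-way mapping written out as plain logic in Python. The sensor names, value ranges and action strings are all hypothetical; the real mapping lives in the Max MSP patch shown further down.

```python
def handle_sensors(sensors, state):
    """Map one frame of (hypothetical) sensor readings to the four behaviours above,
    returning the actions the patch would take. All names and ranges are made up."""
    actions = []
    if sensors["head_tap"]:                       # 1. tap on the top of the head
        actions.append("play beat once")
    if sensors["left_ear_pull"] > 0:              # 2. pulling the left ear
        actions.append(f"set volume to {sensors['left_ear_pull']:.2f}")
        actions.append("play melody once")
    if sensors["tongue_pulled"]:                  # 3. tongue tip toggles the looped voice
        state["voice_on"] = not state["voice_on"]
        actions.append("voice loop " + ("on" if state["voice_on"] else "off"))
    actions.append(f"set pitch from slider: {sensors['voice_box_slider']}")  # 4. pitch
    return actions


state = {"voice_on": False}
readings = {"head_tap": True, "left_ear_pull": 0.6, "tongue_pulled": False, "voice_box_slider": 0.3}
print(handle_sensors(readings, state))
```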

All sounds were recorded from real life; however, they do not sound melodious when mixed together. New sounds can be loaded in to replace the current ones.

Physically speaking, the product stands at my chest-level.

thumb_IMG_1002_1024

thumb_IMG_1003_1024

Max MSP patch as depicted below:

Screen Shot 2016-02-15 at 6.56.41 PM

See it in action:

 

“Music Instrument” [the Tun-tun]: Prototype / Assignment 2

A singing head!

20160201090933

tuntun

Image of tun tun taken from here

Does it not remind you of a tun-tun (pig-stick used by the Iban people in Borneo/Malaysia to lure pigs into traps)?

There is much physical resemblance between the sketched draft and the actual object, yet no inspiration was drawn from the tun-tun; their sole commonality is the name.

My project, aptly named “Tuntun”, features a sphere-shaped, human-like head with controls placed around it (e.g. the mouth, the top of the head, the ears) to mimic a human making sounds with their own facial features.

Below is a sketch of the areas with sensors:

Sketch sensors

At present, the ‘head’ is not yet assembled in the correct order and position, and the patch still leaves improvements to be desired.

20160201090951 20160201091051 20160201091103

The patch is currently incomplete, but here is a quick insight into some parts of it:

I used gizmo~, buffer~ and groove~ in place of playlist~. Certain tweaks are still required; for instance, the song stops playing abruptly when the trigger switches the toggle off. I am trying to include a timer, or delay, so that the entire soundtrack plays before it switches off.
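One possible way to think about that timer/delay, sketched outside Max in Python: work out how much of the buffer is left when the toggle is switched off and postpone the stop by that amount. The buffer length and timings below are made up for illustration.

```python
def stop_delay_ms(buffer_length_ms, play_started_at_s, stop_requested_at_s):
    """How long (in ms) to postpone the stop so the current pass of the soundtrack
    finishes before the toggle actually switches the sound off."""
    elapsed_ms = (stop_requested_at_s - play_started_at_s) * 1000
    remaining_ms = buffer_length_ms - (elapsed_ms % buffer_length_ms)
    return max(0.0, remaining_ms)


# e.g. a 3-second sample stopped 1.2 s in -> delay the stop by roughly 1800 ms
print(stop_delay_ms(3000, play_started_at_s=0.0, stop_requested_at_s=1.2))
```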

Project Proposal 2: Reveal

Chat Roulette

Inspiration was drawn from Chat Roulette, an online site where users chat with others anonymously and at random. The most rudimentary method of communication is speech; hence, I thought of going back to basics for this narration project.

 

The medium of choice remains an online website, where two users at any one time are able to log into the site. Their identities will remain anonymous until they decide to reveal them. In summary, the website will adopt a common online ‘messaging’ interface.

20160127193129

There will be 2 text boxes, one for each user (the users do not know each other) to write into.

20160127193135

Each user will occupy one ‘station’.

20160127193140

Above the text boxes, there will be a camera feed (the live webcam input from each user).

Template

Edit: after writing this post, I decided to change my layout to the one above, cutting down on ‘items’ I have no need for.

The gist of this project is for each user to try to reveal the other. Initially, each webcam feed will be fully obscured. Over time, little ‘pixels’ will be revealed from each webcam at random, such that the majority of the webcam input remains hidden, increasing the tantalising factor. Users will be able to communicate with each other via the chat boxes, and they are free to take on another persona, or hide their true identity from the other.
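As a rough illustration of the random reveal, here is a Python sketch. It assumes the webcam frame arrives as a NumPy array (height × width × 3, e.g. via OpenCV), and the grid size and reveal rate are arbitrary choices, not part of the proposal.

```python
import numpy as np

class WebcamReveal:
    """Reveal a webcam frame a few random blocks at a time, keeping most of it hidden."""

    def __init__(self, grid=(8, 8), blocks_per_step=1):
        self.grid = grid
        self.blocks_per_step = blocks_per_step
        self.revealed = set()

    def step(self, frame):
        rows, cols = self.grid
        hidden = [(r, c) for r in range(rows) for c in range(cols)
                  if (r, c) not in self.revealed]
        # Reveal a handful of new blocks each step, chosen at random.
        for _ in range(min(self.blocks_per_step, len(hidden))):
            self.revealed.add(hidden.pop(np.random.randint(len(hidden))))

        # Start from a fully obscured (black) frame and copy in only the revealed blocks.
        output = np.zeros_like(frame)
        bh, bw = frame.shape[0] // rows, frame.shape[1] // cols
        for r, c in self.revealed:
            output[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw] = \
                frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
        return output


reveal = WebcamReveal(grid=(3, 3))  # e.g. a coarse 3 x 3 grid
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # stand-in for a webcam frame
print(np.count_nonzero(reveal.step(frame)))  # only one block's worth of pixels is visible
```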

Ultimately, what I want users to take away from the project is the chance to play with the cloak of anonymity, make some mischief, or simply have some fun with it.

 

I am still exploring the idea that each user could force the other to reveal more of their webcam (via a toggle button), to increase the interactive factor.

Edit: Upon further discussion with my fellow classmates, perhaps moving the project in the direction of a game would give it an enticing edge.

Further improvements to ponder on:

  • a timer to goad each user into quickly finding out the identity of the other
  • a ‘puzzle’-like format, e.g. 3 × 3, where each block is revealed periodically

 

Update: Ultimately, while I did not work on my own idea for the final project, it remains something I might want to work on in the future, so I will shelve it for now.

In-class Assignment: Narratives Structure

Analyse an interactive narrative using a narrative layout/structure (e.g. the Monomyth, or Hero’s Journey).

Plot Diagram: Graphic Novel for Hybrid Peugeot

plot-diagram

In brief, this interactive narrative consists of a graphic novel in which users are able to scroll through the different scenes at their own pace. The narrative follows a female spy who undertakes great risks to fulfil her mission. The website was part of an advertorial campaign for a Peugeot car, whose technical merits are humanised in the form of the female spy.

Beginning:

begin

The scene opens with a woman, dressed seductively – the typical image of a female spy. She is in a dark room when a troop of office workers bursts in – she’s busted! Tension and excitement build up in the viewer – who is she?

Graphic Novel

The woman then runs away with all her might – the conflict. More tension builds, heightened by the manner of her escape: crashing through a window, running through a thorny patch of weeds, fleeing a pack of dogs, and finally leaping onto a moving car.

Middle:

conflict

She is trapped! A gunman is hot on her heels, taking aim and shooting at her, but luckily (or not) he misses. There is rising action in this particular scene, assisted by graphics featuring a moving shot composed of various still images, breaking the monotony of the stills.

End:

Finally, with a few more (exciting) leaps, she makes her escape into a building and walks in confidently. This signals the end to the audience, acknowledged in a text she sends: “Mission Accomplished”.

Graphic Novel 2

The narration ends with a shot of her from behind, and a bedroom scene with her handsome beau lying in wait for more ‘action’.

 


Another interesting website I chanced upon during my research:

http://www.ro.me/

Project Proposal 1: Who’s using your phone in the toilet?!

The formulation of this idea was based on my previous idea, the brushing of teeth, and influenced by iknowwhereyourcatlives. In my previous idea, I wanted to touch on the weariness brought about by repetition and pull an element of fun into a boring activity. However, I decided to focus instead on a different activity with a longer duration, one which many do but most are shy (or not) to admit to. Thus, I changed my idea to the use of phones while sitting on the toilet, answering nature’s call.

According to a survey commissioned by Kleenex, 3 in 4 Singaporeans use their mobile phones in the toilet, whether to play games, watch videos, or even answer calls. However, bringing your phone into the bathroom, an unsanitary location, risks bacteria and germ contamination, yet only an estimated 2 per cent of users sanitise their phones after doing the deed.

In Singapore, as part of a shyer Asian culture where toilet talk is generally avoided, it would be interesting to bring this lesser-discussed topic to the surface. After all, revealing ‘secrets’ is a very exciting thing. I would like to use the world wide web as the medium for my idea.

 

Methodology:

Data is first extracted from the Twitter live stream for discourse analysis. Users who post both hashtags (#phone, #toilet), or who include both the words “toilet” and “phone” in their tweets, will be pinpointed. These statistics are then compiled and displayed visually, as below:
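Here is a sketch of the filtering step only, in Python. The actual pull from Twitter’s streaming API is not shown; the tweet data here is a made-up list of timestamp/user/text tuples used purely to illustrate the keyword matching.

```python
from datetime import datetime

def matches_topic(tweet_text):
    """True if a tweet mentions both topics, via hashtags or plain words."""
    text = tweet_text.lower()
    return (("#phone" in text or "phone" in text)
            and ("#toilet" in text or "toilet" in text))

def collect_dots(tweets):
    """Turn matching tweets into (timestamp, user) dots for the visualisation."""
    return [(timestamp, user) for timestamp, user, text in tweets if matches_topic(text)]


sample = [
    (datetime(2016, 1, 27, 8, 30), "user_a", "On the #toilet with my #phone again"),
    (datetime(2016, 1, 27, 9, 0), "user_b", "Just had breakfast"),
]
print(collect_dots(sample))  # only the first tweet becomes a dot
```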

 

AqxmFqt1NUazdWEDXq3uYp6OoD3CksLPM6q2S2u_w6sZ

Each time a user uses the keywords, a single dot appears on the website screen. A timestamp runs along the bottom, so we can visually pick out the timings at which users (who tweet) are most often on the toilet. Using mouse clicks, the website visitor can interact with the dots, dragging them around the screen and manually manipulating the data on the site.

AlDJg1aP_eU3WaWqkxIN0Bj4lRP6bKcmWFFAMQOpguJj

In areas where more dots congregate, the background turns a redder shade.

AiaVehRrq-n_NNglJITbuVJquhPf37dFH1nR3OeBQusH

To view other timeframes, users can move the mouse left or right to scroll along the timeline.

 

Proposing Topics | Eric Zimmerman: Four Concepts / Assignment 1

Reading: Narrative, Interactivity, Play and Games: Four naughty concepts in need of discipline, by Eric Zimmerman

It was a piece of writing that was not meant to be rushed through in a single read. In fact, I had to write out some notes, and categorise the information to better understand it:

Eric Zimmerman Reading Summary

Overall, it was a pretty interesting piece that summarised and clarified succinctly my initial impression of the four concepts.

 

Assignment: think about 2 topics which you find interesting and that could potentially be turned into interactive narratives, think about what type of interaction would be important for these topics (based on the reading’s definitions), and post the ideas on OSS.

Topic 1: Astrology

For astrology, focusing on meta-interactivity and functional interactivity would be interesting. The zodiac is an ancient practice with both believers and non-believers, and I would like to explore this metaphysical construct of divinity based on astrology. Personifying the topic would be ideal for giving audiences a physical realisation of, and a better relation to, the topic, inclusively targeting non-believers. This encompasses functional interaction, where, perhaps, a change in astrological signs could change the weather. Explicit interaction would also be ideal, where the change in circumstances influences the person’s choices, interspersed with the functional interaction.

Topic 2: the daily rituals of life: washing up & brushing teeth

I chose this idea because I wanted to explore a common, everyday activity and impose a greater narration onto it. I wanted to stretch the boundaries of an action so simple and thoughtless, and test the boundaries of narration. This topic adheres closer to stories than to games: a narration that is patterned and repetitive – first grabbing the toothbrush, followed by the steady squeeze of toothpaste onto the bristles, culminating in the brushing of the teeth. It would be interesting to explore functional interaction, where, perhaps, the audience uses their entire body to control the movement (sensed by sensors), which requires precise and accurate movements; being even a centimetre off would invalidate the movement. Cognitive and explicit interaction could also be integrated, where the user makes conscious decisions about the movements and recalls the actions (which may be scrambled) while playing the game. However, at this stage I still feel a sense of dissatisfaction, as mentioned by Zimmerman, with my use of the mediums available, and will continue exploring beyond what I have at present.

However, the comments given during consultation with the professor were as follows:

*focus on narrative, rather than the physical output

*might have problems with interface so be sure to edit the end product

*astrology as a topic is interesting, but has been widely used

*physical product is possible too

Thus, more effort would need to be put in to further push and develop the ideas.