Category: Interactive Devices

Bubble-Down!

Fabian Kang and Zhou Yang, Bubble-Down!, 2018, laser-cut medium-density fibreboard, sound sensors, LED lights, powered by Arduino.

 

Concept

The idea is for players to play the game with a sheet of bubble wrap, as it was this sense of touch and feel that we wanted to be integral to the interaction. Bubble wrap also becomes an expendable medium that has to be ‘refilled’ after each game.

Bubble wrap works like a physical button: it gives haptic feedback as the contained air is squeezed, and emits a pop sound when the bubble bursts.

Bubble wrap is usually something that people press without much thought, and indeed it can be somewhat addictively mindless. Hence, we wondered: what if each press of the bubbles had to be considered very, very carefully? With Bubble-Down! (2018), we invite players to battle it out in a minesweeper-meets-battleships game of suspense, interacting with bubble wrap in an unconventional way.

 

Gameplay

  1. Players plant their ‘bombs’ without their opponent’s knowledge, taking note of the positions they have rigged.
  2. Players swap places.
  3. The game commences with each player having five ‘lives’.
  4. Players take turns popping the bubble wrap; one pop per turn is mandatory. Should a light go on, the player loses a ‘life’.
  5. The game continues until either player has exhausted his or her five ‘lives’ (a toy simulation of this loop follows below).
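To make the turn logic concrete, here is a toy C++ simulation of the rules above. The board size, the bomb positions and the random choice of cell are all invented stand-ins for what the physical players actually do:

```cpp
#include <array>
#include <cstdio>
#include <cstdlib>
#include <set>

// Toy simulation of the Bubble-Down! rules: two players, five lives each,
// one mandatory pop per turn; popping a rigged cell costs a life.
// Board size and bomb positions are invented for illustration.
int main() {
  const int BOARD_CELLS = 16;
  std::array<int, 2> lives = {5, 5};
  // Cells rigged on the board each player pops (planted by the opponent).
  std::array<std::set<int>, 2> bombs = {{{2, 5, 7, 11, 14}, {1, 4, 9, 12, 15}}};
  int turn = 0;
  while (lives[0] > 0 && lives[1] > 0) {
    int cell = std::rand() % BOARD_CELLS;  // stand-in for the player's choice
    if (bombs[turn].erase(cell)) {         // a light goes on: bomb triggered
      lives[turn]--;
      std::printf("Player %d popped rigged cell %d, %d lives left\n",
                  turn + 1, cell, lives[turn]);
    }
    turn = 1 - turn;  // players alternate pops
  }
  std::printf("Player %d has run out of lives.\n", lives[0] == 0 ? 1 : 2);
}
```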

 

Documentation

Fabian Kang and Zhou Yang, Bubble-Down! Documentation, 2018.

Above is a short video showcasing the process our team went through from ideation to execution.

 

Design 

The main challenge we faced was understanding how a sound sensor works and what input the Arduino is recognizing. We realized that although we tried to get an analogue input, the Arduino was consistently picking up the bubble wrap ‘pop’ as a value of 1023 (the 10-bit analogue input has 1024 possible values, 0 to 1023). This meant that the analogue input was no different from a binary digital input: it was either registering a sound or silence.
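To illustrate, here is a minimal Arduino sketch of the kind of behaviour we were seeing; the pin numbers and the threshold are placeholder assumptions, not our final values:

```cpp
// Minimal sketch illustrating why our analogue readings behaved like a
// digital input: a pop saturates the reading near 1023, silence stays low.
// Pin assignments and threshold are placeholders for illustration.
const int SENSOR_PIN = A0;
const int LED_PIN = 13;
const int POP_THRESHOLD = 1000;  // anything above this counts as a 'pop'

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(SENSOR_PIN);  // 0-1023; pops read ~1023
  Serial.println(level);
  digitalWrite(LED_PIN, level > POP_THRESHOLD ? HIGH : LOW);
  delay(10);
}
```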

Hence, much of the design focused on the contact areas between the bubble wrap and the medium-density fibreboard (MDF). We had to ensure a layer of separation between the cubes that had ‘bombs’ in them and those without. MDF was chosen as the material because it allowed the sound to travel to the sound sensor. We realized that the sensor had to be in direct contact with the board surface, so we improvised a solution using clothes pegs to secure the sensors in place.

 

Here are some stages of the design process:

 

The design solution was to separate Part A from Part B, both of which in turn had to be separated from the table surface as well. This was done with the small pieces (Parts C) that were stacked to create stilts for the bases of the aforementioned parts. We also calculated the exact length of the ‘bombs’ so that the stick would bridge Part A and Part B.

Without ‘bomb’ (left); with ‘bomb’ (right).

 

The key takeaway from this Final Project is that when working with something like a sound sensor, which is very dependent on the environment and the interactive situation, one certainly needs to be wary of the effort required to calibrate the hardware. We did realize the immense difficulty at some point, but decided to push ahead simply because we liked the whole process loop: haptic touch on the bubble wrap, the popping sounds it produces, and those sounds being picked up by the sound sensors. It felt right to endeavor towards this and stay true to the concept we had agreed upon.

 

 

Programming

This was the basic circuitry we were working with:

We then of course rigged it up to include ten sound sensors and ten lights, ensuring that the input from each sound sensor is individually routed to its corresponding light.
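In code, the per-channel routing looked conceptually like the sketch below. The exact pin assignments are placeholders, and a board with enough analogue inputs (e.g. an Arduino Mega) is assumed, since an Uno only has six:

```cpp
// Conceptual sketch of routing ten sound-sensor inputs to ten LEDs.
// Pin assignments are placeholders; an Arduino Mega (16 analogue inputs)
// is assumed here.
const int NUM_CHANNELS = 10;
const int SENSOR_PINS[NUM_CHANNELS] = {A0, A1, A2, A3, A4, A5, A6, A7, A8, A9};
const int LED_PINS[NUM_CHANNELS]    = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
const int POP_THRESHOLD = 1000;

void setup() {
  for (int i = 0; i < NUM_CHANNELS; i++) {
    pinMode(LED_PINS[i], OUTPUT);
  }
}

void loop() {
  for (int i = 0; i < NUM_CHANNELS; i++) {
    // Each sensor is tied to exactly one light: a pop detected on
    // channel i latches LED i on (the 'bomb' has been triggered).
    if (analogRead(SENSOR_PINS[i]) > POP_THRESHOLD) {
      digitalWrite(LED_PINS[i], HIGH);
    }
  }
}
```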

After we did that, we realized it would be cumbersome to have the players move the actual sound sensors around the board, so the design had to allow the sensors to stay static. That was when we incorporated the idea of moving the ‘bombs’ around instead of the sensors.

 

Extras

Lastly, it is certainly always about the process, as in these last time-lapse videos we would like to share:

With that, I think we shall look forward to a fruitful second half of FYP year!

 

DOW 3 – Senses (Blind Smell Stick)

Peter de Cupere, Blind Smelling Stick & a Scent Telescope (2012), in Rio de Janeiro.

  • The Blind Smell Stick has a tiny bulb on the end with holes and smell detectors.
  • Scents reach your nose through the tube, helped by mini ventilators, heating and filters.
  • The dark glasses cancel out your sense of sight, so you focus on smelling and on finding your way by touch with the stick.


Sensory replacement –  Sight >>> Smell (Sound/Touch)

Adding another sense to the focus of the blind man’s stick

“It can help blind people to find their way, or to prevent that they walk in a shit (they can smell it before).”

 

I felt that this piece created a new experience simply by bringing a less-used sense to the fore of daily navigation. It places the user in an interesting dynamic, as we decide where to walk, or what to turn away from, based on what we smell.

Peter de Cupere presents a different take on the experience of losing one’s sight.

 

 

Special mention:

Derek Jarman, Blue (1993).

A film that is not visual, but about the failing of the visual sense.

 

Presentation: Machine Learning

slides:

https://drive.google.com/open?id=1afXfHpiUGAPREHSEK8B3sks6LEo8AsT2sSCntXPlQk0

 

Principles

  • A field of AI that employs data collection to identify patterns and aid decision making
  • System / machine improves over time

Case Study – Voice Assistants

Goal: Imitate human conversation

Process: Learn to ‘understand’ the nuances and semantics of our language

Actions: Compose appropriate responses / execute orders

For example, Siri can identify the trigger phrase ‘Hey Siri’ under almost any condition through the use of probability distributions.
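As a toy illustration of that idea (not Apple’s actual pipeline), a wake-word detector can accumulate per-frame probabilities that the audio matches the phrase and fire once the averaged score crosses a threshold. The window size, threshold and fake frame scores below are all invented values:

```cpp
#include <cstddef>
#include <cstdio>
#include <deque>
#include <numeric>

// Toy wake-word scorer: average the last N per-frame probabilities that
// the audio matches the trigger phrase, and fire above a threshold.
class WakeWordScorer {
 public:
  WakeWordScorer(std::size_t window, double threshold)
      : window_(window), threshold_(threshold) {}

  // frame_prob: probability in [0, 1] that this frame matches the phrase.
  bool feed(double frame_prob) {
    scores_.push_back(frame_prob);
    if (scores_.size() > window_) scores_.pop_front();
    double avg =
        std::accumulate(scores_.begin(), scores_.end(), 0.0) / scores_.size();
    return scores_.size() == window_ && avg > threshold_;
  }

 private:
  std::deque<double> scores_;
  std::size_t window_;
  double threshold_;
};

int main() {
  WakeWordScorer scorer(4, 0.8);  // 4-frame window, average of 0.8 to fire
  const double frames[] = {0.2, 0.9, 0.95, 0.9, 0.92};  // fake model output
  for (double p : frames) {
    if (scorer.feed(p)) std::puts("Trigger phrase detected");
  }
}
```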

In the battle of Xiao Ai versus Siri, it became clear that because its machine learning is specific to the cultural locality, Xiao Ai functions far better for the mainland-China consumer. It knew how to send money to the user’s contacts on WeChat, whereas Siri could only send a text message. It could also accurately find the user’s photos from an outing with friends the previous weekend to upload to social media.

Case Study – Self Driving Cars

Goal: Imitate human driving

Process: Identify vehicles, humans and objects on the road

Actions: Make decisions for the movement of the vehicle based on scenarios presented

Waymo is a self-driving car company that started as a Google project. Machine learning is showing much advancement in the car’s ability to analyze sensor data to identify traffic signals, actors and objects on the road, allowing the car to better anticipate the behavior of others. This brings it ever closer to a real human driving experience.

Waymo has started an autonomous taxi service in Chandler, Arizona.

Future implications

  1. Internet of things: Enhanced personalization

Machine learning personalization algorithms will be able to build up data about individuals and make appropriate predictions about their interests and behavior. For example, an algorithm can learn from a person’s browsing activity on an online streaming site and recommend movies and TV series that will interest them. Currently, such predictions may be rather inaccurate and result in annoyance.

However, they will improve and lead to far more beneficial and successful experiences. With unsupervised algorithms, it will also be possible to discover patterns in complex data that supervised methods would not find. Working without direct human intervention, this will result in faster and more accurate machine learning predictions.
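A minimal sketch of the kind of pattern such a recommender exploits, assuming a simple overlap between users’ watch histories (real systems are far more sophisticated; the titles are just for illustration):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <iterator>
#include <set>
#include <string>
#include <vector>

// Toy user-based recommender: find the user whose watch history overlaps
// ours the most, then suggest what they watched that we have not.
int main() {
  std::set<std::string> me = {"Legion", "A Beautiful Mind"};
  std::vector<std::set<std::string>> others = {
      {"Legion", "A Beautiful Mind", "Blue"},
      {"Blue", "Yellow Submarine"},
  };

  const std::set<std::string>* best = nullptr;
  std::size_t best_overlap = 0;
  for (const auto& other : others) {
    std::vector<std::string> common;
    std::set_intersection(me.begin(), me.end(), other.begin(), other.end(),
                          std::back_inserter(common));
    if (common.size() > best_overlap) {
      best_overlap = common.size();
      best = &other;
    }
  }
  if (best) {
    std::vector<std::string> picks;
    std::set_difference(best->begin(), best->end(), me.begin(), me.end(),
                        std::back_inserter(picks));
    for (const auto& title : picks)
      std::printf("Recommended: %s\n", title.c_str());
  }
}
```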

 

2. Rise of Robotic Services

The ultimate goal of machine learning is really to create robots. Machine learning makes possible robot vision, self-supervised learning, multi-agent learning, and more.

We have seen the Henn na (‘Weird’) Hotel in Japan, where robots provide the entire service for the tourists who stay there.

Robots will, one day, help to make our lives simpler and more efficient.

Conclusion

Machine learning is a really promising technology. If we can harness it for the good of humanity, this could drive great change in our quality of life.

 

 

 

POP Noise

Concept
  • Bubble wrap is like the simplest of buttons with haptic feedback.
  • The popping of the bubbles is irreversible.
  • Could we extend the act of POPping the bubbles, the way a record disk is left with a permanent imprint of the sounds recorded on it?

Artistic realm / Design realm
  • We are interested in the production of sound and the haptic feedback of bubble wrap.
  • We want to create a work that will be performative in nature.
Similar works, critique and differences / Inspiration sources


Bradley Hart’s ‘pointillist’ method of injecting paint into bubble wrap to create realistic paintings.

Michael Iveson at The Averard Hotel

Michael Iveson built a bubble-wrap corridor inside The Averard Hotel in London, working with the effect of natural light coming through the varying surfaces of POPped and unPOPped bubble wrap.

These works focus on the irreversible nature of human touch on bubble wrap to produce a lasting visual imprint (in the case of Bradley Hart’s paintings) and a spatial atmosphere (in the case of Michael Iveson’s site-specific installation).

We believe that this same irreversible quality can take on a twist and help us to create a visual record of sounds produced.

Interaction (describe how people will interact with it, cover many scenarios):
Part 1
  • Participants will receive the device and some simple instructions.
  • They can choose to POP their bubble wrap in any way they wish: systematic / random / one-by-one / area of effect / till completion / give up halfway?
  • Participants will focus on the touch of the material.

Part 2

  • Participants will listen to the sounds they created.
  • They will get the copy of their music in material and recorded form.

Implementation

Handheld Interactive Device(s).

Technologies
  • Recording device.
  • MAX MSP

or

  • Arduino + Processing.
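If we take the Arduino + Processing route, the Arduino side could be as simple as the sketch below, streaming pop events to Processing over serial. The pin and threshold values are placeholders to be tuned during prototyping:

```cpp
// Possible Arduino side for POP Noise: detect a pop on a microphone
// module and send an event over serial for Processing to record.
// Pin and threshold are placeholder assumptions.
const int MIC_PIN = A0;
const int POP_THRESHOLD = 1000;

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (analogRead(MIC_PIN) > POP_THRESHOLD) {
    Serial.println("POP");  // Processing reads this line and logs a pop event
    delay(100);             // crude debounce so one pop isn't counted twice
  }
}
```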
Milestones with a weekly breakdown of tasks to accomplish.
Monday, 24th September
  • Pitch Proposal
Monday, 3rd October
  • Idea Refinement
Monday, 10th October
  • Sketching or Modelling of the Prototypes
Monday, 15th October – Low-Fidelity Prototype
  • A cardboard version of the device
  • Simple MAX MSP sketch
  • User Testing and Feedback
Monday, 24th October
  • Prototype Refinement
Monday, 5th November – Refined Prototype
  • Actual Materials
  • MAX MSP working system
  • User Testing and Feedback
Monday, 7th November
  • Device Construction
  • MAX MSP debugging
Monday, 12th November – Project Refinement
  • Final User Testing
  • Initial Documentation
Week of 19th November – Final Submission
Description of the first prototype.
  • A cardboard version of the device
  • Simple MAX MSP sketch
  • User Testing and Feedback
How will your work be shown at the end of the semester?

We will provide our participants with the devices and give them simple instructions to use them.

The participants will then possibly receive the bubble wrap they POPped.

How will you document your work, mainly the interaction?

We will document through the physical material and video recordings.

DOW 2 – MYO Armband

MYO Armband

created by Stephen Lake, Matthew Bailey and Aaron Grant

 

the one armband to rule them all.

This is a really cool concept where you can connect to and control all your devices at home or at work with gestures.

It is a super-futuristic-looking armband that you wear on your master arm. It tracks the rotation of your arm, the direction you’re moving it, and so on. There are five distinct gestures you can learn and customize for the operations you wish to carry out with your devices (a hypothetical mapping is sketched below).
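Pairing software might map the five gestures to user-chosen actions roughly like this. The gesture names follow the ones Myo advertises, but the mapping and actions here are invented, and this is not the actual Myo SDK:

```cpp
#include <cstdio>
#include <functional>
#include <map>

// Hypothetical gesture-to-action dispatch, illustrating how five gestures
// could be customized per device. Actions are invented placeholders.
enum class Gesture { Fist, FingersSpread, WaveIn, WaveOut, DoubleTap };

int main() {
  std::map<Gesture, std::function<void()>> actions = {
      {Gesture::Fist,          [] { std::puts("pause media"); }},
      {Gesture::FingersSpread, [] { std::puts("play media"); }},
      {Gesture::WaveIn,        [] { std::puts("previous track"); }},
      {Gesture::WaveOut,       [] { std::puts("next track"); }},
      {Gesture::DoubleTap,     [] { std::puts("toggle device lock"); }},
  };
  actions[Gesture::WaveOut]();  // a detected gesture triggers its action
}
```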

You also get haptic feedback in the form of short, medium and long vibrations, letting you know when the various tasks or settings are confirmed.

 

From what I can see, MYO is a great device as:

  1. It allows fewer distractions when operating the paired device, as there is no screen, buttons, etc.
  2. Natural, intuitive operation, as we already use hand gestures in our everyday activities
  3. It looks comfortable to wear

However, some issues that may arise:

  1. Its usefulness will depend on which apps and device functions can be connected to it. But of course, the MYO team is also seeking contributions from other developers, who can submit their applications for integration, expanding the possibilities of what the armband can connect with.
  2. What happens when an abrupt gesture sends the wrong signal? Imagine someone accidentally bumps you while you’re operating your expensive drone, and it crashes into a tree because of your unexpected gesture.

 

This is definitely a future for the Internet of Things, as we can integrate the hand gestures we naturally use into the operation of virtually any device. I certainly dig the no-frills concept. It really looks like having the Force in you; I felt that when I saw the guy directing his drone in their video.

I would be really interested, though, if this technology could be developed further for deaf or non-verbal users, as I can see possible applications in enhancing sign-language communication with others who do not actually know sign language.

Or to use it to direct a robotic orchestra, where you need really complicated sequences of gestures and arm motions? I would be very curious to see how complex the gestural detection capabilities can become.

Sketch – Table Top Radio (Viena, Hanna & Fabian)

Our group thought that playing a game of Scrabble would be a great activity for the table top radio. Imagine a world where songs on the radio are based on the first instance of words placed on a Scrabble board.

So it goes like this:

1. Open the board to activate the music!

2. Pick out some letters to start tuning to a channel.

3. Form words with the letters by placing them on the board.

4.  A song will start playing based on the keyword you have produced on the board.

(perhaps: Yellow = Yellow by Coldplay; Diamond = Shine on You Crazy Diamond by Pink Floyd; Sunday = Sunday Morning by Velvet Underground )

5. The volume will be determined by the number of points

(i.e. Yellow = 10pts; Diamond = 11pts; Sunday = 10pts)
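A toy sketch of that mapping, with the song lookup and the points-to-volume scaling as placeholder choices (scores are quoted from the examples above):

```cpp
#include <cstdio>
#include <map>
#include <string>

// Toy mapping for the scrabble radio: a keyword selects a song and the
// word's scrabble score sets the volume. Entries and scaling are invented.
int main() {
  std::map<std::string, std::string> songs = {
      {"YELLOW",  "Yellow - Coldplay"},
      {"DIAMOND", "Shine On You Crazy Diamond - Pink Floyd"},
      {"SUNDAY",  "Sunday Morning - The Velvet Underground"},
  };
  std::map<std::string, int> points = {
      {"YELLOW", 10}, {"DIAMOND", 11}, {"SUNDAY", 10}};

  std::string word = "YELLOW";        // the word just formed on the board
  int volume = points[word] * 5;      // e.g. 10 pts -> volume 50 out of 100
  std::printf("Now playing: %s at volume %d\n", songs[word].c_str(), volume);
}
```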

 

some thoughts:

It was quite interesting to imagine how this would all play out as a real device. I was most fascinated by visualizing how some players would play to win the game (by forming the best words for points), how others would play to get the songs they wanted to hear, and how some might even just play to annoy everyone else in the room with the sheer amplitude of music selected at random.

Another intriguing part of our discussion was how certain lyrics, key words and phrases have come to define music for each of us, depending of course on our personal tastes. A single word really can trigger a tune at the back of your head, as I have tried to show with these three examples (which are of course my own immediate associations with the three words laid out).

DOW 1 – Avatar Therapy for Schizophrenia


Avatar therapy is a new experimental treatment for people suffering from schizophrenia.

To break down the inputs, processes and outputs of such a device:

Sensing:

  • The participant’s interactions (i.e. gestures, head movements, body positions, etc.)
  • The participant’s voice, as he or she will be discussing their thoughts and feelings with the avatar

Computing:

  • The behavior of the person as compared to the avatar in the programme
  • Collection of data and voice recordings for both the healthcare professionals and the participant

Feedback:

  • The avatar answers the person and prompts with questions to keep the interaction moving
  • Participants can later replay the messages they have made, to give themselves positive encouragement

Practical studies of avatar therapy have shown that the hallucinations participants experienced became less frequent and less distressing. This might work better than the typical group counselling sessions that usually constitute the treatment process.

Schizophrenia is a form of chronic psychosis, so to treat cognitive impairments it might actually be better to have self-reinforcement, which I think is what the very personal, one-on-one avatar treatment can provide. It gives participants the opportunity to contribute heavily to the input of the “discussion”. I think this really does help participants regain control over the voices they hear during hallucinations. The digital avatar becomes a sort of test, slowly easing them back into controlling the conversations, so that they become very aware that what they say comes from themselves and attribute it less and less to an imagined entity.

However, such devices still come with their limitations. The study was done in an institution renowned for its treatment of psychosis, so the device and virtual content created for the avatar therapy were backed by the innovations of highly experienced therapists.

 

Should you be interested, please find the latest on these studies here:

https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(17)30427-3/fulltext

 

Portrayals of schizophrenia have been rather widespread in film and culture, even becoming a cinematographic aesthetic of sorts.


Like in Ron Howard’s A Beautiful Mind (2001), a biopic of the brilliant mathematician John Nash and how he and the people around him deal with his delusions.


Or, more recently, in Noah Hawley’s Legion (2017–present), a TV series on the FX network that portrays David Haller, a schizophrenic mutant in the X-Men Marvel universe.

 

However, the real world is not quite as fascinating as these movies. It is a very real condition, and I certainly hope that the various advances in recent medical technology can help to alleviate the suffering caused by this illness.

Sketch >>> Instagram-Eyes-Mug-Kitchen (Fabian & Zhou Yang)

Deconstruction of the concept given to us: Kitchen / Mug / Eyes / Instagram


Main concept:

The device is a lens that allows you to see the stories of others. Like on Instagram, you can now instantly consume and connect with these stories. Here are a couple of ideas stemming from this, placing the interaction in a kitchen and using the mug as the object of interaction.


Idea 1

Function of the app used: insta-stories as a consumption of other users' lives or the experiences they choose to share on the net.

Metaphor: mugs, glasses and cups as vessels to contain, view and finally consume the beverage and the insta-stories at the same time.

The lens superimposes the stories you want to see based on your beverage of choice.

Cat videos X espresso shot: down the throat as quickly as you ordered it? Just needed that fix, the daily dose of the everyday.

Entertainment videos X beer mug: enjoying it slow and chill, or chugging it down fast? The hype and all the crazy things you wished you could do.

Travel videos X wine glass: a relaxing sip, taking time to explore the world through someone else's eyes.

What do someone else's insta-stories taste like? The user chooses the way to consume them.

Idea 2

Function of the app used: insta-stories as a way to connect with strangers, granting them access to your private accounts.

Metaphor: the clinking of mugs, glasses and cups, the universally accepted way of saying "Cheers!", celebrating a conversation or interaction over the consumption of beverages in a social setting.

For this idea, we are thinking of a social event similar to a Venetian mask party:

Step 1: People start off the interaction by saying "Cheers!"

Step 2: When they clink their mugs together, their lenses immediately grant access to the person they have chosen to connect with.

Step 3: They can now see the stories the other person has pre-uploaded, and at any time they may choose to sip on their drink, which means they give a like to the stories shared.

Step 4: Finally, they can choose to end the connection by closing their eyes and turning away to find a new person to connect with.





So those are the couple of very simple sketch ideas we had for the second class exercise.

What was interesting to learn is that we can start by ascribing meaning to objects and working up concepts from there: deconstructing existing design models and reconstructing them to find associations and patterns of interaction in a new situation or with new objects.