
Author: Fabby K

Bubble-Down!

Fabian Kang and Zhou Yang, Bubble-Down!, 2018, laser-cut medium density fibreboard, sound-sensors, LED lights, powered by Arduino.

 

Concept

The idea is for players to use a sheet of bubble wrap to play the game, as it was this touch and feel that we wanted to be integral to the interaction. Bubble wrap also becomes an expendable medium that has to be ‘refilled’ after each game.

Bubble wrap works like a physical button: squeezing gives the haptic feedback of the contained air, and a pop sound is emitted when the bubble bursts.

Bubble wrap is usually something people press without much thought, and indeed it can be somewhat addictively mindless. So we wondered: what if each press of the bubbles had to be considered very carefully? With Bubble-Down! (2018), we invite players to battle it out in a minesweeper-meets-battleships game of suspense, interacting with bubble wrap in an unconventional way.

 

Gameplay

  1. Players plant their bombs without their opponent’s knowledge, taking note of the positions they have rigged.
  2. Players swap places.
  3. The game commences with each player having 5 ‘lives’.
  4. Players take turns to pop the bubble wrap; popping once per turn is mandatory. Should a light go on, the player loses a ‘life’.
  5. The game continues until either player has exhausted his or her 5 ‘lives’.
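The rules above can be sketched as a tiny state machine (plain C++ with hypothetical names; just an illustration of the turn-and-lives logic, not code from the project):

```cpp
// Bubble-Down! rules in miniature: two players start with 5 'lives' each,
// alternate turns, and lose a life whenever a pop lights up a rigged bubble.
struct Game {
    int lives[2] = {5, 5};
    int turn = 0;  // index of the player about to pop

    // One mandatory pop per turn; returns true once either player is out.
    bool pop(bool hitBomb) {
        if (hitBomb) --lives[turn];
        turn = 1 - turn;  // players alternate
        return lives[0] == 0 || lives[1] == 0;
    }
};
```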

 

Documentation

Fabian Kang and Zhou Yang, Bubble-Down! Documentation, 2018.

Above is a short video showcasing our team’s process from ideation to execution.

 

Design 

The main challenge we faced was understanding how a sound sensor works and what input Arduino was recognising. We realised that although we tried taking an analogue input, Arduino was consistently reading the bubble wrap ‘pop’ sound as a value of 1023 (the 10-bit analogue input has 1024 possible values, 0 to 1023). This meant the analogue input was no different from a binary digital input: it was reading either sound or silence.
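Since the ‘pop’ pegged the reading at the top of the range, detection reduces to a simple threshold check. A minimal sketch of that logic (plain C++ for illustration, not our actual sketch; on the Arduino the reading would come from analogRead()):

```cpp
// The 10-bit ADC reports 0..1023; a bubble 'pop' consistently read 1023,
// so anything at (or just below) the ceiling is treated as a pop --
// effectively a binary digital input.
const int POP_THRESHOLD = 1020;  // small margin under the 1023 ceiling

bool isPop(int reading) {
    return reading >= POP_THRESHOLD;
}
```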

Hence, much of the design focused on the contact areas between the bubble wrap and the medium-density fibreboard (MDF). We had to ensure a layer of separation between the cubes that had ‘bombs’ in them and those that did not. MDF was chosen as it allows the sounds to travel to the sound sensor. We realised the sensor had to be in direct contact with the board surface, so we improvised a solution using clothes pegs to secure them as desired.

 

Here is some of the design process:

 

The design solution was to separate Part A from Part B, both of which in turn had to be separated from the table surface as well. This was done with the small Part C pieces, stacked to create stilts for the bases of the aforementioned parts. We also calculated the exact length of the ‘bombs’ so that the stick would make contact from Part A to Part B.

 

 

 

 

 

 

Without ‘bomb’ / With ‘bomb’

 

The key takeaway from this Final Project is that when working with something like a sound sensor, which is highly dependent on the environment and the interactive situation, one certainly needs to be wary of the effort required to meet the calibration requirements of the hardware. We did realise the immense difficulty at some point, but decided to push ahead simply because we liked the whole process loop: the haptic touch of the bubble wrap, the popping sounds it produces, and those sounds being picked up by the sound sensors. It felt right to endeavour towards this and stay true to the concept we had agreed upon.

 

 

Programming

This was the basic circuitry we were working with:

We then of course rigged it up to include 10 sound sensors and 10 lights, ensuring that each sound sensor’s input is directed to the output of its corresponding light.
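With ten channels, the routing is just a pair of parallel arrays: sensor i drives light i. A hedged sketch of the mapping (plain C++; the function name and threshold are illustrative, not our actual code):

```cpp
const int NUM_CHANNELS = 10;

// Map each sensor reading to its corresponding LED: sensor i lights LED i
// whenever its reading pegs the 10-bit ADC near the 1023 ceiling.
void routeSensorsToLeds(const int readings[NUM_CHANNELS], bool leds[NUM_CHANNELS]) {
    const int POP_THRESHOLD = 1020;
    for (int i = 0; i < NUM_CHANNELS; ++i) {
        leds[i] = (readings[i] >= POP_THRESHOLD);
    }
}
```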

After we did that, we realised it would be cumbersome for players to move the actual sound sensors around the board, so the design had to keep the sensors static; this was when we decided the ‘bombs’ would be moved around instead of the sensors.

 

Extras

Lastly, it is always about the process, as in these last timelapse videos we would like to share:

With that, I think we shall look forward to a fruitful second half of FYP year!

 

DOW 3 – Senses (Blind Smell Stick)

Peter de Cupere Blind Smelling Stick & a Scent Telescope (2012), in Rio de Janeiro.

  • Blind Smell Stick has a tiny bulb on the end with holes and smell detectors.
  • Scents reach your nose through the tube, helped by some mini ventilators, heating, and filters.
  • The dark glasses cancel out your sense of sight and you focus on smelling and finding your way through touching with the stick.


Sensory replacement –  Sight >>> Smell (Sound/Touch)

Adding another sense to the focus of the blind man’s stick

“It can help blind people to find their way, or to prevent that they walk in a shit (they can smell it before).”

 

I felt that this piece created a new experience simply by bringing a less used sense to the fore of daily navigation. This places the user in an interesting dynamic as we are making a decision on where to walk or turn away from, based on what we smell.

Peter de Cupere presents a different take on the experience of losing one’s sight.

 

 

Special mention:

Derek Jarman Blue (1993).

Film that is not visual, but about the failing of the visual sense.

 

Presentation: Machine Learning

slides:

https://drive.google.com/open?id=1afXfHpiUGAPREHSEK8B3sks6LEo8AsT2sSCntXPlQk0

 

Principles

  • A field of AI that employs data collection to identify patterns and aid decision making
  • System / machine improves over time

Case Study – Voice Assistants

Goal: Imitate human conversation

Process: Learn to ‘understand’ the nuances and semantics of our language

Actions: Compose appropriate responses / execute orders

For example, Siri can identify the trigger phrase ‘Hey Siri’ under almost any condition through the use of probability distributions.

In the battle of Xiao Ai versus Siri, it was found that, thanks to machine learning specific to the cultural locality, Xiao Ai functioned far better for the mainland China consumer. It knew how to send money to the user’s contacts on WeChat, whereas Siri could only send a text message. It could also accurately find the user’s photos from an outing with friends the previous weekend to upload to social media.

Case Study – Self Driving Cars

Goal: Imitate human driving

Process: Identify vehicles, humans and objects on the road

Actions: Make decisions for the movement of the vehicle based on scenarios presented

Waymo is a company that began as Google’s self-driving car project. Machine learning is driving much of the advancement in the car’s ability to analyse sensor data to identify traffic signals, actors and objects on the road, allowing the car to better anticipate the behaviour of others. Hence they are getting closer to a real human driving experience powered by this machinery.

Waymo has started its autonomous taxi service in Chandler, Arizona.

Future implications

  1. Internet of things: Enhanced personalization

Machine learning personalization algorithms will be able to build data about individuals and make appropriate predictions for their interests and behavior. For example, an algorithm can learn from a person’s browsing activity on an online streaming website and recommend movies and TV series that will interest them. Currently, the predictions may be rather inaccurate and result in annoyance.

However, they will be improved on and lead to far more beneficial and successful experiences. Also, with unsupervised algorithms, it will be possible to discover patterns in complex data that supervised methods would not be able to find. Without direct human intervention, this will result in faster and more accurate machine learning predictions.

 

2. Rise of Robotic Services

The final goal of machine learning, arguably, is to create robots. Machine learning makes possible robot vision, self-supervised learning, multi-agent learning, and more.

We have seen the Henn na (‘Weird’) Hotel in Japan, where robots provide the entire service for the tourists who stay there.

Robots will, one day, help to make our lives simpler and more efficient.

Conclusion

Machine learning is a really promising technology. If we can harness it for the good of humanity, this could drive great change in our quality of life.

 

 

 

Hyperessay #1 proposal: “Magic Show” (Zhou Yang and Fabian Kang)

We are interested in broadcasting a street magic, close-up magic event.

We will be walking around ADM to perform a series of tricks.

We will invite as many spectators as we can get for this event.

We hope to let everyone witness something spectacular!
“Now is the time to witness a miracle!”

 

Obviously, we do not have much talent for real magic, so what we will bring is magic with a twist. This is somewhat inspired by the genuinely talented duo Penn and Teller. Although they are wonderfully gifted with actual magic skills, they frequently create shows that subvert the usual “a magician never reveals his tricks”, and perform tricks that critique their own practice.

The main aim of our Hyperessay is for the audience in the First Space to see the gag executed in its entirety, while the audience in the Third Space, bound by the lens of the broadcasting device, may possibly see the magic as real.

This also stems from Prof. Randall Packer’s The Third Space (2014), where he speaks of how “the fusion of physical and remote” creates a “pervasiveness of distributed space”. Hence we are interested in how audiences in the First and Third Spaces view the illusion event, and to what degree the suspension of disbelief will work out in these spaces.

We are also very interested in making this a social broadcasting event in a similar vein to Videofreex: attempting to call people to the spaces we are engaged in, to start conversations, and to show other stories running alongside, in simultaneity with the framed Third Space.

And of course, not forgetting, some inspirations from this guy:

 

Micro Project 6 ‘Glitch’ – Waterfall in the Station

Hold Middle Mouse. 

Drag DOWN. Drag UP. Drag DOWN. 

Drag Up. And down, again. 

Stop. 

Repeat.

 

‘Ausstieg in Fahrtrichtung rechts!’ (‘Exit on the right in the direction of travel!’)

I hear nothing.

There is a waterfall in the station.

 

“About my life:

Time is my pain. So then,

I need a life raft”

Micro Project 5: Bought this New Game

 

This event was live-streamed in ADM’s game lab. I was showcasing the gameplay of a new exploratory game I had just bought on Steam, and nothing much happens in the game, until …

 

*Reflection / Spoiler Alert* After an hour of being unable to resolve the Facebook Live split-screen issue, Zhou Yang and I set off on a rule-breaking adventure to connect the First and Third Spaces, and even the real world and the fourth wall. We learnt later that it was just a matter of downloading the right plugin or something like that. We were thankful, though, that Facebook Live exhausted us, because we decided to do it in one take without rehearsals. Zhou Yang plays games, unlike me, so he did a wonderful improv in the walk-through. The part I really liked was when our cinematographer, Win Zaw, came into the frame at the very end.

We realised early on that viewers would have to situate Zhou Yang and me within these spaces, and that they would want to place me in the First Space and Zhou Yang in the Third, despite the fact that we were both in the Third Space. So we had to figure this out clearly for ourselves before we could proceed with executing the performance. We wanted to speak about the convergence of all the worlds at the very end, hence the swapping of Zhou Yang’s body and mine from our initial First/Third Spaces, and of course the classroom full of the live broadcast and the projector.

Our takeaway from this is that experimentation in film and performance can only happen when close to nothing is scripted and when ideas are acted upon and realised from accidental incidents. We also reflected on the locality of the First and Third Spaces in relation to our being, and came to some sort of agreement that it really is up to the content provider, the participant and the viewer to perceive and decide their relationship with these spaces. They are all at once geographically separated yet without boundaries, existing in simultaneity yet having, even if ever so slightly, different time frames. A Schrödinger-like debate ensues. When encountering any First/Third Space conundrum, it is therefore important to situate oneself.

POP Noise

Concept
  • Bubble wrap is like the simplest of buttons with haptic feedback.
  • The popping of the bubbles is irreversible.
  • Could we extend the act of POPping the bubbles, like how a record disc is left with a permanent imprint of the sounds recorded on it?

Artistic realm / Design realm
  • We are interested in the production of sound and the haptic feedback of bubble wrap.
  • We want to create a work that will be performative in nature.
Similar works, critique and differences / Inspiration sources


Bradley Hart injects paint into bubble wrap, ‘pointillist’-style, to create realistic paintings.

Michael Iveson at The Averard Hotel

 
Michael Iveson built a bubble-wrap corridor inside The Averard Hotel in London. The effect comes from natural light passing through the varying surfaces of the POPped and unPOPped bubble wrap.

These works focus on the irreversible nature of human touch on bubble wrap, producing a lasting visual imprint (in the case of Bradley Hart’s paintings) and a spatial atmosphere (in the case of Michael Iveson’s site-specific installation).

We believe that this same irreversible quality can take on a twist and help us to create a visual record of sounds produced.

Interaction (describe how people will interact with it, cover many scenarios):
Part 1
  • Participants will receive the device and some simple instructions.
  • They can choose to POP their bubble wraps in any way they wish:
  • Systematic / Random / One-by-one / Area of effect / Till completion / Give up halfway?
  • Participants will focus on the touch of the material.

Part 2

  • Participants will listen to the sounds they created.
  • They will get a copy of their music in material and recorded form.

Implementation

Handheld Interactive Device(s).

Technologies
  • Recording device.
  • MAX MSP

or

  • Arduino + Processing.
Milestones with a weekly breakdown of tasks to accomplish.
Monday, 24th September
  • Pitch Proposal
Monday, 3rd October
  • Idea Refinement
Monday, 10th October
  • Sketching or modelling of the prototypes
Monday, 15th October  Low Fidelity Prototype

  • A cardboard version of the device
  • Simple MAX MSP sketch
  • User Testing and Feedback
Monday, 24 October
  • Prototype Refinement
Monday, 5th November  Refined Prototype

  • Actual Materials
  • MAX MSP working system
  • User Testing and Feedback
Monday, 7 November
  • Device Construction
  • MAX MSP debugging
Monday, 12 November  Project Refinement

  • Final User Testing
  • Initial Documentations
Week of Nov 19th. Final Submission
Description of the first prototype.
  • A cardboard version of the device
  • Simple MAX MSP sketch
  • User Testing and Feedback
How will your work be shown at the end of the semester?

We will provide our participants with the devices and give them simple instructions to use them.

The participants will then possibly receive the bubble wrap they POPped.

How will you document your work, mainly the interaction?

We will document through the physical material and video recordings.

Pirate Broadcasting II – Boy & Bee

I was just re-watching some Paul Thomas Anderson films when the little critter came buzzing in. After failing to chase it out of my room, I decided now might be a good time to do the Pirate Broadcasting mini project. So I tried to reach out to my friends to watch the events that would soon unfold.

This was very spontaneous. I had no clue what I’d do exactly. So I had some background video playing on my laptop.  Blasted my music. And tried to interact with those who were commenting online.

The music blasting in the background is by Gun, a short-lived late-60s band from the UK’s heavy metal scene. Only two albums, but really accomplished.

I really like ‘The Boy and The Bee’ from this album: through a simple narrative of the boy and the bee getting embroiled in a conflict, it speaks of the duality of life and death. It is really poetic, and the arrangements in this track are riveting.

I have always been fascinated by how a bee’s sting is really its ultimate act. It leaves behind not only its menacing stinger but a good half of its abdomen as well, ripping itself apart to exact the deadliest attack it can on an aggressor. And, yes, you can go into anaphylactic shock from a bee sting if you are allergic.

This second iteration of the broadcasting project made me feel that it is difficult to find an audience, because I really do not use instant social media at all; anything I present on Instagram, for example, is a rather curated version of my life. But I suppose it is also the difficulty of finding content I want to broadcast, because what I find cool and meaningful to perform might not be all that popular.

So what I really learnt from these two mini projects is that if I want to create a live stream, I should put more thought and planning into it. Still, those two spontaneous sketches were good fun. Maybe in the future I will do a regular live stream to introduce my friends to the music I listen to.

 

 

—————————————————————————————–

I did a cut for the version presented in class as it was too long.

For the full video you can find it here:

 

DOW 2 – MYO Armband

MYO Armband

created by Stephen Lake, Matthew Bailey and Aaron Grant

 

the one armband to rule them all.

This is a really cool concept where you can connect to and control all your devices at home or at work with gestures.

It is a super futuristic-looking armband that you wear on your master arm. It tracks the rotation of your arm, the direction you are moving it, and so on. There are five distinct gestures you can learn and customise for the operations you wish to carry out with your devices.

You also get haptic feedback in the form of short, medium and long vibrations to allow you to know when the various tasks or settings are confirmed.

 

From what I can see, MYO is a great device as:

  1. It allows fewer distractions when operating the paired device, as there is no screen, buttons, etc.
  2. Natural, intuitive operation as we naturally use hand gestures in our everyday activities
  3. Looks comfortable to wear

However, some issues that may arise:

  1. It will depend on which apps and device functions can be connected. Of course, the MYO team is also inviting other developers to submit their applications for integration, expanding what the armband can connect with.
  2. What happens when an abrupt gesture sends the wrong signal? Imagine someone accidentally bumps you while you are operating your expensive drone, and it crashes into a tree because of your unexpected gesture.

 

This is definitely a future for the Internet of Things, as we can integrate the hand gestures we naturally use into the operation of virtually any device. I certainly dig the no-frills concept. Seeing the guy direct his drone in their video really looked like having the Force.

I would be really interested, though, if this technology could be developed further for the deaf or mute, as I can see possible applications in enhancing sign language communication with people who do not actually know sign language.

Or to use it to direct a robotic orchestra, where you need really complicated sequences of gestures and arm motions? I would be very curious to see how complex the gestural detection capabilities can become.

DOW 2 – Humavox Wireless Charging

Humavox

A portable wireless charging platform that utilizes near-field radio frequency waves to charge almost any device wirelessly.

 

Our current lifestyle demands that we be surrounded by an array of devices, which means dealing with cables, charging ports and the like. So it comes as no surprise that we have constantly sought wireless solutions of all kinds: remote controllers, WIFI pairing, etc. Wireless charging continues the quest Nikola Tesla set himself with his “Tesla Towers”, in hopes of making wireless power transfer possible.


 

 

The designers of Humavox looked at this 21st-century problem and came up with a technology and design solution that answers conveniently and effectively to our devices’ need to always be juiced up.

Compared with other current solutions, Humavox can offer:

  1. Portable charging that can occur anywhere
  2. Devices can be placed inside in any orientation
  3. The possibility to turn any regular container into a charging station