Month: October 2018

DOW 3 – Senses (Blind Smell Stick)

Peter de Cupere, Blind Smelling Stick & a Scent Telescope (2012), in Rio de Janeiro.

  • The Blind Smell Stick has a tiny bulb at the end, with holes and smell detectors.
  • Scents reach your nose through the tube, helped by mini ventilators, heating, and filters.
  • The dark glasses cancel out your sense of sight, so you focus on smelling and on finding your way by touch with the stick.


Sensory replacement – Sight >>> Smell (Sound/Touch)

Adding another sense to the focus of the blind man’s stick

“It can help blind people to find their way, or to prevent that they walk in a shit (they can smell it before).”

 

I felt that this piece created a new experience simply by bringing a less-used sense to the fore of daily navigation. This places the user in an interesting dynamic, as we decide where to walk, or what to turn away from, based on what we smell.

Peter de Cupere presents a different take on the experience of losing one’s sight.


Special mention:

Derek Jarman, Blue (1993).

A film that is not visual, but about the failing of the visual sense.

 

Presentation: Machine Learning

Slides:

https://drive.google.com/open?id=1afXfHpiUGAPREHSEK8B3sks6LEo8AsT2sSCntXPlQk0

 

Principles

  • A field of AI that uses collected data to identify patterns and aid decision making
  • The system / machine improves over time as it is exposed to more data (a minimal sketch follows)
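As a quick illustration of "learning patterns from data", here is a minimal sketch assuming the scikit-learn library is available; the model is never given explicit rules, it infers them from labelled examples:

```python
# A minimal sketch of supervised machine learning (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # small example dataset of labelled flowers
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)         # "learn" patterns from the data
print(model.score(X_test, y_test))  # accuracy on examples it has never seen
```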

Case Study – Voice Assistants

Goal: Imitate human conversation

Process: Learn to ‘understand’ the nuances and semantics of our language

Actions: Compose appropriate responses / execute orders

For example, Siri can identify the trigger phrase "Hey Siri" under almost any condition: an acoustic model turns each snippet of audio into a probability that the phrase was spoken, and the phone triggers when that score rises above a threshold.
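A minimal sketch of how such triggering might work, assuming a hypothetical acoustic model that has already emitted one probability per short audio frame; the names, numbers, and threshold here are illustrative, not Apple's actual implementation:

```python
# A toy wake-word trigger (illustrative only; not Apple's implementation).
import numpy as np

THRESHOLD = 0.85  # tuned to trade off false wakes against missed triggers

def contains_trigger(frame_probs: np.ndarray, window: int = 3) -> bool:
    """Fire only when the average score over a short window is high,
    which smooths out one-off spikes caused by background noise."""
    if len(frame_probs) < window:
        return False
    smoothed = np.convolve(frame_probs, np.ones(window) / window, mode="valid")
    return bool(smoothed.max() >= THRESHOLD)

# Example: scores the (assumed) model might emit while someone says "Hey Siri"
print(contains_trigger(np.array([0.1, 0.2, 0.9, 0.95, 0.97, 0.9, 0.3])))  # True
```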

In the battle of Xiao Ai versus Siri, it was found that, because its machine learning was tailored to the cultural locality, Xiao Ai could function far better for the mainland-China consumer. It knew how to send money to the user's contacts on WeChat, whereas Siri could only send a text message. It could also accurately find the user's photos from an outing with friends on a previous weekend to upload to social media.

Case Study – Self Driving Cars

Goal: Imitate human driving

Process: Identify vehicles, humans and objects on the road

Actions: Make decisions for the movement of the vehicle based on scenarios presented

Waymo started as Google's self-driving car project. Machine learning has greatly advanced the cars' ability to analyze sensor data and identify traffic signals, actors, and objects on the road, which allows a car to better anticipate the behavior of others. Hence they are getting closer to a real human driving experience, powered by this machinery.

Waymo has started its autonomous taxi service in Chandler, Arizona.
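As a rough illustration of the perception step, here is a sketch that uses an off-the-shelf pretrained detector from torchvision as a stand-in for a production self-driving perception stack; the input file name is hypothetical:

```python
# Detect vehicles, people, and objects in a single camera frame
# (a stand-in sketch, not Waymo's actual system).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("road_scene.jpg").convert("RGB")  # hypothetical input frame
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Keep only confident detections; a real car fuses these with lidar/radar
# before deciding how to move.
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.8:
        print(int(label), float(score), box.tolist())  # class id, confidence, pixel box
```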

Future implications

  1. Internet of Things: Enhanced personalization

Machine-learning personalization algorithms will be able to build up data about individuals and make appropriate predictions about their interests and behavior. For example, an algorithm can learn from a person's browsing activity on an online streaming website and recommend movies and TV series that will interest that person. Currently, the predictions may be rather inaccurate and result in annoyance.

However, they will be improved on and lead to far more beneficial and successful experiences. Also, with unsupervised algorithms, it will be possible to discover patterns in complex data that supervised methods would not be able to find. Because this needs no direct human labelling, it should result in faster and more scalable machine-learning predictions. A toy sketch of such a recommender follows.
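Here is a minimal sketch of one common approach, item-based collaborative filtering; every title and rating below is invented purely for illustration:

```python
# A toy item-based collaborative filter (all data here is made up).
import numpy as np

titles = ["Drama A", "Sci-fi B", "Comedy C", "Sci-fi D"]
ratings = np.array([   # rows = users, columns = titles, 0 = not yet watched
    [5, 0, 0, 4],
    [4, 5, 1, 5],
    [1, 0, 5, 0],
])

def recommend(user: int, top_n: int = 1) -> list:
    """Score unseen titles by how similar their rating columns are
    to the titles this user already rated highly."""
    norms = np.linalg.norm(ratings, axis=0) + 1e-9
    sim = (ratings.T @ ratings) / np.outer(norms, norms)  # item-item cosine similarity
    scores = sim @ ratings[user]                          # weight by the user's own ratings
    scores[ratings[user] > 0] = -np.inf                   # never re-recommend a seen title
    return [titles[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(user=0))  # a sci-fi fan gets "Sci-fi B", not "Comedy C"
```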

 

  2. Rise of Robotic Services

The final goal of machine learning is really to create robots. Machine learning makes possible robot vision, self-supervised learning, multi-agent learning, and more.

We have seen the Henn na (or "Weird") Hotel in Japan, where robots provide the entire service for the tourists who stay there.

Robots will, one day, help to make our lives simpler and more efficient.

Conclusion

Machine learning is a really promising technology. If we can harness it for the good of humanity, it could drive great change in our quality of life.


Hyperessay #1 proposal: “Magic Show” (Zhou Yang and Fabian Kang)

We are interested in broadcasting a street-magic, close-up magic event.

We will be walking around ADM to perform a series of tricks.

We will invite as many spectators as we can get for this event.

We hope to let everyone witness something spectacular!
“Now comes the time to witness a miracle!” (现在到了鉴证奇迹的时候!)

 

Obviously, we do not have much of a talent for real magic. So what we will be bringing is magic with a twist, somewhat inspired by the genuinely talented duo Penn & Teller. Although they are wonderfully gifted with actual skills in magic, they frequently create shows that subvert the usual expectation that “a magician never reveals his tricks”, and perform tricks that critique their own practice.

The main aim of our Hyperessay is for the audience in the First Space to see the gag executed in its entirety, but for the audience in the Third Space, bound by the lens of the broadcasting device, to possibly see the magic as real.

This also stems from Prof. Randall Packer’s The Third Space (2014), where he speaks of how “the fusion of physical and remote” creates a “pervasiveness of distributed space”. Hence we are interested to see how audiences in the First and Third Spaces view the illusion event, and to what degree the suspension of disbelief will hold in these spaces.

We are also very interested in making this social broadcasting event in a similar vein to the Videofreex, to attempt to call people to the spaces we are engaged in. To start conversations. And to show other stories running alongside, in simultaneity with the framed Third Space.

And of course, not forgetting, some inspirations from this guy:

 

Micro Project 5: Bought this New Game

 

This event was live-streamed in ADM’s game lab. I was showcasing the gameplay of a new exploratory game that I had just bought on Steam, and nothing much happens in the game, until …

 

*Reflection / Spoiler Alert* After an hour of being unable to resolve the Facebook Live split-screen issue, Zhou Yang and I set off on a rule-breaking adventure to connect the First and Third Spaces, and even the real world and the fourth wall. We learnt later on that it was just a matter of downloading the right plugin or something like that. We were thankful, though, that Facebook Live made us so exhausted, because we decided to do it in one take without rehearsals. Zhou Yang plays games, unlike me, so he did a wonderful improv in the walkthrough. The part I really liked was when our cinematographer, Win Zaw, came into the frame at the very end.

We realised early on that viewers would have to situate Zhou Yang and me within these spaces, and that they would want to place me in the First Space and Zhou Yang in the Third, despite the fact that we were both in the Third Space. So we had to figure it out clearly for ourselves before we could proceed with executing the performance. We wanted to speak about the convergence of all the worlds at the very end, hence the swapping of Zhou Yang’s body and mine from our initial First/Third Spaces, and of course, the classroom filled with the live broadcast and the projector.

Our takeaway from this is that experimentation in film and performance can only happen when close to nothing is scripted, and when ideas are acted upon and realised out of accidental incidents. We also reflected on the locality of the First and Third Spaces in relation to our being, and came to some sort of agreement that it really is up to the content provider, the participant, and the viewer to perceive and decide their relationship with these spaces. They are all at once geographically separated yet without boundaries, existing in simultaneity yet having, even if ever so slightly, different time frames. A Schrödinger-like debate ensues. When encountering any First/Third Space conundrum, it is therefore important to situate oneself.