Interactive Devices DOW-Senses: Livio AI

Standard

 

Starkey’s Livio AI is not only a hearing aid that delivers high-quality sound, but also a health device that tracks the wearer’s physical activity. It is a multi-purpose device that supports the user’s hearing while also helping them improve other aspects of their daily lives. Livio AI uses a technology which Starkey calls “Hearable Reality”, and this technology ensures that the user can still hear speech clearly even in noisy environments: the device suppresses background noise and enhances human speech to make communication easier.

Research has also shown that hearing health is directly related to overall well-being, because a person who is able to hear can engage better with their environment. Livio AI is designed to manage both hearing health and physical health. Sensors inside the hearing aids detect movement, activities and gestures. This means that on top of amplifying sound, the hearing aid can assess what is happening in the environment and switch to the appropriate settings, which improves the user experience.

Besides the device itself, there is an app that pairs with it, giving the user another way to interact with their device. They can link the Thrive app to their hearing aid, and the app will display the data the hearing aid has collected. For example, the device can pick up speech in a foreign language and translate it into one the user understands. There is also speech-to-text recognition, with the transcript displayed in the app.

One pro of this device is its use of AI, which means it learns from the user’s behaviour and inputs. Each device becomes increasingly personalised to its user, and having a device that learns from you and applies what it has learnt gives you a sense of involvement with it; it deepens the interaction between user and device.

Another pro is that when used together with the Thrive app, there is much more interaction involved. The device collects the data and the app displays it in a fun way that lets the user keep track of it easily. The app doesn’t simply display the raw data as it is: it categorises it into a “Body Score” and a “Brain Score”, which add up to a total “Thrive Wellness Score” out of 200, helping the user see how healthy they are in terms of both hearing health and physical health. The app also provides insights into how the data is used and gives the user a more detailed explanation of what each number means. This is a plus for the user experience, because the clarity of the data makes it easier for the user to understand how the device is helping them in their daily life.

The con of this device is its cost: I would consider it a premium product that is not accessible to everyone who needs it. With the number of features packed into the hearing aid, and the different sensors needed for the different functions, the price will definitely be an issue for some users.


References:

https://www.starkey.com/hearing-aids/livio-artificial-intelligence-hearing-aids#pane-hearing

Starkey Livio AI Hearing Aid Review by Doctor Cliff, AuD

 

Interactive Devices DOW-IOT: Sen.se Mother

Standard

Sen.se Mother is, apparently, “like a mum, but better”: a tracker that can keep tabs on many aspects of your life. It helps the user keep their house organised while also taking care of the user themselves, and what’s better is that the user can choose what they want their Sen.se Mother to monitor. Sen.se Mother is essentially a wireless hub that comes with four small colour-coded tracking devices for the user to attach to the relevant appliances. These trackers contain motion and temperature sensors that collect data and send it to the main Mother hub, which displays all sorts of useful information in the paired web and mobile applications, where the user can control everything.

 

The pro of this device is its high adaptability. The user has the freedom to choose what they want to track and can place the sensors wherever they want on their appliances, so it is highly unlikely that there will be an appliance the tracker can’t be attached to. The different colours of the trackers are also very useful in helping the user match each tracker to the data displayed in the app, because the colour scheme of the data shown in the app corresponds to the colour of the tracker on the appliance (instead of a plain white tracker).

The statistics shown are also pretty in-depth for a sensor so small and compact.

However, a con is that the app can only display the data when the user is within 65 feet of the main Mother hub, although the trackers keep recording outside that range. This means the user cannot freely access their data and is limited to a small range around the main hub. At a hefty $300, Sen.se Mother is definitely on the pricier side and only suitable for people who really want to micromanage every aspect of their life.

 


References:

https://culanth.org/fieldsights/the-sense-mother

https://www.cnet.com/reviews/sense-mother-review/

 

Interactive Devices DOW-Health: Underwater Sweat Sensor

Standard

A sweat sensor that looks similar to a mosquito repellent patch has been developed by John Rogers, Ph.D., an engineering professor and chairman of the board of directors at Epicore Biosystems, a company that specialises in flexible skin-mounted devices that monitor body fluids. What I find interesting about this sensor is that it uses the colour changes of certain chemical reactions to give very visual feedback to the user. Devices often present information as numbers and statistics, which can get overwhelming; this sensor lets the user read the result in a very simple way, through colour.

How the device works via chemistry and indicators:

This sweat sensor can help athletes monitor how much fluid they are losing so that they can replenish it accordingly and maintain their best performance.

Although there are other devices that measure sweat levels in athletes as they exercise and engage in sports, there was no way for swimmers to keep track of their sweat levels, even though they are also athletes. One pro of this particular sweat sensor is that it is waterproof, so swimmers can also keep track of their hydration level and, in turn, their overall performance. Sweat loss is related to electrolyte loss; as John Rogers says, “Dehydration affects performance and can lead to cramping in the pool, but you don’t have any idea how much water you need.” One con of this device is that there is no way for it to connect to a mobile app, so there is no long-term tracking included: users have to keep records on their own if they want to make use of the collected data in the long run. However, research is ongoing, and the team is looking into pairing a mobile app with the device to make long-term tracking much easier.


References:

https://www.wired.com/story/an-underwater-skin-sensor-lets-swimmers-track-their-sweat/?mbid=social_tw_sci&utm_brand=wired&utm_campaign=wiredscience&utm_medium=social&utm_social-type=owned&utm_source=twitter

https://www.docwirenews.com/docwire-pick/future-of-medicine-picks/wearable-sweat-sensor-informs-athletes-of-water-and-electrolyte-loss/

https://www.inverse.com/article/52755-waterproof-wearable-sweat-sensor

Interactive Devices: Multimodal Sketch (Arduino)

Standard

Project done by: Emma, Wei Lin, Natalie, Wan Hui

Our multimodal project is called Feel My Message. It aims to inform a person of the content of a received Telegram message without disturbing the people around them.

 

CONTEXT:

Our device is ideally an unobtrusive small box that can be placed on a table or kept in one’s pocket. We envisioned it being used when one is unable to check their phone or laptop for messages, such as during a meeting. In our prototype, however, the device is bigger than intended due to hardware and financial constraints.

 

HOW THE DEVICE WORKS:

The device first receives messages sent to the user via our Telegram bot. Then, two separate dials inside the device rotate to indicate who the sender is and what their message contains. This information is conveyed through tactile symbols that the user can touch with a finger and recognise on contact. To make the device universal, the symbols representing the message content are interchangeable to suit the needs of different users, and users can assign their own contacts to the dial that represents the sender.

 

PICTURES OF THE UNCOMBINED PROTOTYPE:

For the symbols, we went through several prototypes before settling on the final set. We used plastic poly pellets to make the symbols because they were more forgiving at the experimental stage: we could easily remould them while we were still working out which symbols worked best for each kind of message.

Before we settled on using a mould, we tried shaping the poly pellets into the symbols directly, but the symbols didn’t feel very distinct because they were too smooth to the touch. With a mould, the finish was rougher and therefore more obvious, so we went with the mould.

 

DEMO:

HARDWARE:

  • Arduino Uno Microcontroller
  • ESP32 wifi and bluetooth board
  • 2 x 360° continuous servo motors (MG90S + MG995)
  • Vibration motor
  • Breadboard

SOFTWARE:

  • Arduino IDE
  • Telegram Bot Maker

 

SETTING UP THE TELEGRAM BOT:

 

CHALLENGES:

In the beginning, we tried using an ESP8266 wifi module with the Arduino Uno, connecting the Blynk platform to the Arduino IDE. However, we could not get the ESP8266 module to work despite numerous attempts: it would not connect to the wifi or work with Blynk.

Afterwards, we bought an ESP32 wifi and bluetooth board. We tried running two separate programs, one on the ESP32 and one on the Arduino Uno: the Uno controlled the two servo motors (5 V) while the ESP32 ran the main code. We had to do this because the ESP32 runs on 3.3 V while the servos run on 5 V, so the ESP32 board could not supply enough power to drive the servos and the vibration motor. But with this setup, we were unable to get the Arduino Uno to communicate with the ESP32 over serial, because they were running at different baud rates (9600 vs 115200), and if we uploaded the code via the Arduino Uno, it could not access the ESP32 libraries.

We then attempted to power the servos from the 5 V or 3.3 V pin on the ESP32. We were able to receive the messages, but this produced a strange error on the serial port, and the servos would not move. From our research, we found that we needed a 5 V to 3.3 V logic level converter. However, we did not have one available to us, so we had to think of an alternative.

In the end, our solution was to combine the Telegram bot code and the servo motor code into one main program, upload it to the ESP32 and use the ESP32 as our main microcontroller unit, then power the servos from a separate source. We used the Arduino Uno as the 5 V power supply, routed through a breadboard. For this to work, the ESP32 and all our other hardware had to share a common ground.

(how the wires are connected)

Now that there was sufficient power for the servo motors, we moved on to combining the codes, making sure that the servos responded correctly to the received messages. Here is our final combined code.

 

FINAL COMBINED CODE:

#include <ESP32Servo.h>

#include <WiFi.h>
#include <WiFiClientSecure.h>
#include <UniversalTelegramBot.h>

static const int outerservoPin = 16;
static const int innerservoPin = 17;

Servo innerServo;
Servo outerServo;

int vMotorPin = 18;
bool ended = false;
String text, from_name;
int person = 0;
int msgsent = 0;


// Initialize Wifi connection to the router
char ssid[] = "XXXX"; // your network SSID (name)
char password[] = "XXXXXXXXXXXXXXXXXX"; // your network key

// Initialize Telegram BOT
#define BOTtoken "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

WiFiClientSecure client;
UniversalTelegramBot bot(BOTtoken, client);

int Bot_mtbs = 1000; //mean time between scan messages
long Bot_lasttime; //last time messages' scan has been done
bool Start = false;

void handleNewMessages(int numNewMessages) {
  // Serial.println("handleNewMessages");
  // Serial.println(String(numNewMessages));

  for (int i = 0; i < numNewMessages; i++) {
    String chat_id = String(bot.messages[i].chat_id);
    String text = bot.messages[i].text;
    String from_name = bot.messages[i].from_name;

    if (from_name == "Wan") {
      person = 1;
    }
    if (from_name == "Emma") {
      person = 2;
    }
    if (from_name == "Wei Lin") {
      person = 3;
    }

    if (text == "/love") {
      bot.sendChatAction(chat_id, "typing");
      delay(100);
      bot.sendMessage(chat_id, "Received love!");
      Serial.println("lof from " + from_name);
      msgsent = 1;
    }

    if (text == "/happy") {
      bot.sendChatAction(chat_id, "typing");
      delay(200);
      bot.sendMessage(chat_id, "Smile! :D");
      Serial.println("happy from " + from_name);
      msgsent = 2;
    }

    if (text == "/angry") {
      bot.sendChatAction(chat_id, "typing");
      delay(200);
      bot.sendMessage(chat_id, "Anger sent!! >:(");
      Serial.println("angry from " + from_name);
      msgsent = 3;
    }

    if (text == "/emergency") {
      bot.sendChatAction(chat_id, "typing");
      delay(200);
      bot.sendMessage(chat_id, "Called for help!!");
      Serial.println("emergency from " + from_name);
      msgsent = 4;
    }

    if (text == "/home") {
      bot.sendChatAction(chat_id, "typing");
      delay(200);
      bot.sendMessage(chat_id, "Welcome Home!!");
      Serial.println("home from " + from_name);
      msgsent = 5;
    }

    if (text == "/start") {
      String welcome = "Welcome to Feel My Message, " + from_name + ".\n";
      welcome += "Here is a list of commands for you to use :D\n\n";
      welcome += "/love - send love 💕 \n/happy - send happiness ☺ \n/angry - send anger 😡 \n/emergency - send sos 🆘 \n/home - send I’m at home 🏠";
      bot.sendMessage(chat_id, welcome);
    }
  }
}


void setup() {
  pinMode(vMotorPin, OUTPUT);
  innerServo.attach(innerservoPin);
  outerServo.attach(outerservoPin);

  Serial.begin(115200);

  // Attempt to connect to Wifi network:
  Serial.print("Connecting Wifi: ");
  Serial.println(ssid);

  // Set WiFi to station mode and disconnect from an AP if it was previously
  // connected
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    Serial.print(".");
    delay(500);
  }

  Serial.println("");
  Serial.println("WiFi connected");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
}

void stopturning() {
  innerServo.write(93); // ~93 is the stop position for our continuous servos
}

void vibrate() {
  delay(50);
  digitalWrite(vMotorPin, HIGH);
  delay(500);
  digitalWrite(vMotorPin, LOW);
}

// Each msgN() rotates the inner dial to that message's tactile symbol,
// vibrates, holds for 5 seconds so the user can feel the symbol, then
// rotates back to the starting position. The delay values were tuned by
// hand for our servos.
void msg1() {
  innerServo.write(70);
  delay(640);
  innerServo.write(93);
  vibrate();
  delay(5000);
  innerServo.write(110);
  delay(750);
  stopturning();
}

void msg2() {
  innerServo.write(70);
  delay(1000);
  innerServo.write(93);
  vibrate();
  delay(5000);
  innerServo.write(110);
  delay(1150);
  stopturning();
}

void msg3() {
  innerServo.write(70);
  delay(1550);
  innerServo.write(93);
  vibrate();
  delay(5000);
  innerServo.write(110);
  delay(1700);
  stopturning();
}

void msg4() {
  innerServo.write(110);
  delay(1280);
  innerServo.write(93);
  vibrate();
  delay(5000);
  innerServo.write(70);
  delay(1010);
  stopturning();
}

void msg5() {
  innerServo.write(110);
  delay(750);
  innerServo.write(93);
  vibrate();
  delay(5000);
  innerServo.write(70);
  delay(620);
  stopturning();
}

void sendmsg() {
  if (msgsent == 1 && ended == false) {
    Serial.println("msg1");
    msg1();
    msgsent = 0;
  }

  if (msgsent == 2 && ended == false) {
    Serial.println("msg2");
    msg2();
    msgsent = 0;
  }

  if (msgsent == 3 && ended == false) {
    Serial.println("msg3");
    msg3();
    msgsent = 0;
  }

  if (msgsent == 4 && ended == false) {
    Serial.println("msg4");
    msg4();
    msgsent = 0;
  }

  if (msgsent == 5 && ended == false) {
    Serial.println("msg5");
    msg5();
    msgsent = 0;
  }
}

void per1() {
  // Person 1's symbol sits at the dial's home position, so no rotation is needed.
}

void per2() {
  outerServo.write(180);
  delay(270);
  outerServo.write(93);
}

void per2back() {
  outerServo.write(0);
  delay(270);
  outerServo.write(93);
}

void per3() {
  outerServo.write(0);
  delay(250);
  outerServo.write(93);
}

void per3back() {
  outerServo.write(180);
  delay(260);
  outerServo.write(93);
}


void loop() {
  if (millis() > Bot_lasttime + Bot_mtbs) {
    int numNewMessages = bot.getUpdates(bot.last_message_received + 1);

    while (numNewMessages) {
      // Serial.println("got response");
      handleNewMessages(numNewMessages);
      numNewMessages = bot.getUpdates(bot.last_message_received + 1);
    }

    Bot_lasttime = millis();
  }

  if (person == 1 && ended == false) {
    Serial.println("per1");
    per1();
    sendmsg();
    person = 99;
  }

  if (person == 2 && ended == false) {
    Serial.println("per2");
    per2();
    person = 99;
    sendmsg();
    per2back();
  }

  if (person == 3 && ended == false) {
    Serial.println("per3");
    per3();
    person = 99;
    sendmsg();
    per3back();
  }
}

Download our code here.

Interactive Devices: LED Room Sketch (Arduino)

Standard

Project done by: Sylvia, Daryl, Wei Lin, Wan Hui

Initial ideation: sketch and prototyping

After coming up with the sketches and putting them in Processing, we were then tasked with connecting it to Arduino. We settled on 4 main functions with different triggers.

  • On/Off – Proximity Sensor
  • Brightness – Volume (Microphone level)
  • Colour Change – Compass + Gravity
  • Preset Colour Patterns – Touch 2D + Wekinator

Video Demo

On/Off

We wanted the on/off function to work in the most straightforward way: the LED lights turn on when the phone is face up (i.e. the proximity sensor is not covered, giving a value of “false”/“0”), and turn off when the phone is placed face down on a surface (i.e. the proximity sensor is covered, giving a value of “true”/“1”).

 

Brightness

One interesting feature we found in the iOS version of ZIG SIM was “Mic Level”, so we wanted to experiment with it. We figured volume would pair well with brightness, because it is quite intuitive for the LED to get more intense as the environment gets louder. We also tested this feature with music, and the results turned out surprisingly well!

 

Colour Change

Essentially, we wanted to use the compass to draw a circle, using gestures to change the colours while “colouring” the circle. We made use of the Gravity, Quaternion and Compass sensors from ZIG SIM, linking them through Processing to Arduino. In Processing, the data received from ZIG SIM is translated into colour codes and number patterns, which are then sent to Arduino for the Adafruit LED strip.

 

Preset Colour Patterns

We learnt how to use Wekinator in the previous lesson, so we thought that it would be a good chance for us to apply it in our project. We coded various LED light strip patterns in Arduino, and the trigger for each of the different patterns would be the different touch gestures that we trained in Wekinator. We had 3 gestures to trigger 3 different patterns.

 

Other variations that we tried and tested

  • Saturation – Gravity
  • Colour Wheel – Touch 2D
  • Pattern Change – Accelerometer (Shake)

Saturation

Saturation defines the brilliance and intensity of a colour: white and black (grey) are added to a colour to reduce its saturation. Hence, we worked in greyscale (black to white), with the highest point being white and the lowest point being black.

 

Colour Wheel

We used the full dimensions of the screen to map touch positions to the colours of the RGB colour palette.

 

Pattern Change

Using the accelerometer: if the value reads more than 8 or less than -8, the device registers the motion as one shake (i.e. “shaketrue”) and the next pattern plays. Shaking cycles through the preset patterns.

Click here for our codes.

Interactive Devices: Social Distancing Project (Analog)

Standard

“Face-to-Face” done by Nasya and Wan Hui.

“Face-to-Face” is an analog device that allows us to interact in a time of social distancing. Social distancing limits the opportunities for healthy people to come into close contact with sick people, which reduces opportunities for disease transmission.

“Face-to-Face” forces you and your friend to stay at least 1 metre apart in order for both of you to fully see each other’s face in the mirror. At the same time, it preserves the “closeness” of a face-to-face conversation, since you are still in the same physical space, while minimising the spread of transmission because you are not directly facing each other.

Sketches (Device and how it works):

Structure of the device:

Prototype: made with cardboard, masking tape, reflective paper, and wooden sticks

Video demonstration: Interactive Devices: Social Distancing Project (Video Demo)

IM2: Guest Lecture Reflection (Automated Utopia)

Standard

The most interesting part of the guest lecture was when he showed us a snippet of the Korean film “Doomsday Book – The Heavenly Creature”. I found the concept of a robot god very interesting. In the film, the robot claims that it is Buddhist and has reached enlightenment; suspecting it is broken, a robot repairman is dispatched to the monastery to “fix” it, but to no avail. I think this film depicts the relationship between humans and robots well: it shows how reliant we are on technology, yet, as we progress, we fear that technological advancements will overwhelm us. It is not the regular, programmed robots but the AI-powered ones that “pose a threat” to our existence. In “Robots, Rights and Religion”, a scholarly paper by James F. McGrath, he writes:

“Because if machines could think, if they could be persons, then they would quickly evolve to be so far superior to biological organisms in intelligence and strength that they would take over. It is not surprising that some have breathed a sigh of relief in response to the failure of real artificial intelligence to materialize as predicted in so much science fiction.”

This statement suggests that we can coexist with “thoughtless” machines, but once machines gain the ability to think for themselves, we are scared to let them evolve beyond us. It is ironic that we humans are the ones who created them, yet we are afraid of our own creation.

However, some in Japan think otherwise, and are so accepting of the idea of incorporating AI into religion that they built a robot priest to bless worshippers.

Mindar, the new android priest at Kodaiji temple in Japan

The robot priest, Mindar, is not AI-powered yet, but its creators say they intend to give it machine-learning capabilities in the future. The temple’s chief steward, Tensho Goto, said, “This robot will never die; it will just keep updating itself and evolving. With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism.” This sparked a thought: if religion is the belief in a superhuman being, does AI have the capacity to become that superhuman being? It is technically immortal compared to us humans; all it needs is maintenance. If AI can even take over the role of a god, then where is our place on Earth when AI becomes the norm? A utopia is an imagined community or society that possesses highly desirable or nearly perfect qualities for its citizens. But with AI threatening our existence, could such a society really be considered a utopia?

 

References:

https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity

https://digitalcommons.butler.edu/cgi/viewcontent.cgi?article=1198&context=facsch_papers

IM2: Reading Assignment

Standard

A Companion to Digital Art by Christiane Paul – Aesthetics of Digital Art

From this reading, I realised that the aesthetics of digital art differ from aesthetics in general. Aesthetics, by the Oxford Dictionary’s definition, is the branch of philosophy that studies the principles of beauty, especially in art. However, aesthetics in digital art is about far more than beauty. “Aesthetics” in the context of digital art becomes more of a theory than a concept, because of the several mathematical approaches it takes (for instance, numerical aesthetics, which uses several variables to form relationships or formulas that can determine aesthetic value).

One chapter in the book talks about “Computational Aesthetics”. The authors M. Beatrice Fazi and Matthew Fuller stated that:

Digital art, however, builds upon and works through the computational, sharing its limits and potentials while also inheriting conceptual histories and contexts of practice. For this reason, we contend that an aesthetics of digital art is, at a fundamental level, a computational aesthetics.

I agree with their thesis. As technology is incorporated into art, aesthetics becomes about more than just the visual elements, as compared to fine art, where you can only judge based on visual elements, because that is the purpose of fine art pieces such as paintings or sculptures. When deciding whether a digital art piece is aesthetic, I think it is important to look at the process and method by which the work was made to determine its aesthetic value. For digital art, I think it is essential that the role of the computer is recognised as part of the work’s meaning. Paul Crowther more or less shares this view, as he mentions in his paper ‘The Aesthetics of Digital Art’ that, “The aesthetics of electronic or digital artwork hinges, to a large extent, on non-visual aspects such as narrativity, processuality, performativity, generativity, interactivity, or machinic qualities.”

Similar to how Dieter Rams came up with 10 principles for “good design”, Fazi and Fuller propose 10 aspects of “computational aesthetics”, which can be used as general benchmarks to determine whether the computational structure used in a digital artwork is aesthetic. They state that “If aesthetics can be understood as a theory of how experience is constructed, then this list attempts to account for some of the modalities of the computational that partake in such constructions.” The 10 aspects are as follows:

  1. Abstraction and concreteness
  2. Universality
  3. Discreteness
  4. Axiomatics
  5. Numbers
  6. Limits
  7. Speeds
  8. Scale
  9. Logical Equivalence
  10. Memory

I think that having these criteria is useful in evaluating the aesthetic value of digital art. They ensure there is an objective standard for the way digital artworks are perceived.

I also particularly like this definition of digital art in the book:

Digital art, however, is potentially time‐based, dynamic, and non‐linear: even if a project is not interactive in the sense that it requires direct engagement, the viewer may look at a visualization driven by real‐time data flow from the Internet that will never repeat itself, or a database‐driven project that continuously reconfigures itself over time. A viewer who spends only a minute or two with a digital artwork might see only one configuration of an essentially non‐linear project. The context and logic of a particular sequence may remain unclear.

I think this is an important aspect of digital art, particularly interactive art. The possibility of various outcomes from a single art piece is fascinating, and that is what makes it “aesthetic”.

Books for reference:

http://about.mouchette.org/wp-content/uploads/2018/02/Christiane_Paul_A_Companion_to_Digital_Artb-ok.org_.pdf

https://www.academia.edu/37948527/The_Aesthetics_of_Digital_Art.pdf

 

IM2: Inspiring Example of Interactive Art + Reflection

Standard

THE UNILEVER SERIES: CARSTEN HÖLLER

This series of slides was created by Carsten Höller, a scientist turned artist. Many of his works are inspired by human relationships and social context.

Carsten Höller - The Slide at ArcelorMittal Orbit Tower, 2016 London

(This slide goes around the structure 12 times, offering panoramic views of London’s cityscape.)

“A slide is a sculptural work with a pragmatic aspect, a sculpture that you can travel inside. However, it would be a mistake to think that you have to use the slide to make sense of it. Looking at the work from the outside is a different but equally valid experience, just as one might contemplate the endless column by Constantin Brancusi from 1938. From an architectural and practical perspective, the slides are one of the building’s means of transporting people, equivalent to the escalators, elevators or stairs. Slides deliver people quickly, safely and elegantly to their destinations, they’re inexpensive to construct and energy-efficient. They’re also a device for experiencing an emotional state that is a unique condition somewhere between delight and madness.” – Carsten Höller

 

Pictures of various other slides at the different locations:

Carsten Höller - Isomeric Slides, 2015, Hayward Gallery, London

(Isomeric Slides, 2015, Hayward Gallery, London)

Carsten Höller – Test Site, 2006, Turbine Hall, Tate Modern, London

(Test Site, 2006, Turbine Hall, Tate Modern, London)

Carsten Höller - Vitra Slide Tower, Weil am Rhein, Germany, photo Wladyslaw Sojka

(Vitra Slide Tower, Weil am Rhein, Germany
Photo: Wladyslaw Sojka/archdaily.com)

 

My thoughts:

Fun. That is the first word that comes to mind when I see his works. However, that’s not all they are about. I really like this series because, despite its simplicity, the artist draws various connections between slides and human relationships, emotions and experience as people slide down. The artist first thought about how slides could serve as an amazing mode of transportation, yet it is unusual for them to be used as such, which inspired him to challenge the use of the slide. He mentioned that his favourite description of a slide comes from the French writer Roger Caillois, who speaks of vertigo as “a kind of voluptuous panic upon an otherwise lucid mind.” I agree that this statement captures the essence of the slide: you can see how it curves and goes all around, and though you know exactly how the journey will go, you still get a sense of excitement as you go down. I also like how the faces of the people can be captured at the end of the ride; in most of the pictures and videos, you can clearly see genuine smiles on the users’ faces, which is what I find most valuable about this series: it can trigger happiness, as short-lived as it may be. The slide allows users to let go and forget their troubles, albeit for a brief moment. This work taught me that an artwork doesn’t have to be fanciful to be considered art, and that for interactive art, the main point is for the users to enjoy the experience, more than anything else.

 

References:

https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-carsten-holler-test-site

https://www.theguardian.com/artanddesign/2015/may/17/carsten-holler-travel-down-a-slide-without-smiling-decision

https://gagosian.com/quarterly/2016/07/08/carsten-hollers-arcelormittal-orbit-slide/

https://gagosian.com/artists/carsten-holler/

https://www.newmuseum.org/exhibitions/view/carsten-hoeller-experience