Why our screens make us less happy

Adam Alter began his TED talk by describing how Steve Jobs limited technology and screen time for his kids.

This made him think about what screens were doing to people.

He collected data on how much time we spend with our technologies. The time we spend with them per day has been increasing year by year.

The weird thing is that we are actually spending more time on the apps that don’t make us happy.

Why is that happening?

Adam Alter said that it was because there were no “stopping cues”. Back when I was much younger, in primary school, I used to watch dramas on TV. There was no such thing as binge-watching; I had to wait till the next day or week to watch the next episode. That was my stopping cue, a sign for me to go and do something else, something new.

But with the way we consume media nowadays, there is always new content rolling in from different apps. There is no stopping because we are so connected with each other. Stopping cues are non-existent; there is nothing to prompt us to put down our screens and move on to do something else.

We need to break away from technology once in a while. We need to breathe.

Being disconnected and giving ourselves some real alone time is a very important and healthy thing to do.

 

Minimalism Show Reflection

 

MEGA DEATH – TATSUO MIYAJIMA 

The perception of human lives in the form of data.

Mega Death is a work about the lives that were lost to war and conflict in the 20th century.

Miyajima attempts to portray this circle of life through counting digits. The countdown symbolising each death is so fast that it is gone before I am able to grasp and ponder upon that one life amongst the data of millions.

Time waits for no one; death represents not the end, but simply a part of the cycle of life. In a sense, we can say that death is the birth of a new life.

 

ROOM FOR ONE COLOUR – OLAFUR ELIASSON 

The perception of the world around us.

By reducing the colours we see, we remove “distractions” from our world of sight. This forces us to focus on the details of the things we see. I found myself carefully observing the features of the people around me, fascinated by the forms of the veins on my palms.

“How does colour help to shape how we view the world?” This was one question I had in mind while inside the installation.

There is a theory of colour perception which says that we don’t actually see the same colours. A colour I label as blue might not be the same colour in another person’s eyes. These labels were simply taught and passed down by us humans to tie what we are seeing to a common name we can use to communicate.

My One Demand

My One Demand is an interactive film about unrequited love. It is a continuous, one-take shot filmed with a single camera, combined with comments from a live viewing audience.
The shoot actually happens right outside the theatre at the same time as the audience sits inside watching the live broadcast.
The audience is engaged and encouraged to interact live by responding to a series of questions posed by the narrator about unrequited love.
Selected answers are added to the script. This makes each script unique, as one screening will never be the same as another.
As the film comes to an end, the narrator asks a final question: what is something you wish you could change, but couldn’t? All the answers received are projected onto the screen.

MOD’s narrative is not just about telling the stories of the characters in the film to the audience, but also about pulling in snippets of the audience’s own narratives.
This is especially so for the last question. As the viewers watch the answers of those around them, they wonder about their own narratives too.

2ch – vtol // brain to brain communication

 

Dmitry Morozov is a Russian multidisciplinary artist based in Moscow. His work is based around media arts including sound and robotics.

2ch is an interactive instrument for communication between two people using brainwaves. The brainwaves are measured using headsets fitted with EEG (electroencephalogram) readers, sensors that measure the electrical activity in our brains. Our brain cells communicate via electrical impulses and are active all the time, even when we are asleep.
The recorded electrical activity is translated into sound, motion and video images.
The two users connected to the machine are supposed to synchronize their minds, guided by pitch, mechanical motion and visualization.

The point of modern-day computer interfaces is to connect our brains to the system. An example is the mouse and keyboard, which we use to translate our thoughts into data that can be seen or heard. The interface acts as a bridge for us to connect with other people.
This project, however, works as a brain-to-brain interface, which skips this bridge by allowing the EEG headset to “read” the data (thoughts) in our minds directly and output them to the other user.
I feel that communication in this sense becomes more raw and pure, because translation in any form can never be 100% accurate. Compared to us telling an interface what we are thinking, letting the machine read our minds directly will probably lose less in translation.
In vtol’s own words, “the machine is not the end point of your thoughts, but the mediator to another person”.
//
References
https://www.vice.com/en_au/article/ez53d7/cybernetic-device-turns-brainwaves-into-telepathic-art
https://www.fastcompany.com/3063203/this-pyramid-sculpture-is-an-interface-for-brain-to-brain-communication
Pictures: http://vtol.cc/filter/works/2ch

200227.5 // The Lag

Hardware

We tried connecting all 8 solenoids to the Arduino board through 2 relays.

Each relay needs to be powered by its own adapter. Using 1 adapter for 2 relays causes the relays to switch on and off in turns instead of simultaneously.

circuit diagram

 

Software

The current code uses a random delay inside each zone loop. The program executes the code line by line, and the entire process stops whenever there is a delay. Every loop has to be completed before the program is able to read the distance information from the Kinect to determine which zone to activate. The lag is induced because there is no way to constantly read the distance data and jump out without completing a full cycle of the zone loop. A non-blocking alternative is sketched below.
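One common way around this, sketched here purely as an illustration (the pin number, timings and one-byte zone protocol are placeholders, not our actual code), is to drop delay() entirely and use millis() as a non-blocking timer, so the loop keeps spinning and the distance data can be checked on every pass:

```cpp
// Minimal non-blocking sketch (illustrative; pin number, timings
// and the one-byte zone protocol are placeholder assumptions).
const int SOLENOID_PIN = 8;        // hypothetical relay input pin

unsigned long lastClick = 0;       // time of the last state change
unsigned long interval = 0;        // current gap before the next change
bool solenoidOn = false;
int currentZone = 0;               // updated from the Kinect every pass

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
  Serial.begin(9600);              // zone data arrives over serial
}

void loop() {
  // 1. Always check for new zone data first -- nothing blocks here.
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c >= '0' && c <= '2') currentZone = c - '0';
  }

  // 2. Act only when the interval has elapsed, instead of delay()-ing.
  unsigned long now = millis();
  if (now - lastClick >= interval) {
    solenoidOn = !solenoidOn;
    digitalWrite(SOLENOID_PIN, solenoidOn ? HIGH : LOW);
    lastClick = now;
    // Chaotic zone gets random gaps; calmer zones get a steady pulse.
    interval = (currentZone == 0) ? random(50, 400) : 250;
  }
}
```

Because nothing blocks, a new reading can change the behaviour mid-pattern instead of waiting for a full cycle of the zone loop.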

 

PEN-ing ma thoughts on Principles of New Media ;)

PEN is a sound sculpture that responds to users approaching. The clicking of pens changes from a state of chaos to a calm, unified one as the user moves towards it.

sculpture blueprint

NUMERICAL REPRESENTATION

Sampling and converting the data received from the Kinect/ultrasonic sensor into digital values is the backbone of how PEN works. The data collected directly translates into how the sound sculpture reacts to the user. This translation into discrete values to be tracked triggers the different states of the scores, which in turn affect the state of chaos/calmness of the sounds.
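As a minimal sketch of this idea (the ranges and state values are placeholders, not our calibrated numbers), the continuous distance reading gets reduced to a small set of discrete states for the program to track:

```cpp
// Illustrative sampling step: a continuous distance reading is
// reduced to one of three discrete states that the program tracks.
// The ranges here are placeholders, not our calibrated values.
int sampleState(int distanceMm) {
  distanceMm = constrain(distanceMm, 500, 3500);  // clamp to the useful range
  return map(distanceMm, 500, 3500, 2, 0);        // near -> 2 (calm), far -> 0 (chaos)
}
```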

 

MODULARITY

Hardware – The pens could be swapped out and replaced by another object, changing how the entire sculpture would sound.

Software – The scores work like different pieces of sheet music, with the pens as the instruments. We could change what “songs” the pens are playing by changing or modifying the scores.
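One way to picture this in code (a sketch only; the patterns are made up): each score is just a swappable array of on/off steps, so changing the “song” means changing the data, not the hardware logic.

```cpp
// Scores as swappable data: each row is one "song" of on/off steps
// played at a fixed tempo. The patterns here are illustrative only.
const int STEPS = 8;
const byte SCORES[][STEPS] = {
  {1, 0, 1, 1, 0, 1, 0, 0},   // score 0: irregular, chaotic feel
  {1, 0, 1, 0, 1, 0, 1, 0},   // score 1: even, calmer pulse
  {1, 1, 1, 1, 1, 1, 1, 1},   // score 2: unified, constant clicking
};
```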

 

AUTOMATION

The ability for the sets of scores to be triggered based on pre-determined distances between the user and the sensor gives the sculpture a life of its own. The code gives PEN a set of logic to follow and actions to execute when certain requirements are met.
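A hedged sketch of that rule-following behaviour (the helper names and distance bands are hypothetical, not our final design): once the bands are set, PEN switches states on its own whenever a reading crosses one.

```cpp
// Hypothetical stubs standing in for the real score-playing routines.
void playCalmScore()       { /* unified clicking */ }
void playTransitionScore() { /* in-between state */ }
void playChaosScore()      { /* scattered clicking */ }

// Pre-determined rules: each distance band triggers its own action,
// with no manual intervention once the sketch is running.
void applyRules(int distanceMm) {
  if (distanceMm < 800) {
    playCalmScore();
  } else if (distanceMm < 2000) {
    playTransitionScore();
  } else {
    playChaosScore();
  }
}
```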

 

VARIABLE 

PEN is completely reliant on the interaction between the users and the sculpture. The scores the pens are clicking to will not change without the sensor detecting an object approaching.

The scores selected for use will also vary due to the use of a randomiser. This makes the sounds made by the sculpture more interesting while still maintaining the same concept of chaos/calmness that we want.
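A minimal sketch of the randomiser (the pool indices are hypothetical): several scores share the same chaotic character, and one is drawn at random each cycle, so the texture varies without changing the concept.

```cpp
// Illustrative randomiser: a pool of scores shares one character,
// and one is drawn at random each cycle. Indices are hypothetical.
const int CHAOS_POOL[] = {0, 3, 5};   // score indices with a chaotic feel

int pickChaosScore() {
  int n = sizeof(CHAOS_POOL) / sizeof(CHAOS_POOL[0]);
  return CHAOS_POOL[random(n)];       // Arduino random(max) returns [0, max)
}
```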

 

TRANSCODING

The concept behind PEN plays on the idea of the interstice of sound. With the use of technology, we are able to emphasize the gaps between sounds by exaggerating and enlarging the sounds around them. Through this rapid shift between chaos and calmness, produced by something we would normally not pay attention to, we hope to highlight the gaps/silences between sounds to the users.

200227.3 // Mini Solenoid Mockup

Hardware

sculpture blueprint
  • The solenoid gets hot very quickly; the duration between clicks is to be increased to prevent overheating

 

Software

Borrowed the Kinect to test the depth sensor.

sample base code

The Kinect outputs a double image, as seen on the right. This is because there is a physical offset between the IR camera and the IR projector.

https://social.msdn.microsoft.com/Forums/en-US/74ff175a-291f-445d-ab55-09d2af7cfd4c/why-did-my-kinect-sensor-show-such-a-double-image?forum=kinectsdk

Thankfully, this isn’t an issue as all we need is the depth data and not precise location information.

 

Next, we tried to colour-code objects based on how far they are from the Kinect.

weird glitch

The sample code we found online did not include a function to draw just the raw depth distance data.

Looked for Corey for some help and…

raw depth data

200227.2 // Mid Term Project Review

 

We focused on the code and tried testing out the scores using just the relay first.

 

IMPROVEMENTS

  • Decided to change from the ultrasonic sensor to the Kinect, as the ultrasonic sensor could only detect distance linearly, along a single line. Using the depth function of the Kinect increases the detection area and the accuracy of detecting approaching people.
  • We will retrieve the depth values from the Kinect in Processing and hook it up to the Arduino to make the solenoids click. A sketch of the Arduino side of this hookup is below.
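A hedged sketch of the Arduino side of that hookup (the one-character-per-frame protocol, pin number and periods are assumptions, not our final design): Processing derives a zone from the Kinect depth image and streams it over serial, and the Arduino clicks the solenoid at a rate set by the latest zone.

```cpp
// Illustrative Arduino receiver for the Processing -> Arduino hookup.
// Assumes Processing sends one zone character ('0'..'2') per frame
// over serial; the pin number and periods are placeholders.
const int SOLENOID_PIN = 8;
int zone = 0;

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  while (Serial.available() > 0) {
    char c = Serial.read();
    if (c >= '0' && c <= '2') zone = c - '0';  // keep only the latest zone
  }
  // Placeholder behaviour: closer zones click faster.
  unsigned long period = 600 - zone * 200;     // 600 / 400 / 200 ms
  digitalWrite(SOLENOID_PIN, (millis() % period) < 20 ? HIGH : LOW);
}
```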