Human 2.0

What is Human 2.0?

Image taken from https://www.scientificamerican.com/article/human-2-0-tech-upgrades-for-the-nervous-system-cartoon/

Our affinity with technology is ever growing. As technology gets closer and closer to us, the boundary between human and machine gets blurred. The idea of Human 2.0 is the fusing of technology with the human body as a means to augment ourselves, through prosthetics, wearables, implants, or body alteration, bringing us a spectrum of new senses and improved functions that cannot be experienced without the upgrade.

 

This idea isn’t new: Cyberpunk-styled animations, films, and games like Ghost in the Shell, Blade Runner, and Overwatch have already imagined scenarios of human-technology augmentation.

Hacking in “Ghost in the Shell”

Gif taken from https://giphy.com/explore/sombra

Sombra is one of the cybernetically enhanced characters in Overwatch, with augmented abilities like “hacking”

Image taken from https://vocal.media/geeks/blade-runner-2049-has-a-villain-problem-and-we-need-to-talk-about-it

Joker (of Suicide Squad), except he isn’t trying too hard, and he sees through drones in Blade Runner 2049

These characters with augmented abilities can be categorised as cyborgs: people augmented through cybernetic enhancements to have capabilities beyond the human baseline. Cyborgs are a vision of Human 2.0 that is very popular in science fiction, especially in the Cyberpunk genre, and it’s becoming a reality. We are now at the beginning of the transhuman era, where we will all evolve alongside technology as we fuse with it. This began with analog augmentations like spectacles and simple prosthetics like wooden legs. Now, we have various technologies that allow us to surpass our physical limitations, like controlling a prosthetic arm with our mind and the Third Thumb prosthetic.

 

Case Study

Image taken from https://en.wikipedia.org/wiki/Neil_Harbisson

Neil Harbisson is the first person to have an antenna implanted in his head, which allows him to translate colour into sound by converting light into a frequency that he can hear as a note. This was done to combat his achromatopsia, a condition that lets him see only in greys. With this augmentation, he is able to use technology to perceive more than he previously could.

“I don’t feel like I’m using technology, or wearing technology. I feel like I am technology. I don’t think of my antenna as a device – it’s a body part.” He wears it to bed and in the shower.

He identifies colours by first memorising and learning them. After getting used to the tones, he starts to perceive the information subconsciously. Finally, when he started to dream in colour, he felt that he had officially become whole with the device; that was when the brain and the software united, turning him into a cyborg.

In this video, he explains to the audience how his experience of the world has changed. He began to dress based on how an outfit sounds rather than how it looks, and to eat in arrangements of his favourite tunes. He can now also hear colours, as he associates the tones with the colours he perceives.

Harbisson’s visualisation of Beethoven’s Für Elise. Image taken from https://www.theguardian.com/artanddesign/2014/may/06/neil-harbisson-worlds-first-cyborg-artist

He has further upgraded the ‘eyeborg’ so that he can hear infrared and ultraviolet, and it now allows Bluetooth and Wi-Fi connections.

Neil’s ability to process the new senses is possible due to “neuroplasticity”, the brain’s ability to rewire itself as it tries to interpret information from a stimulus. Essentially, all our senses work the same way: they take in an input and convert it into information for the brain to process, which allows us to perceive.

How does the Eyeborg work?

Image taken from https://alchetron.com/Eyeborg

The eyeborg receives light waves through the camera at the front; the implant in his head then converts that information into a sound pitch, which is transmitted to his inner ear through bone conduction.
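As a toy illustration of this conversion (this is not Harbisson’s actual sonochromatic mapping, just the general idea), one way to turn light into an audible pitch is to take the light wave’s frequency and transpose it down by octaves until it falls within hearing range:

// Toy illustration only -- not Harbisson's actual mapping.
// Transpose a light frequency down by octaves until it is audible.
double colourToPitch(double wavelengthNm) {
  double freq = 3.0e8 / (wavelengthNm * 1e-9);  // light frequency in Hz (speed of light / wavelength)
  while (freq > 20000.0) {                      // human hearing tops out around 20 kHz
    freq /= 2.0;                                // halving the frequency drops it one octave
  }
  return freq;                                  // red (~650 nm) and violet (~400 nm) land on different pitches
}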

Harbisson’s Eyeborg has been exhibited at the ArtScience Museum in Singapore before. The sensor was connected to him via Wi-Fi, allowing him to sense the colours detected in front of the sensor at the exhibition.

Image taken from https://amanz.my/2017138797/

Side note, but in one of the articles, I found it funny that Neil mentioned something my Bart-I device deals with:

What next for cyborgism? “We’ll start with really simple things, like having a third ear on the back of our heads. Or we could have a small vibrator with an infrared detector built into our heads to detect if there’s a presence behind us.” Like a car’s reversing sensor? “Yes. Isn’t it strange we have given this sense to a car, but not to ourselves?”

“Precisely” – Me

Analysis / Critique

I think the eyeborg is only the beginning of what human upgrades can be. It is super interesting to be able to perceive something that we normally cannot; it’s like trying to imagine a colour we have never seen before. I had some trouble thinking about and writing this part, as it is hard to imagine what’s next in terms of the senses one could use to perceive the world.

Still, it’s good that there’s proof that humans and technology can unite as one, as long as the human has enough exposure to the technology that it becomes an intuitive, incorporated tool, like an extended organ. There are still many possibilities for this in the future.

However, I think it’s still a very primitive way for us to perceive. Most of these extended senses require the sacrifice, or at least the use, of some other sense. For example, the eyeborg requires the user to have a sense of hearing in order to work, and the Third Thumb requires the use of the big toes. These means of controlling or using the ‘new’ senses are still not perfect, and could be more deeply incorporated into the brain so that they can be directly sensed or directly controlled.

Dreams…

One thing I find really interesting about Neil is how he dreams in his new sense. After thinking about it for a while, this felt very familiar to me: I have had dreams about games where the game mechanics and everything work exactly as they do in the actual game. These mechanics in my dreams are also vivid, really emulating how I would play the game in real life. What I’m trying to say is, by Neil’s definition, we are all already cyborgs!

Ethical?

There is also a question of ethics. Is it ethical to allow people to customise themselves so that they can perceive new senses? Challenges that define human life could be nullified. How will we define humans then? Will we like what the future brings? Will it cause more discrimination or segregation?

I think this is not something we should worry about, as we will eventually adapt to it whether we like it or not; right now we are only speaking from our perspective as non-transhumans. If something is beneficial to us as a whole, it will eventually become widely accepted.

Intrusiveness

Another con is how intrusive the eyeborg is. As it is directly connected to the brain, there are many ways it could cause harm. If his device were hacked, he could be sent a massive amount of information that overloads his head with stimuli. The implant also has the potential to cause infections.

Altered ways

There are so many ways to alter ourselves; it’s rather a question of what is necessary. For Neil, it was to fix his condition. For us, what would be more important? An extension to our sight might only be a nuisance, as there would be more things to take into account, and we would need to learn how to use it.

But objectively, an alternative use of the eyeborg could be 360-degree vision, which would allow us to see behind and beside us. That would be really convenient.

The next step

Perhaps the next step, instead of routing a new sense through an existing one, is to create a whole new sensation. If we were able to link the tech directly into the brain and create a new sensation for the extended sense, that would be cool. It’s just like how our sense of balance is something we don’t really think about sensing; our body just does it. If this new sense could detect incoming weather, we would simply have a ‘feeling’ and ‘know’ that it’s about to rain and bring in our clothes. There are so many ways!!!

Links

https://www.forbes.com/sites/cognitiveworld/2018/10/01/human-2-0-is-coming-faster-than-you-think-will-you-evolve-with-the-times/#200d67ef4284

https://www.theguardian.com/science/2019/jul/13/brain-implant-restores-partial-vision-to-blind-people

https://www.theguardian.com/artanddesign/2014/may/06/neil-harbisson-worlds-first-cyborg-artist

https://interestingengineering.com/the-transhuman-revolution-what-is-it-and-how-we-can-prepare-for-its-arrival

Chromunicator


Colour-to-text converter

How the device works

This device is an analog machine that provides full tactile and visual feedback through the use of dials, a button, a toggle switch, an LED strip, and a computer screen. The device ‘converts’ coloured light into text. Users turn the dials to manually alter the colour’s red, green, and blue values. After selecting a colour they like, they press the red button to ‘save’ the colour; the machine can save up to 10 colours. Users then send the message to the computer by pushing the toggle switch down. The machine enters ‘transmit’ mode and promptly replays the selected colours on its screen. Meanwhile, the ‘translation’ happens in real time, displaying the translated words on the computer screen. Users can press the red button again to repeat the transmission, and flick the switch back up to return to ‘record’ mode.

In a nutshell:

In record mode (toggle switch up)

  • dials: control RGB values on the LED
  • button: record (up to 10) colours

In transmit mode (toggle switch down)

  • dials: do nothing
  • button: repeat transmission

*Note: the labels at the toggle switch and button no longer mean what they say, as the concept changed. Switch ‘up’ should mean “input”, switch ‘down’ should mean “transmit”, and the red button should be “record”.
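For the curious, here is a stripped-down sketch of that record/transmit logic. The pin numbers, pull-up wiring, and variable names are my own placeholders for illustration, not the actual build:

// Stripped-down record/transmit logic; pins and names are placeholders.
const int togglePin = 2, buttonPin = 3;
const int dialR = A0, dialG = A1, dialB = A2;
const int MAXCOLOURS = 10;
int savedR[MAXCOLOURS], savedG[MAXCOLOURS], savedB[MAXCOLOURS];
int saved = 0;  // colours recorded so far

void setup() {
  pinMode(togglePin, INPUT_PULLUP);
  pinMode(buttonPin, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(togglePin) == HIGH) {  // switch up: record mode
    if (digitalRead(buttonPin) == LOW && saved < MAXCOLOURS) {  // button pressed
      savedR[saved] = analogRead(dialR) / 4;  // 0..1023 -> 0..255
      savedG[saved] = analogRead(dialG) / 4;
      savedB[saved] = analogRead(dialB) / 4;
      saved++;
      delay(300);  // crude debounce
    }
  } else {  // switch down: transmit mode
    for (int i = 0; i < saved; i++) {
      Serial.print(savedR[i]); Serial.print(',');
      Serial.print(savedG[i]); Serial.print(',');
      Serial.println(savedB[i]);  // the computer turns these into words
    }
    delay(1000);  // pause between repeats
  }
}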

Presentation

Thanks Fizah for helping me take these SUPER HIGH RES PICS!!! And Shah for playing with the model!

Why this device?

In the context of a trade show, I would use this device as an attractive ‘toy’ to draw people to the stall. The manual control of the LEDs allows visitors to test the strip’s potential while also having fun.

Personally…

I wanted to make this because I was really excited when I had the idea. I also wanted to improve my workflow and the way I treat school projects. I was deeply inspired by the group last year that did the Light Viola, and also by Zi Feng and the way he documents and works on his assignments. After this project, I found that I have a real interest in making physically active and immersive experiences, things people can play around and fiddle with.

Process

I started with simple sketches to illustrate my idea. I had a few different ideas; the one that stuck was the current setup with a different rule: the device remembers an input every 0.3 seconds, and the button sends all the words together in a string.

The sketch developed to this in class:

After the sketch and some consultation with Galina and Jeffrey, I went around finding scrap materials. Before this, I also went on a trip with Shah, Joey, and Elizabeth to look for components, and bought an LED strip (forgot what type it is), 3 knobs, 6 variable resistors, a button, and a toggle switch. With these, I pretty much knew what I wanted to do.

I went on to make the model. I found a large matte sheet of acrylic and decided it was a good material for my casing. I laser-cut it after taking measurements to ensure that every component could be installed. I then bent it using a heat gun and a table (unfortunately there is no documentation of that). After that, I spray-painted the case and the other parts using leftover spray paints from a few semesters back when I was in Product Design.

For the case, I sprayed from the back side so the front stays matte and the paint won’t get scratched. The bottom piece was sprayed black in the end to better fit the aesthetics. The other parts were sprayed in their respective colours.

This is how it looked after installation. YAY!

So after I was satisfied with the cover, I went on to work on the code. It was a long process of editing, adding, and testing.

All my previous files. I saved them separately after each big edit in case I needed to return to a previous version.

The first variant of my code followed my original idea, which works like this (a rough sketch follows the list):

  • switch is on
  • inputs from the dials are recorded every 0.3 s
  • each individual input forms a word
  • the words form a sentence
  • press the button to send the message to the computer to be displayed
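A rough sketch of that 0.3 s sampling rule, using millis() (the pins and names are placeholders):

// Sample the dials every 0.3 s; placeholders for illustration.
unsigned long lastSample = 0;

void loop() {
  if (millis() - lastSample >= 300) {  // 0.3 s have passed
    lastSample = millis();
    int r = analogRead(A0) / 4;  // each 0.3 s sample becomes a 'word'
    int g = analogRead(A1) / 4;
    int b = analogRead(A2) / 4;
    // map (r, g, b) to a word and append it to the sentence...
  }
  // ...and pressing the button sends the whole sentence to the computer
}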

With this variant, I found that it kept repeating words, and that the user had to turn the knobs quickly enough to create an interesting mix of words. It was too restrictive in general, so I changed to the new idea, which allows users to take their time selecting the colours they want and to send the message as and when they wish.

The basis of my final code is:

  • The FastLED library, which runs the LED strip

Here are some examples of how FastLED works (CRGBPalette16, CRGB):
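(The screenshots aren’t reproduced here, but a minimal FastLED sketch using CRGB and CRGBPalette16 looks roughly like this; the strip type, pin, and length are assumptions:)

#include <FastLED.h>

#define NUM_LEDS 30  // strip length is an assumption
#define DATA_PIN 6

CRGB leds[NUM_LEDS];
CRGBPalette16 palette = RainbowColors_p;  // one of FastLED's built-in palettes

void setup() {
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);
}

void loop() {
  leds[0] = CRGB(255, 0, 0);  // set a pixel directly from RGB values
  for (int i = 1; i < NUM_LEDS; i++) {
    leds[i] = ColorFromPalette(palette, i * 8);  // or pull colours from a palette
  }
  FastLED.show();  // push the colours to the strip
}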

  • Timers

Timers helped me a lot in controlling the variables. The code above helps me constrain the variable ‘valuenew’ (which controls what word will be printed) during the ‘transmit’ stage. My timers work like this, for instance:

int timer = 1000;
int t = 10;

// run once per tick (i.e. per pass of loop()):
timer -= t;
if (timer <= 0) {
  timer = 1000;
}

In this example, timer decreases by 10 every tick until it reaches 0, then resets to 1000 and repeats. If I want to control a variable, I can make something happen whenever timer <= 0.

How ‘valuenew’ controls the words printed
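
(The screenshot isn’t reproduced here, but the idea is roughly a lookup table; the word list below is made up for illustration:)

// Illustration only -- the real word list lived in the screenshot above.
const char* words[] = {"sun", "sea", "leaf", "sky", "plum"};
const int NUMWORDS = 5;

void printWord(int valuenew) {
  int index = constrain(valuenew, 0, NUMWORDS - 1);  // keep the index in range
  Serial.println(words[index]);  // 'valuenew' picks which word gets printed
}
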
  • Modes

The screenshot above is an example of a mode. Each variant of ‘memory’ stores a set of instructions, and that set of instructions plays when the mode is changed.

Another way I used modes is as such:

int buttonpress = digitalRead(button);  // digitalRead (not digitalWrite) returns the pin's state

if (buttonpress == HIGH) {
  mode = 1;
}

if (mode == 1) {
  Serial.println("something should happen here");
}

In this example, if the button is pressed, the mode switches to 1, which causes the serial monitor to print “something should happen here”. It’s a very versatile way of coding, although it can become very clunky in complex code.

In Processing

The code in Processing is very simple: it receives input from the serial port and displays it on the screen as text. There was only one problem: if there is nothing to read, the read returns ‘null’, which causes a “NullPointerException” error that crashes the program. What I did to counter this was to run the display code only when the incoming value is not ‘null’.
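A minimal Processing sketch of that guard (the port index and baud rate are assumptions):

import processing.serial.*;

Serial port;
String message = "";

void setup() {
  size(640, 360);
  port = new Serial(this, Serial.list()[0], 9600);  // port 0 at 9600 baud is an assumption
}

void draw() {
  background(0);
  fill(255);
  textSize(32);
  text(message, 20, height / 2);
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) {  // skip empty reads so we never hit a NullPointerException
    message = trim(line);
  }
}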


Physical Model Part 2

I continued with my model while working on the code, as a way of taking a break from constantly pulling my hair out over the code. I used foam to cover up the rest of the form, as it is a versatile material for my oddly shaped case. It also fits the aesthetics. I also used a white corrugated plastic sheet as the screen, as I wanted a diffused look.

I forgot to mention wiring. I soldered the wires onto the components and learnt the best way to solder from Zi Feng. It was amazingly easy after he showed us the magic!

The wiring isn’t as complicated as it looks; it only looks messy because I used the same wire for every component. The casing really helps a lot in keeping things organised.

This video was recorded on my Instagram story last weekend, when the device was pretty much finalised.


Photo Documentation