DoW 3: Metaphor: AudRey

Image taken from https://www.wearablemedia.co/audrey

AudRey is a garment piece that analyses its wearer’s Instagram presence and reflects it in a ‘digital aura’ that surrounds the user in augmented reality. 

The aura (3D AR particles) emitted from the dress symbolises one's digital presence: it uses IoT services to interpret information in the user's Instagram feed in terms of its colours, comments, and likes.

This combination of augmented reality and fashion explores the potential of wearables in a future where the virtual world and reality meet.

Image taken from https://www.theverge.com/2018/4/14/17233430/wearable-media-fashion-tech-nyc-ceres-jumpsuit-interactive
Image taken from https://www.wearablemedia.co/audrey

The augmented reality is coded into the patterns on the garment, which reveal themselves when the garment is scanned with a custom app built using Unity and the Vuforia API. In the app, the patterns appear to leave the garment and orbit around the user. The garment itself is made by heat-transferring vinyl onto neoprene textile, fastened with 3D-printed fasteners made using 3D textile printing technology.
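Out of curiosity, here's a minimal sketch of how feed data could be mapped to aura parameters. Everything in it is an assumption for illustration: the post fields, the mapping rules, and even the language (the real app is built in Unity/C# with Vuforia, and the actual mapping is Wearable Media's own).

```python
# Hypothetical sketch: deriving 'aura' particle parameters from feed data.
# Post fields and mapping rules are made up for illustration; the real
# AudRey app is built in Unity/C# with Vuforia and uses its own mapping.

def aura_parameters(posts: list[dict]) -> dict:
    """Turn a list of recent posts into particle-system parameters."""
    total_likes = sum(p["likes"] for p in posts)
    total_comments = sum(p["comments"] for p in posts)
    # Average each post's dominant colour to tint the aura.
    avg_colour = tuple(
        sum(p["dominant_rgb"][c] for p in posts) // len(posts) for c in range(3)
    )
    return {
        "colour_rgb": avg_colour,                      # aura tint from feed colours
        "particle_count": min(50 + total_likes, 500),  # more likes, denser aura
        "orbit_speed": 1.0 + 0.1 * total_comments,     # comments liven the orbit
    }

posts = [
    {"likes": 120, "comments": 8, "dominant_rgb": (200, 80, 120)},
    {"likes": 45, "comments": 2, "dominant_rgb": (60, 90, 200)},
]
print(aura_parameters(posts))
# {'colour_rgb': (130, 85, 160), 'particle_count': 215, 'orbit_speed': 2.0}
```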

Pros

  • A stylish dress you can wear like any other, with an added ‘twist’
  • Allows one to be aware of their social media presence and wear it out like a badge
  • Opens up a new layer of fashion and self-expression using technology

Cons

  • Inaccessible to many if the effect is restricted to their custom-made app
  • Every garment’s patterns have to be unique for personalisation to work
  • The wearers themselves cannot see the effect
  • Does it work in mirrors?! (Don’t think so? The patterns would be flipped)

Don’t know if pro or con

  • The inclusion of AR via a custom mobile app makes the effect non-intrusive, as nobody but the user (or people with the app) is able to see your data

Alternate uses

This technology is currently being used in museums to bring paintings to life. One example is Kult’s Emoji Land, where a mural by Howie Kim is brought to life using AR.

Gif taken from https://www.howiekim.com/others

Imagine if we were able to do this with the clothes themselves, instead of just having particles flying around us. It also doesn’t need to be particles that escape the garment; the pattern could move, or create the illusion of the dress caving into one’s body. There would be endless possibilities as to what we could do.

The Future

More customisability would do the dress good. If wearers were able to design their own effects, choose what they want to show, or draw on a database of patterns to be revealed, it would be much easier to bring such wearable tech to the market.

If everyone had something like Google Glass, this would be a cool new way of looking at fashion. It would be even cooler if our clothes had no patterns at all and people were still able to see the AR effects by scanning the clothes we are wearing.

I think there’s a need for a default AR program on our phones (or an actual, real Google Glass) as more of such technology emerges. Facebook has already developed an AR program, Spark AR, that allows people to create their own AR filters. The next step is to make viewing AR much easier.

Links

https://www.wearablemedia.co/audrey

https://www.theverge.com/2018/4/14/17233430/wearable-media-fashion-tech-nyc-ceres-jumpsuit-interactive

https://www.howiekim.com/others

Resource dump for Devices

This is so I don’t have 5212841203 tabs open at the same time

http://www.ia.omron.com/data_pdf/cat/ee-sy671_672_ds_e_6_3_csm486.pdf

https://github.com/jgoergen/CamBot

https://www.freetronics.com.au/blogs/news/arduino-eye-tracking#.Xa3NtZMzYWo

https://create.arduino.cc/projecthub/H0meMadeGarbage/eye-motion-tracking-using-infrared-sensor-227467

https://www.pololu.com/product/2458

https://ezbuy.sg/product/51000774099086.html?keywords=Reflectance%20Sensor&baseGpid=51000774099086&pl=2&ezspm=1.20000000.22.0.0

https://ezbuy.sg/product/168986219.html?keywords=Reflectance%20Sensor&baseGpid=168986219&pl=1&ezspm=1.20000000.22.0.0

Human 2.0

What is Human 2.0?

Image taken from https://www.scientificamerican.com/article/human-2-0-tech-upgrades-for-the-nervous-system-cartoon/

Our affinity with technology is ever-growing. As technology gets closer and closer to us, the boundaries between human and machine get blurred. The idea of Human 2.0 is the fusing of technology with the human body as a means to augment ourselves, through prosthesis, wearables, implants, or body alteration, bringing us a spectrum of new senses and improved functions that cannot be experienced without the upgrade.

 

This idea isn’t new, as many Cyberpunk-styled animations, films, and games like Ghost In The Shell, Blade Runner, and Overwatch have already imagined scenarios of human-technology augmentation.

Hacking in “Ghost in the Shell”

Gif taken from https://giphy.com/explore/sombra

Sombra is one of the cybernetically enhanced characters in Overwatch, who has augmented abilities like “hacking”.

Image taken from https://vocal.media/geeks/blade-runner-2049-has-a-villain-problem-and-we-need-to-talk-about-it

Joker (of Suicide Squad), except he isn’t trying too hard and sees through drones, in Blade Runner 2049

These characters with augmented abilities can be categorised as cyborgs: people augmented through cybernetic enhancements to have capabilities beyond the human. Cyborgs are a vision of Human 2.0 that is very popular in science fiction, especially in the Cyberpunk genre, and it’s becoming a reality. We are now at the beginning of the transhuman era, where we will all evolve alongside technology as we fuse with it. This began with analog augmentations like spectacles and simple prosthetics like wooden legs. Now, we have various technologies that allow us to surpass our physical limitations, like controlling a prosthetic arm with our mind, and the Third Thumb prosthetic.

 

Case Study

Image taken from https://en.wikipedia.org/wiki/Neil_Harbisson

Neil Harbisson is the first person to have an antenna implanted in his head, which allows him to translate colour into sound by converting light into a frequency that he is able to hear as a note. This was done to combat his achromatopsia, a condition that only allows him to see in greys. With this augmentation, he is able to use technology to perceive more than he previously could.

“I don’t feel like I’m using technology, or wearing technology. I feel like I am technology. I don’t think of my antenna as a device – it’s a body part.” He wears it to bed and in the shower.

He identified colours by first memorising and learning them. After getting used to the tones, he started to perceive the information subconsciously. Finally, when he began to dream in colour, he felt that he had officially become whole with the device; that was when brain and software united, turning him into a cyborg.

In this video, he explains to the audience how his experience of the world has changed. He began to dress based on how it sounds rather than how it looks, and to eat food arranged to sound like his favourite tunes. He is also now able to hear colours, as he associates the tones with the colours he perceives.

Harbisson’s visualisation of Beethoven’s Für Elise. Image taken from https://www.theguardian.com/artanddesign/2014/may/06/neil-harbisson-worlds-first-cyborg-artist

He has further upgraded his ‘eyeborg’ so that he can hear infrared and ultraviolet, as well as to allow for Bluetooth and wifi connections.

Neil’s ability to process the new senses is possible due to “neuroplasticity”, the brain’s ability to rewire itself as it tries to interpret information from a stimulus. Essentially, our senses all work the same way: they take in an input and convert it into information for the brain to process, which allows us to perceive.

How does the Eyeborg work?

Image taken from https://alchetron.com/Eyeborg

The eyeborg receives light waves through the camera at the front; the implant within his head then converts this information into a sound pitch, which is sent to his inner ear through bone conduction.
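To make the light-to-pitch idea concrete, here's a toy calculation that transposes a light wave's frequency down into the audible range. The 40-octave transposition is my own assumption for illustration; Harbisson's actual sonochromatic scale is his own calibration.

```python
# Toy illustration of colour-to-pitch conversion. The 40-octave
# transposition is an assumption; Harbisson's real sonochromatic
# scale is a custom calibration, not this formula.

SPEED_OF_LIGHT = 3.0e8  # metres per second

def wavelength_to_pitch(wavelength_nm: float, octaves_down: int = 40) -> float:
    """Map a light wavelength (nm) to an audible frequency (Hz) by
    halving the light's frequency once per octave of transposition."""
    light_freq_hz = SPEED_OF_LIGHT / (wavelength_nm * 1e-9)
    return light_freq_hz / (2 ** octaves_down)

print(round(wavelength_to_pitch(700)))  # red    -> ~390 Hz (lower note)
print(round(wavelength_to_pitch(400)))  # violet -> ~682 Hz (higher note)
```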

Harbisson’s eyeborg has been exhibited at the ArtScience Museum in Singapore before. The sensor was connected via wifi, allowing him to sense the colours detected in front of the sensor at the exhibition.

Image taken from https://amanz.my/2017138797/

Side note, but in one of the articles, I found it funny that Neil mentioned something that my Bart-I device deals with:

What next for cyborgism? “We’ll start with really simple things, like having a third ear on the back of our heads. Or we could have a small vibrator with an infrared detector built into our heads to detect if there’s a presence behind us.” Like a car’s reversing sensor? “Yes. Isn’t it strange we have given this sense to a car, but not to ourselves?”

“Precisely” – Me

Analysis / Critique

I think the eyeborg is only the beginning of what human upgrades can be. It is super interesting to be able to perceive something that we normally cannot; it’s like trying to imagine a colour we have never seen before. I’m having some trouble thinking and writing about this, as it is hard to imagine what’s next in terms of the senses one could use to perceive the world.

Still, it’s good that there’s proof that humans and technology can unite as one, as long as the human has enough exposure to the technology that it becomes an intuitive, incorporated tool, like an extended organ. There are still many possibilities for this in the future.

However, I think it’s still a very primitive way for us to perceive. Most of these extended senses require the sacrifice or use of some other sense. For example, the eyeborg requires the user to have a sense of hearing in order to be used, and the Third Thumb requires the use of the big toes. These means of controlling or using the ‘new’ senses are still not perfect, and could be more deeply incorporated into the brain so that they can be directly sensed or directly controlled.

Dreams…

One thing I find really interesting about Neil is how he dreams in his new sense. After thinking about it for a while, this felt very familiar to me, as I have had dreams about games where the game mechanics and everything work as they do in the actual game. These mechanics in my dreams are also vivid, really emulating how I would play the game in real life. What I’m trying to say is: by Neil’s definition, we are all already cyborgs!

Ethical?

There is also a question of ethics. Is it ethical to allow people to customise themselves so that they can gain new senses? Challenges that define human life could be nullified. How will we define humans then? Will we like what the future brings? Will it cause more discrimination or segregation?

I think this is not something we should worry about, as we will eventually adapt and like it whether or not we expect to; right now we are speaking from our perspective as non-transhumans. If something is beneficial to us as a whole, it will eventually become widely accepted.

Intrusiveness

Another con is how intrusive the eyeborg is. As it is directly connected to the brain, there are many ways the eyeborg can cause harm. If his device were hacked, he could be sent massive amounts of information that overload his head with stimuli. It also has the potential to cause infections.

Altered ways

There are so many ways. It’s rather about what is necessary. For Neil, it’s to fix his condition. For us, what would be more important? An extension to our sight may only be a nuisance, as there would be more things to take into account, and we would need to learn how to use it.

But objectively, an alternate way to use the eyeborg could be 360-degree vision, allowing us to see behind and beside us. That would be really convenient.

The next step

Perhaps instead of routing a new sense through an existing one, the next step is to create a whole new sensation. If we were able to link the tech directly into the brain and create a new sensation for the extended sense, that would be cool. It’s just like how our sense of balance is something we don’t really think about sensing; our body just does it. If this new sense were able to detect incoming weather, we would simply have a ‘feeling’ and ‘know’ that it’s about to rain and bring in our clothes. There are so many ways!!!

Links

https://www.forbes.com/sites/cognitiveworld/2018/10/01/human-2-0-is-coming-faster-than-you-think-will-you-evolve-with-the-times/#200d67ef4284

https://www.theguardian.com/science/2019/jul/13/brain-implant-restores-partial-vision-to-blind-people

https://www.theguardian.com/artanddesign/2014/may/06/neil-harbisson-worlds-first-cyborg-artist

https://interestingengineering.com/the-transhuman-revolution-what-is-it-and-how-we-can-prepare-for-its-arrival

Device of the Week 2 : Google Home

What is IoT?

Internet of Things (IoT) is “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.”

Communication between machines (Machine-to-Machine, AKA M2M) happens over a network without human interaction: information is collected, then stored and managed through a cloud network. An IoT device takes the concept of M2M, but connects people, systems, and other applications to collect and share data in one big web.
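As a concrete picture of M2M, here is a minimal sketch using MQTT, a messaging protocol commonly used in IoT, via Python's paho-mqtt library. The broker address and topic names are hypothetical.

```python
# Minimal M2M sketch over MQTT with the paho-mqtt library.
# Broker address and topics are hypothetical placeholders.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"  # hypothetical broker

client = mqtt.Client()
client.connect(BROKER, 1883)

# Machine A: a sensor node publishes a reading, with no human in the loop.
client.publish("home/livingroom/temperature", "26.5")

# Machine B: another device subscribes and reacts to the data.
def on_message(client, userdata, msg):
    print(msg.topic, "->", msg.payload.decode())

client.on_message = on_message
client.subscribe("home/livingroom/#")
client.loop_forever()
```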

Image taken from https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT

Below is a TED-ed video about IoT.

Some examples of IoT devices are:

  • Smart door locks (remote control using a phone, access to usage or status information for user monitoring, proximity unlocking based on user information and settings)
  • Home systems (control and monitor lights, air-conditioning, water, doors, etc.; personalised automation based on user information and settings)
  • Wearable devices (track and send user data, and alert under certain circumstances)

More here!:

18 Most Popular IoT Devices in 2019 (Only Noteworthy IoT Products)

Human-technology integration

I think IoT devices are usually very personalised, and home use is one of the most common applications. They are designed to be interacted with almost constantly, integrated into places or objects that we always use or are always exposed to. Our reliance on these devices can be more beneficial than harmful, as they fill in functions in our lives that improve our wellbeing and productivity. I think these devices will eventually become common to human life as technology slowly gets integrated into our bodies.

How can we design an IoT device that does more than just personalising our lives? This is one question I ask myself as I continue to think of an idea for my project… :’)

 

Google Home

Screenshot from https://store.google.com/product/google_home

For my device of the week, I’ll be looking at Google Home. Google Home is a smart speaker with the Google Assistant installed. It is able to connect itself with home appliances like water sprinklers, garage doors, and lights, as well as smart TVs and mobile devices. The Google Home also feels like a helpful assistant in the house, as the built-in Google Assistant has a good mic and speaker system that allows it to hear commands and give replies around the house. The Assistant is able to answer questions relating to one’s personal life, like “what is my schedule for today?”. It is also able to answer general questions like “who is the president of the USA?”, which it does by connecting to the Google search engine. Beyond these functions, the Assistant can help users do a myriad of tasks, like “play PewDiePie on YouTube on my TV” or “set an alarm for 3:30PM”, or simply play music. The system also has games that can be played with a group of friends, and it’s actually quite entertaining!

Screen recording from https://store.google.com/product/google_home

Some screenshots from the website for us to see its functions in a concise way:

Google Home can control many types of audio on many devices
Entertainment too!
Many other ways to customise and control your home with Google Home
Extra things that the Home provides like games!

Interface

In terms of interface, the Google Home is a very simple device. It has only one physical button; the rest of the controls are on the touch surface on top of the device.

Here are the interactive elements of the device:

Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home

These are the more ‘user-interface’ kind of functions, where the device is used like a household product that users have to learn to operate. Beyond these, the Google Home’s Google Assistant responds to commands like “Okay Google” and “Hey Google”. These functions are more humanistic and intuitive, serving more like an assistant that talks to you through a speaker. This makes the Google Home a very unique product, as it is both a product and a service.

So, we’ve gone through its uses and its interface. What is the IoT in the Google Home?

Image taken from https://medium.com/google-developers/iot-google-assistant-f0908f354681

Google Home involves a few IoT systems that work together to form an integrated whole. The chart above shows how the system basically works: Google communicates through IoT with a cloud service, which communicates back to the appliance you wish to control. This is done using “device types” and “device traits”.

“Google’s NLU engine uses types, traits, device names, and other information from the Home Graph to provide the correct intent for your IoT cloud service.”

Image taken from https://medium.com/google-developers/iot-google-assistant-f0908f354681

Device types are names given to every device to distinguish between the different devices connected to the cloud. In the table above, you can see that the light is named “action.devices.types.LIGHT”.

Device traits are commands that a device type is able to obey, allowing for user control. You can’t command your light to explode, but you can certainly change its brightness using “action.devices.traits.Brightness”.

After that, there is the Home Graph. It is Google’s database that keeps all your devices’ contextual information. It stores data about each device, like its name, its location, and its functions. This allows Google to understand which command is meant for which device. For example, you can say “Turn off the kitchen lights”, and Google is able to map out which lights belong in the kitchen and switch them off.
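To picture what the cloud service reports and the Home Graph stores, here's a hedged sketch of a single device entry using the type and trait names mentioned above; the ID and other field values are made up.

```python
# Sketch of one device entry as a cloud service might describe it to
# Google. Type/trait names are from the article; other values are made up.
kitchen_light = {
    "id": "light-123",                        # hypothetical device ID
    "type": "action.devices.types.LIGHT",     # the device type
    "traits": [
        "action.devices.traits.OnOff",        # can be switched on and off
        "action.devices.traits.Brightness",   # brightness can be adjusted
    ],
    "name": {"name": "Kitchen light"},
    "roomHint": "Kitchen",  # lets the Home Graph resolve "the kitchen lights"
}
```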

Below are some examples of those IoT processes; a simplified routing sketch in code follows the list.

  • User switching on and controlling a smart appliance (eg. the living room air-conditioner): user speaks a command into the mic > command is translated into a device type and device trait in the cloud > the Home Graph finds the path to the appliance > the appliance switches on > the appliance sends data back to the mobile device via the same network > the device can now control the air-conditioner > LOOP
  • User switching on entertainment (eg. Netflix on the TV): user speaks a command into the mic > device type and trait are identified > cloud > Home Graph > the appliance receives the data and switches on the entertainment
  • User using the Google Assistant (eg. asking for information from the internet): user speaks a command into the mic > the Assistant searches the web > the Assistant replies with the most relevant and reliable answer (not sure if this process uses the cloud?)
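Here's the promised toy sketch of the routing step: resolving a room-plus-trait command to the right devices using stored context, loosely mimicking the Home Graph idea. All names and data are assumptions.

```python
# Toy routing step, loosely mimicking the Home Graph: find which devices
# a spoken command refers to. All names and data here are assumptions.

home_graph = {
    "kitchen light": {"room": "kitchen", "traits": ["OnOff", "Brightness"]},
    "living room aircon": {"room": "living room",
                           "traits": ["OnOff", "TemperatureSetting"]},
}

def route_command(room: str, trait: str) -> list[str]:
    """Return every device in `room` that supports `trait`."""
    return [
        name for name, info in home_graph.items()
        if info["room"] == room and trait in info["traits"]
    ]

# "Turn off the kitchen lights" -> kitchen devices supporting OnOff
print(route_command("kitchen", "OnOff"))  # ['kitchen light']
```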

Pros:

  • Very VERY convenient
  • Intuitive and well-integrated in home and personal settings
  • Helpful in situations where one’s hand is not free, or when one needs to get things done on the go
  • Able to fit well in healthcare as its monitoring system can automate tasks in emergencies without the person even knowing first (eg. heart attack)
  • The future of home where everything is controllable without actually touching anything

Cons:

  • Needs time to set up
  • Users may have difficulty learning to use a device without buttons
  • Privacy issues regarding a device that collects personal data
  • Not very useful if one can just use their hands to operate the different devices
  • Data may be fed into an algorithm which teaches the device to learn your behaviour (privacy)
  • The AI may take over the world

An Alternate Use

Most IoT devices are home-based and personalised, which works really well, since personal information is what makes an IoT device valuable: it can cater itself to the personal needs of its users. What can we do to integrate IoT into non-home settings? Perhaps some kind of service AI that helps people navigate around a mall, like a directory. It could also help users book slots or tickets for certain shops. This is still very personalised, although I have transformed it into a public service.

Another way to use the device is perhaps in the military, where it could give real-time feedback from the sensors inside, say, an F-16 jet. As fighter jets are always under high stress, a lot of issues happen to them. It currently takes a team to download data from within the jet every day (or after every fault) to examine its flight data and locate faults. The use of IoT would make everything much easier: as soon as the plane touches down, it connects to a network that allows this new Google Home to communicate with the IoT devices inside and check the fighter jet’s internal systems.

Conclusion

Image taken from https://www.dropbox.com/s/4ry5ugkhxik1v9u/Screenshot%202019-10-06%2003.03.51.png?dl=0

The Google Home is not just an IoT device that integrates the technology very nicely to do a myriad of things that are super beneficial to us all, but also an AI that humanises the experience of using such a device. This gives a user-centric experience that allows it to perform its functions seamlessly and intuitively. Overall, Google Home is a great example of an interactive device that fulfils everything that should exist in an interactive device.

 

Device of the Week 1 – Health

Automated External Defibrillator

Screenshot taken from https://www.dropbox.com/s/qdm6i0v40w92ijd/Screenshot%202019-09-08%2023.57.41.png?dl=0

According to this article, an AED is:

a lightweight, portable device that delivers an electric shock through the chest to the heart. The shock can potentially stop an irregular heart beat (arrhythmia) and allow a normal rhythm to resume following sudden cardiac arrest (SCA). SCA occurs when the heart malfunctions and stops beating unexpectedly. If not treated within minutes, it quickly leads to death.

How does an AED work? (Here’s a video, but I will also explain through text.)

First, responders have to open the case to retrieve the AED; doing so alerts a nearby medical response team to the location of the AED. The responder has to bring the AED to the victim, open it up, and switch it on. The AED will then guide the responder through what to do, and the responder just has to follow the instructions step by step. The AED comes with adhesive electrodes which, when pasted on the victim, detect the victim’s heart rhythm and help the device assess whether defibrillation is needed. If a shock is required, the AED will charge up and notify the responder to press the button to deliver the shock. The shock momentarily stuns the heart, causing it to stop for a brief moment. This helps to ‘restart‘ the heart so it will begin to beat effectively again. After which, CPR should be performed to maintain circulation until the medics arrive.
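To summarise that guided flow, here's a toy state-machine sketch of the prompts an AED might voice during one analysis cycle. It is purely conceptual and not how any real AED firmware is implemented.

```python
# Conceptual sketch of one AED analysis cycle as a prompt sequence.
# Purely illustrative; real AED firmware is nothing like this.

def aed_cycle(rhythm_is_shockable: bool) -> list[str]:
    """Return the voice prompts for one analysis cycle."""
    prompts = [
        "Attach pads to the patient's bare chest.",
        "Analysing heart rhythm. Do not touch the patient.",
    ]
    if rhythm_is_shockable:
        prompts += [
            "Shock advised. Charging.",
            "Stand clear. Press the flashing button to deliver the shock.",
            "Shock delivered. Begin CPR.",
        ]
    else:
        prompts.append("No shock advised. Begin CPR.")
    return prompts

for prompt in aed_cycle(rhythm_is_shockable=True):
    print(prompt)
```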

The AED is a really frightening yet fascinating device. It is meant to save a life, and yet it is made for public use, for when emergencies arise. The amount of responsibility this device gives its users is very, very great, despite how easy it is to access and use.

Device Breakdown

At the most basic level, the AED must be able to perform its main objective, which is to defibrillate, and as such requires hardware that delivers the required electricity and a pulse detector for feedback. In order to perform its function as a public life-saving device, it needs to be portable, and as user-friendly and intuitive as possible, so that even the most unskilled person can attempt to use it in an emergency. It must also deploy quickly and teach its user as quickly as possible: according to this source, 7-10% of a person’s chance of surviving cardiac arrest is lost with every minute of delay.
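To see what that statistic implies, here's a quick back-of-envelope calculation. I treat the figure as a 10% relative drop per minute and assume a 90% baseline; both numbers are assumptions within the source's range.

```python
# Back-of-envelope: survival chance vs. minutes of delay, assuming a 10%
# relative drop per minute and a hypothetical 90% baseline.
base_survival = 0.90

for minutes in range(0, 11, 2):
    survival = base_survival * (1 - 0.10) ** minutes
    print(f"{minutes:2d} min delay: {survival:.0%}")

# After 10 minutes the chance is ~31%, roughly a third of the baseline,
# which is why quick deployment matters so much.
```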

We can analyse how these points fare in the AEDs with reference to the image below:

Image taken from https://www.aed.com/
  1. The AED has 2 obvious buttons that stand out first: the green being the power switch and the red delivering the shock. These buttons are easy to spot for most people who operate modern devices, and as such allow people to recognise and operate them efficiently. The buttons are also very rounded, which allows for a quick and easy press.
  2. The image on the AED clearly shows the user where to place the shock pads. The pads themselves also have images on them showing how they are used, which allows responders to apply them quickly.
  3. There seems to be a “PULL” label that tells one to pull the case open, although the directional arrows are not very strategically placed. If not for the cut in the glass case, I would have tried pulling the part surrounding the label itself; that is not good interface design. This compartment probably holds the shock pads, which have been taken out for display in this image.
  4. The overall form of the AED is very compact and hand-held friendly.
  5. Although not seen here, the AED probably has voice instructions that people can follow, which I think helps a lot, as we want to be told what to do in a stressful situation.
  6. However, it could be better, as there are no text instructions, and some responders may not be able to hear clearly or may be deaf. Text can be simplified to communicate easily during high-stress situations, though it may affect the compact form of the AED.

Overall, the device has a good form and an easy-to-understand interface that allows the AED to perform all of its functions as effectively as possible. Even though the process puts responders under high stress and is fairly complicated, the interface allows people to use it quickly and correctly, saving many lives. So I would say that it has successfully and concisely packaged the entire defibrillator into a portable public tool, transferring the medic’s responsibility to the public, where a member of the public can help save a life almost as easily as he/she operates a toilet bowl. The article below is an example of how an AED can change the lives of many.

Civilians Help Man With Heart Attack In HDB Flat; Prove AEDs Can Help Save Lives

If anything were to be improved on, it would be to add text instructions for the hearing-impaired. Perhaps another point of improvement is the inclusion of a female image alongside the male one, or an androgynous figure, so people will be able to respond to female victims confidently as well. See this link on why we should not always use a male body as the reference when designing.

My Suggestion

I do not wish to disrespect the invention of the defibrillator, but I like the idea of using a defibrillator to do the opposite of what it is designed to do. The device is designed to prevent death, so what if it caused death instead? Not to humans, of course, but to mosquitoes / flies / annoying bugs.

Imagine a portable bug-killing device where you go around zapping them! Would anyone be afraid of a cockroach then? Could it be turned into a game? It would definitely not be ethical, as it could be misused GREATLY, so it’s probably not a good idea. Still, it would be cool to own a smaller, non-lethal-voltage ‘stunner’ that can stun or kill insects.

I really can’t think of anything else that a defibrillator can be modified into…