Third-I: Final Post

Updates and process

After the last update, I bought the materials that I needed. A few days later, I got the opportunity to work with Galina to create the headpiece, which I am really thankful for, as it looked good and I wouldn’t have been able to make it without her. Thank you!

Quick prototyping with cloth

Moving to Neoprene
Final result!


Adding eye tracking:

The eye tracking works well, but it is unstable and needs to be constantly pressed down by my hand.

After that, I left the project as it was, as I had to work on other projects. The lack of time really killed me here, as I had overpromised on the device. I know the features can be done; they just cannot be done within the timeframe given.

There weren’t any new updates to the device as, conceptually, it is already perfect to me.

The Code

There are three programs:

1. Eye: The eye is an ESP8266 wifi module connected to two 3.6V lithium-ion batteries. This gives a total of 7.2V to power the servo motor, the two IR sensors, and the board. I think the voltage was still too low (or there were other complications), as it wasn’t working very well.



Auto detection mode: the eye scans for the nearest object within its range and tracks it

Auto tracking mode basically says: if one IR sensor detects a lower distance, the servo will turn in that direction.

Sweep mode: motor goes left, then right, and repeat

Sweep mode basically loops the servo left and right.

Eye tracking mode: the wearer’s eye controls the movement of the eye.

Eye tracking mode (code incomplete) basically reacts to the value sent when the participant looks left or right.
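To make the first two modes concrete, here is a stripped-down sketch of the decision logic, written as plain C++ so it can be tested off the board. The function names, step size, and direction conventions are my own illustration, not the actual sketch from the project.

```cpp
#include <algorithm>

// Auto tracking: compare the two IR distances and nudge the servo towards
// whichever sensor reports the nearer object. Returns the new angle,
// clamped to the servo's 0-180 degree range.
int autoTrackStep(int angle, int leftDistCm, int rightDistCm, int stepDeg = 5) {
    if (leftDistCm < rightDistCm)      angle -= stepDeg; // object nearer on the left
    else if (rightDistCm < leftDistCm) angle += stepDeg; // object nearer on the right
    return std::min(180, std::max(0, angle));            // clamp to servo limits
}

// Sweep mode: step towards the current end of travel and flip direction at
// the limits, so the eye pans left and right forever.
int sweepStep(int angle, bool& movingRight, int stepDeg = 5) {
    angle += movingRight ? stepDeg : -stepDeg;
    if (angle >= 180) { angle = 180; movingRight = false; }
    if (angle <= 0)   { angle = 0;   movingRight = true;  }
    return angle;
}
```

On the actual hardware, each loop iteration would read the two IR sensors, call one of these, and write the result to the servo.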

2. Head (wifi): The wifi module on the head receives information from the eye and sends it to the main Arduino via Wire (I2C communication).

These lines of code basically send the state data to the eye via wifi (this does not work yet due to lack of time).

This is for receiving the info from the eye, which is transmitted to the main Arduino via Wire.

3. The head contains the eye tracker, vibration modules, and buzzer. The code is simple: if it receives a certain value, it buzzes a specific vibration module and rings a specific tune. Unfortunately, for reasons I couldn’t figure out, all the vibration modules run at the same time, and I had no time to troubleshoot.
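The "certain value → specific module and tune" mapping can be sketched as a small lookup. The pin numbers, state values, and tune IDs below are placeholders I made up for illustration; the point is that the lookup returns exactly one target, so only the matched module should ever be driven.

```cpp
// Map a received state value to exactly one vibration module and one tune.
// Pin numbers and tune IDs are hypothetical, not the project's real wiring.
struct Action { int vibrationPin; int tuneId; };

// Returns the action for a state value, or {-1, -1} when the value is
// unknown, so nothing is driven for unexpected input.
Action actionForState(int state) {
    switch (state) {
        case 1:  return {3, 1};   // e.g. object on the left  -> left module, tune 1
        case 2:  return {5, 2};   // e.g. object on the right -> right module, tune 2
        case 3:  return {6, 3};   // e.g. object centred      -> centre module, tune 3
        default: return {-1, -1}; // unknown value: drive nothing
    }
}
```

One common cause of "all modules fire at once" is driving every output pin each loop instead of only the matched one, so structuring the code around a single returned target like this is one way to narrow the bug down.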

Final Device

The device, as it stands now, is only a prototype. It is sad that I couldn’t get it to work on time. Still, I am proud that I got the eye working, which is really cool as it really does work wirelessly.

Further Improvements

  1. I will continue working on this project over the holidays to finish it, so I can be proud of owning this device and have it in my portfolio.
  2. I would 3D print the eye part so it looks more polished and I can hide all the wires.
  3. Fix the eye, as it keeps turning left for no reason (this happened even at the beginning stage of testing).
  4. Fix the vibration and include the buzzer, as I haven’t really tried using it.
  5. Try getting the eye tracking up and running, as that should honestly be the main part of this device, next to the movable third eye.
  6. Hide all the wires and make this thing look cool.
  7. If the auto eye movement still doesn’t work, I’ll just stick to eye tracking, as it seems to be the one that makes the most sense for this concept for now.

Lessons and Reflections

I really over-promised everything, and that’s really bad. I’m sorry to disappoint everyone with my failure. The important lesson is to be less optimistic when it comes to coding. Much of my ambition comes from the optimism that things will all go according to plan. Mostly, they don’t. Also, the worst part of coding is making everything run together smoothly. When there are a lot of parts, it becomes very complex and hard to handle.

Despite this, I really learnt a lot. I learnt many aspects of coding that made my project work, and I will try to bring this knowledge forward next time. After this, I’m keen to learn other ways to code that aren’t as clunky as what I did in this project.

I also learnt that we are designers and not engineers, so my project should focus more on the experience than the tech. I was so focused on the tech and on trying to combine everything that I was overwhelmed. I should really take things one at a time.


So I simplified everything, and everything is connected by wires now. The flaw is that there is now an ugly hose connecting everything together. Still, I’m proud of this!!! This is still just a prototype, and I’m sure there’s more to improve in this project. I hope I can continue it somehow through other modules or FYP, as I’m thinking of doing something with a similar theme!!!


Jeff taking a look!

Below are videos of it working:

Auto tracking:

Eye tracking:

DoW 3: Metaphor: AudRey

Image taken from

AudRey is a garment piece that analyses its wearer’s Instagram presence and reflects it in a ‘digital aura’ that surrounds the user in augmented reality. 

The aura (3D AR particles) emitted from the dress symbolises one’s digital presence: it uses IoT services to interpret the colours, comments, and likes in the user’s Instagram feed.

This combination of augmented reality and fashion explores the potential of wearables in a future where the virtual world and reality meet.

Image taken from
Image taken from

The augmented reality is coded within the patterns on the garment, which reveal themselves after the garment is scanned with a custom app made using Unity and the Vuforia API. In the app, the patterns appear to leave the garment and orbit around the user. The garment is made by heat-transferring vinyl onto neoprene textile, fastened by 3D-printed fasteners made with 3D textile printing technology.


Pros

  • Cool dress that you can wear to dress well, with an added ‘twist’
  • Allows one to be aware of their social media presence and wear it out like a badge
  • Opens up a new layer of fashion and self-expression using technology


Cons

  • Inaccessible to many if the effect is restricted to the custom-made app
  • Patterns all have to be unique for personalisation to work
  • Users themselves cannot see the effect
  • Does it work in mirrors?! (Don’t think so? The patterns will be flipped)

Don’t know if pro or con

  • The inclusion of AR via a custom mobile app makes the effect non-intrusive, as nobody but the user (or people with the app) is able to see your data

Alternate uses

This technology is currently being used in museums to bring paintings to life. An example is Kult’s Emoji Land, where a mural is brought to life using AR by Howie Kim.

Gif taken from

Imagine if we could do this with clothes. It also doesn’t need to be particles escaping the garment; the patterns could move, or create the illusion of the dress caving into one’s body. There are endless possibilities as to what we can do.

The Future

More customisability would be good for the dress. If one could design their own effects, choose what they want to show, or draw from a database of patterns to reveal, that would be really useful for bringing such wearable tech to the market.

If everyone had something like Google Glass, this would be a cool new way of looking at fashion. It would be even cooler if our clothes had no patterns at all and people could still see the AR effects by scanning the clothes we are wearing.

I think there’s a need for a default AR program on our phones (or an actual, real Google Glass) as more of such technology emerges. Facebook has already developed an AR platform, Spark AR, that allows people to create their own AR filters. The next step is to make viewing AR much easier.


Resource dump for Devices

This is so I don’t have 5212841203 tabs open at the same time

Pitch Proposal : THIRD-I





The Interaction

  • Tracking mode
    • IR sensors will detect differences in distance
    • The motor will turn to track the object, prioritising whatever is closest in the field of view
    • The motor angle and IR distance info will be sent wirelessly to the cowl
    • This info will be received by the cowl, which vibrates the corresponding vibration motor to indicate the sensor’s location
    • * A pitch will be heard on either side of the ear to indicate proximity
  • * Sweep Mode
    • IR sensors will scan the surroundings like a radar
    • The motor will turn 180 degrees to and fro
    • The motor angle and IR distance info will be sent wirelessly to the cowl
    • This info will be received by the cowl, which vibrates the corresponding vibration motor to indicate the sensor’s location
    • * A pitch will be heard on either side of the ear to indicate proximity
  • Eye movement Mode
    • The motor will turn in the direction of the eye, stopping when the eye is looking forward
    • The IR sensor detects proximity
    • The motor angle and IR distance info will be sent wirelessly to the cowl
    • This info will be received by the cowl, which vibrates the corresponding vibration motor to indicate the sensor’s location
    • * A pitch will be heard on either side of the ear to indicate proximity

Communication flow (1 cycle)

  • The cowl’s Arduino sends mode info to its ESP8266
  • The cowl’s ESP8266 sends the mode info to the eye’s ESP8266
  • The eye’s ESP8266 subscribes to the mode info
  • The ESP8266 sends the info to its slave LilyPad, which acts accordingly
  • The LilyPad reads the motor and IR distance info and sends it to the eye’s ESP8266
  • The eye’s ESP8266 publishes the motor and IR distance info wirelessly
  • The cowl’s ESP8266 reads the info and sends it to its slave Arduino
  • The Arduino reacts with its vibration motor and pitch accordingly
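The cycle above moves two payloads: a mode value downstream, and a (motor angle, IR distance) pair upstream. The actual encoding used in the project isn't shown, so here is just one plausible scheme for packing the telemetry into two bytes for the Wire/wifi hops:

```cpp
#include <cstdint>

// Pack the eye's telemetry (servo angle 0-180 degrees, IR distance 0-255 cm)
// into one 16-bit value for the trip eye ESP8266 -> cowl ESP8266 -> Arduino.
uint16_t packTelemetry(uint8_t angleDeg, uint8_t distanceCm) {
    return static_cast<uint16_t>(angleDeg) << 8 | distanceCm; // high byte: angle
}

// Unpack on the receiving (cowl) side.
uint8_t telemetryAngle(uint16_t packed)    { return packed >> 8;   }
uint8_t telemetryDistance(uint16_t packed) { return packed & 0xFF; }
```

Keeping the payload to a couple of bytes makes it easy to forward unchanged over both I2C and the wireless link.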

What’s next?

  1. Receive the ezbuy stuff so I can continue
  2. Settle ESP8266 communication
  3. Settle master-slave communication
  4. Get the necessary materials for a new, bigger eye
  5. BUILD THEM! And simplify the difficult parts

Human 2.0

What is Human 2.0?

Image taken from

Our affinity with technology is ever-growing. As technology gets closer and closer to us, the boundaries between human and machine get blurred. The idea of Human 2.0 is the fusing of technology with the human body as a means to augment ourselves, through prosthesis, wearables, implants, or body alteration, bringing us a spectrum of new senses and improved functions that cannot be experienced without the upgrade.


This idea isn’t new, as many Cyberpunk-styled animations, films, and games like Ghost In The Shell, Blade Runner, and Overwatch have already imagined scenarios of human-technology augmentation.

Hacking in “Ghost in the Shell”

Gif taken from

Sombra is one of the cybernetically enhanced characters in Overwatch that has augmented abilities like “hacking”

Image taken from

Joker (of Suicide Squad), except he isn’t trying too hard, and he sees through a drone in Blade Runner 2049

These characters with augmented abilities can be categorised as cyborgs: people augmented through cybernetic enhancements to have capabilities beyond the human. Cyborgs are a vision of Human 2.0 that is very popular in science fiction, especially in the Cyberpunk genre, and it’s becoming a reality. We are now at the beginning of the transhuman era, where we will all evolve alongside technology as we fuse with it. This began with analog augmentations like spectacles and simple prosthetics like wooden legs. Now, we have various technologies that allow us to surpass our physical limitations, like controlling a prosthetic arm with our mind, and the Third Thumb prosthetic.


Case Study

Image taken from

Neil Harbisson is the first person to have an antenna implanted in his head, which allows him to translate colour into sound by converting light into a frequency that he hears as a note. This was done to combat his achromatopsia, a condition that lets him see only in greys. With this augmentation, he is able to use technology to perceive what he previously could not.

“I don’t feel like I’m using technology, or wearing technology. I feel like I am technology. I don’t think of my antenna as a device – it’s a body part.” He wears it to bed and in the shower.

He identifies colours by first memorising and learning them. After getting used to the tones, he starts to subconsciously perceive the information. Finally, when he started to dream in colours, he felt that he had officially become whole with the device; that is when the brain and software united, turning him into a cyborg.

In this video, he explains to the audience how his experience of the world has changed. He began to dress in terms of how it sounds rather than how it looks, and to eat in arrangements of his favourite tunes. He is also now able to hear colours, as he associates the tones with the colours he perceives.

Harbisson’s visualisation of Beethoven’s Für Elise. Image taken from

He has since upgraded it so that he can hear infrared and ultraviolet, and added Bluetooth and wifi connectivity to his ‘eyeborg’.

Neil’s ability to process the new senses is possible thanks to “neuroplasticity”, the brain’s ability to rewire itself as it tries to interpret information from a stimulus. Essentially, all our senses work the same way: they take in an input and convert it into information for the brain to process, which allows us to perceive.

How does the Eyeborg work?

Image taken from
Image taken from

The eyeborg receives light waves through the camera in front; the implant in his head then converts this information into a sound pitch, which is sent to his eardrums through bone conduction.
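The core trick is turning an inaudibly high light frequency into an audible pitch. Harbisson's actual mapping is his own, but the idea of transposing a frequency down by whole octaves (repeated halving preserves the "note") can be sketched like this:

```cpp
// Illustrative only: transpose a light frequency (hundreds of THz) down by
// whole octaves, i.e. repeated halving, until it lands in the audible range.
// Halving preserves the pitch class, which is the idea behind mapping
// colour to a musical note. Not Harbisson's actual algorithm.
double lightToAudibleHz(double lightHz, double maxAudibleHz = 20000.0) {
    double f = lightHz;
    while (f > maxAudibleHz) f /= 2.0; // drop one octave at a time
    return f;
}
```

For example, green light at roughly 5.6e14 Hz comes out somewhere in the top audible octave, and different colours land on different pitches within that octave.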

Harbisson’s eyeborg has been exhibited at Singapore’s ArtScience Museum before. The sensor was connected via wifi, allowing him to sense the colours detected in front of the sensor at the exhibition.

Image taken from

Side note, but in one of the articles, I found it funny that Neil mentioned something that my Bart-I device deals with:

What next for cyborgism? “We’ll start with really simple things, like having a third ear on the back of our heads. Or we could have a small vibrator with an infrared detector built into our heads to detect if there’s a presence behind us.” Like a car’s reversing sensor? “Yes. Isn’t it strange we have given this sense to a car, but not to ourselves?”

“Precisely” – Me

Analysis / Critique

I think the eyeborg is only the beginning of what human upgrades can be. It is super interesting to be able to perceive something that we normally cannot. It’s like trying to imagine a colour that we have never seen before. I’m having some trouble thinking and writing this as it is hard to try to imagine what’s next in terms of the senses one can use to perceive the world.

Still, it’s good that there’s proof that humans and technology can unite as one, as long as the human has enough exposure to the technology that it becomes an intuitive, incorporated tool, like an extended organ. There are still many possibilities for this in the future.

However, I think it’s still a very primitive way for us to perceive. Most of these extended senses require the sacrifice or use of some other sense. For example, the eyeborg requires the user to have a sense of hearing in order to be used. The Third Thumb also requires the use of the big toes. These means of controlling or using the ‘new’ senses are still not perfect, and could be more incorporated into the brain so that they can be directly sensed or directly controlled.


One thing I find really interesting about Neil is how he dreams in his new sense. After thinking about it for a while, this felt very familiar to me, as I have had dreams about games where the mechanics and everything work as they do in the actual game. These mechanics in my dreams are also vivid, really emulating how I would play the game in real life. What I’m trying to say is: by Neil’s definition, we are all already cyborgs!


There is also a question of ethics. Is it ethical to allow people to customise themselves so that they can perceive new senses? Challenges that define human life can be nullified. How will we define humans then? Will we like what the future brings? Will it cause more discrimination or segregation?

I think this is not something we should worry about, as we will eventually adapt to it and like it, whether or not we would approve now, speaking from our perspective as non-transhumans. If something is beneficial to us as a whole, it will eventually become widely accepted.


Another con is how intrusive the eyeborg is. As it is directly connected to the brain, there are many ways it could cause harm. If his device were hacked, he could be sent massive amounts of information that overload his head with stimuli. It also has the potential to cause infections.

Altered ways

There are so many ways. It’s rather about what is necessary. For Neil, it’s to fix his condition. For us, what would be more important? An extension to our sight may only be a nuisance, as there would be more things to take into account, and we would need to learn how to use it.

But objectively, an alternate use of the eyeborg could be 360-degree vision, which would allow us to see behind and beside us. That would be really good and convenient for us.

The next step

Perhaps instead of routing a new sense through an existing one, the next step is to create a whole new sensation. If we are able to link the tech directly into the brain and create a new sensation for the extended sense, that would be cool. It’s just like how our sense of balance is something we don’t really think about sensing; our body just does it. If this new sense could sense incoming weather, we would just have a ‘feeling’ and ‘know’ that it’s about to rain and bring in our clothes. There are so many ways!!!


Device of the Week 2 : Google Home

What is IoT?

Internet of Things (IoT) is “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.”

Communication between machines (Machine-to-Machine AKA M2M) uses a network without human interaction by collecting information, then storing and managing it through a cloud network. An IoT device takes the concept of M2M, but connects people, systems, and other applications to collect and share data in one big web.

Image taken from

Below is a TED-ed video about IoT.

Some examples of IoT devices are:

  • smart door locks (remote control using phone, access to usage or status information for user monitoring, proximity unlocking based on user information and settings)
  • home systems (control and monitor lights, air-conditioner, water, doors, etc; personalised automation based on user information and settings)
  • Wearable devices (tracks and sends user data and alert under certain circumstances)

More here!:

18 Most Popular IoT Devices in 2019 (Only Noteworthy IoT Products)

Human-technology integration

I think IoT devices are usually very personalised, and home use is one of the most common applications. They are designed to be interacted with almost constantly, integrated into places or objects that we always use or are always exposed to. Our reliance on these devices can be more beneficial than harmful, as they fill in functions in our lives that improve our wellbeing and productivity. I think these devices will eventually become a common part of human life as technology slowly gets integrated into our bodies.

How can we design an IoT device that does more than just personalising our lives? This is one question I ask myself as I continue to think of an idea for my project… :’)


Google Home

Screenshot from

In my device of the week, I’ll be looking at Google Home. Google Home is a smart speaker with Google Assistant installed. It is able to connect with home appliances like water sprinklers, garage doors, and lights, as well as smart TVs and mobile devices. The Google Home also feels like a helpful assistant in the house, as the built-in Google Assistant has a good mic and speaker system that allows it to hear commands and give replies around the house. The assistant can answer questions relating to one’s personal life, like “what is my schedule for today?”. It can also answer general questions like “who is the president of the USA?” by connecting to the Google search engine. Beyond these functions, the assistant can help users with a myriad of tasks, like “play PewDiePie on YouTube on my TV” or “set an alarm for 3:30PM”, or simply play music. The system also has games that can be played by a group of friends, and it’s actually quite entertaining!

Screen recording from

Some screenshots from the website for us to see its functions in a concise way:

Google Home can control many types of audios on many devices
Entertainment too!
Many other ways to customise and control your home with Google Home
Extra things that the Home provides like games!


As for the interface, the Google Home is a very simple device. It has only one physical button; the rest is all on the touch surface on top of the device.

Here are the interactive elements of the device:

Screenshot from
Screenshot from
Screenshot from
Screenshot from
Screenshot from

These functions are the more ‘user interface’ kind, where the device is used like a household product that users have to learn. Beyond these, the Google Home’s Google Assistant responds to commands like “Okay Google” and “Hey Google”. These functions are more humanistic and intuitive, serving more like an assistant that talks to you through a speaker. This makes the Google Home a very unique product, as it is both a product and a service.

So, we’ve gone through its uses and its interface. What is the IoT in the Google Home?

Image taken from

Google Home involves a few IoT systems that work together to form an integrated whole. The chart above shows how the system basically works: Google communicates through IoT with a cloud service, which communicates back to the appliance you wish to control. This is done using “device types” and “device traits”.

“Google’s NLU engine uses types, traits, device names, and other information from the Home Graph to provide the correct intent for your IoT cloud service.”

Image taken from

Device types are names given to every device to distinguish between the different devices that are connected to the cloud. In the above table, you can see that the light is named “action.devices.types.LIGHT“.

Device traits are commands that the device type is able to obey, allowing for user control. You can’t command your light to explode, but you can certainly change the brightness of your light using “action.devices.traits.Brightness“.

After that, there is the Home Graph. It is Google’s database that keeps all your devices’ contextual information. It stores the data of the devices like its name, its location, its functions, etc. This allows Google to understand which command is meant for which device. For example, you can say “Turn off the kitchen lights“, and Google is able to map out which lights belong within the kitchen, and switch it off.

Below are some examples of those IoT processes.

  • User switching on and controlling a smart appliance (eg. living room air-conditioner): User inputs command to mic > Command is translated into a device type and device trait in the cloud > Home Graph finds the path to the appliance > Appliance switches on > Appliance sends data to the mobile device via the same network > Device can control the air-conditioner > LOOP
  • User switching on entertainment (eg. Netflix on TV): User inputs command to mic > Identify device type and trait > Cloud > Home Graph > Appliance receives data and switches on the entertainment
  • User using Google Assistant (eg. asking for internet information): User inputs command to mic > Assistant searches the web > Assistant replies with the most relevant and reliable answer (not sure if this process uses the cloud?)
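The type/trait naming can be pictured as a lookup: a command is routed only if the named device exists in the Home Graph and its device type supports the requested trait. Here is a toy model of that check; the device names and the supported-trait table are my own illustration, though the type and trait identifiers are the real Google ones quoted above.

```cpp
#include <map>
#include <set>
#include <string>

// Toy Home Graph lookup: a command is routed only when the named device
// exists and its device type supports the requested trait.
bool canHandle(const std::map<std::string, std::string>& homeGraph,          // device name -> type
               const std::map<std::string, std::set<std::string>>& traits,   // type -> supported traits
               const std::string& deviceName, const std::string& trait) {
    auto dev = homeGraph.find(deviceName);
    if (dev == homeGraph.end()) return false;  // unknown device
    auto tr = traits.find(dev->second);
    return tr != traits.end() && tr->second.count(trait) > 0;
}
```

So "change the brightness of the kitchen light" routes, while "make the kitchen light explode" fails the trait check, matching the point above that a device only obeys the traits its type supports.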


Pros

  • Very VERY convenient
  • Intuitive and well integrated in home and personal settings
  • Helpful in situations where one’s hands are not free, or when one needs to get things done on the go
  • Fits well in healthcare, as its monitoring system can automate tasks in emergencies without the person even knowing first (eg. heart attack)
  • The future of the home, where everything is controllable without actually touching anything


Cons

  • Needs time to set up
  • Users may have difficulty learning to use a device without buttons
  • Privacy issues regarding a device that collects personal data
  • Not very useful if one can just use their hands to operate the different devices
  • Data may be fed into an algorithm that teaches the device to learn your behaviour (privacy)
  • The AI may take over the world

An Alternate Use

I’m thinking: most IoT devices are home-based and personalised, which works really well, as personal information is most valuable in IoT devices; it lets the device cater to the personal needs of its users. What can we do to integrate IoT in non-home settings? Perhaps some kind of service AI that helps people navigate a mall, like a directory. It could also help users book slots or tickets for certain shops. This is still very personalised, although it has been transformed into a public service.

Another way to use the device is perhaps in the military, where it could give real-time feedback from the sensors inside, say, an F-16 jet. As fighter jets are always under high stress, a lot of issues arise. It currently takes a team to download data from the jet after every flight (or after every fault) to examine its flight data and locate the fault. The use of IoT would make everything much easier: as soon as the plane touches down, it connects to a network that allows the new Google Home to communicate with the IoT devices inside and check the fighter jet’s internal systems.


Image taken from

The Google Home is not just an IoT device that integrates the technology nicely to do a myriad of things that are super beneficial to us all; it is also an AI that humanises the experience of using such a device. That gives a user-centric experience which allows it to perform its functions seamlessly and intuitively. Overall, Google Home is a great example of an interactive device that fulfils everything that should exist in one.


BART-I (Butt Eye)

Back Awareness Response Transmitter I

This interactive device makes use of two infrared sensors to detect the presence of obstacles behind you, which is particularly useful in places where you may potentially block someone’s path (narrow paths, etc). It can also be used as a defensive tool, for early warning of someone sneaking up on you.

How it works:

  1. The wearer wears this like a belt.
  2. Flick the switch to turn it on.
  3. If something is in the proximity of one of the sensors (eg. the left side), the left vibration module will start to vibrate.
  4. The vibration varies with proximity. As the strength of the vibration cannot be controlled, I only controlled the frequency of vibration in relation to the proximity: the closer something is, the more frequently it vibrates. This is similar to a car reversing system.
  5. If the obstacle is between both sensors and both sensors pick it up, both modules will vibrate, indicating that the obstacle is in the middle. But the threshold for this is too low (I think), so it isn’t very sensitive.
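The proximity-to-frequency mapping in step 4 can be sketched as a function from distance to the gap between vibration pulses. The range limits and timings below are placeholders, not the project's actual values; the shape of the mapping (shorter gaps when closer, like a car reversing sensor) is the point.

```cpp
// Map IR proximity to a pulse interval: the closer the obstacle, the
// shorter the gap between vibration pulses. Limits and timings are
// illustrative placeholders, not the project's real calibration.
int pulseIntervalMs(int distanceCm, int minCm = 10, int maxCm = 100) {
    if (distanceCm >= maxCm) return -1;   // out of range: no vibration
    if (distanceCm <= minCm) return 100;  // very close: fastest pulsing
    // Linear interpolation between 100 ms (near limit) and 1000 ms (far edge).
    return 100 + (distanceCm - minCm) * 900 / (maxCm - minCm);
}
```

On the board, the loop would vibrate the module briefly, then wait this many milliseconds before the next pulse.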


  1. The vibration is very strong, so it can be uncomfortable, and that cannot be controlled.
  2. If the wearer is leaning against a wall, it will keep vibrating. To solve this, one can simply switch it off.
  3. The form isn’t fitting; it could be more flexible and concave rather than convex.


Template for the form

Putting it together into a half-sphere

Trying out the sensors


Putting the IR sensors on

Putting the switch and the battery pack together

The final form!!!


Base system:

How the vibration module and IR sensor working together

This is some extra stuff I did to make the vibration not repeat itself if it stays within the same range for at least one cycle of the code. It didn’t work well, so I didn’t use it.


Multimodal Experience

Butt Bell

Eyes on the back / Back-awareness Warning

I was burnt out after a series of try-harding to make everything good because #newsemnewme, but after two weeks of being uninspired, I’m feeling better now, and I think I’m going to go with a less tedious project.

While in my burnt out phase I made this mindmap:

I kind of broke down the types of messages there can be, as well as what forms a wearable can take. I was stuck on the idea that a notification must come from a phone.

During the lesson last week, we were introduced to the 8.5 head human figure, which allows us to accurately depict our device on a human body.

Along with the lesson, I was also told that a notification can even be something outside of the mobile phone, like a touch or press of an object that is translated into a vibration on another person’s forearm.

With that, I felt more at ease knowing there are more ideas to explore. So I started this new mindmap

Very unclear, what I highlighted are:

  • money transactions
  • water plants
  • knock on glass window
  • every website visit / instagram
  • friend proximity
  • reverse / who’s behind (eye on butt)

I found that notifications should always be personally catered to an individual. The individual should care about the notification; it shouldn’t just be something sent for a cause. As such, the ones I highlighted are the more feasible ones (except for the one I struck out, which I initially included thinking it would be nice if it could work together with my Interactive Spaces project).

The Device


I was interested in the idea of having a sensor on the back that can detect people behind us. It works as a kind of rear-view mirror for people, a sixth sense for us to ‘see’ what’s behind us. I like it because I think it’s interesting to find ways to enhance our senses, and wearables are perfect for that. Also, I have found way too many times that I have had to pull my friend aside when he or she was blocking someone from passing.

My idea for the wearable is a belt that sends a signal to the shoulders. My rationale is that a belt will be stable and inconspicuous. The belt also keeps the sensor always facing the wearer’s back, as that’s the only side of the body it can face. The shoulder, because the vibration should feel like a tap on the shoulder, prompting the person to move away when there is something or someone behind them. The notification will also tell the user whether the incoming object or person is coming from the left or the right side.

A quick sketch of how this will look like in comparison to the 8.5 head model

The Interaction

So how this device should work is: if something is in the vicinity, at least one sensor should pick it up. The Arduino will then compare both ultrasonic sensors: if one side reads higher than the other, it will notify the user on the correct side. If both sides detect a similar output (within a certain range), both shoulders will vibrate to alert the user that the object or person is directly behind. The intensity will be determined by closeness. The sound will come in the form of short beeps with decreasing intervals as the gap gets smaller, just like a car in reverse.
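That left/right/both comparison can be sketched as a single decision function. The range and "similar output" band below are hypothetical thresholds of my own, standing in for whatever calibration the real device would use.

```cpp
#include <cstdlib>

enum class Alert { None, Left, Right, Both };

// Decide which shoulder(s) to tap from the two ultrasonic readings (cm).
// Readings within sameBandCm of each other mean the object is directly
// behind. rangeCm and sameBandCm are illustrative placeholders.
Alert shoulderAlert(int leftCm, int rightCm, int rangeCm = 80, int sameBandCm = 10) {
    bool leftHit  = leftCm  < rangeCm;
    bool rightHit = rightCm < rangeCm;
    if (!leftHit && !rightHit) return Alert::None;
    if (leftHit && rightHit && std::abs(leftCm - rightCm) <= sameBandCm)
        return Alert::Both;                      // roughly equidistant: directly behind
    return leftCm < rightCm ? Alert::Left : Alert::Right; // nearer side wins
}
```

The vibration intensity and beep interval would then be scaled from the smaller of the two distances, car-reverse style.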

Problem: if the user is leaning on something on purpose, the device will keep beeping and vibrating. To solve this, I would make it so that the user can switch the device off at any time, to prevent disruption or disturbance.

Something more advanced… If I can do it

I wonder if I can change the idea so that the Arduino detects a change in speed (acceleration) of oncoming ‘entities’, essentially detecting changes in the environment so that a static environment will not trigger the system. This would solve the problem mentioned above. We shall see.

Device of the Week 1 – Health

Automated External Defibrillator

Screenshot taken from

According to this article, an AED is:

a lightweight, portable device that delivers an electric shock through the chest to the heart. The shock can potentially stop an irregular heart beat (arrhythmia) and allow a normal rhythm to resume following sudden cardiac arrest (SCA). SCA occurs when the heart malfunctions and stops beating unexpectedly. If not treated within minutes, it quickly leads to death.

How an AED works is… (here’s a video but I will also explain through text)

First, responders open the case to retrieve the AED, which alerts a nearby medical response team to the AED’s location. The responder has to bring the AED to the victim, open it up, and switch it on. The AED then guides the responder through what to do, step by step. The AED contains adhesive electrodes which, when pasted on the victim, detect the victim’s heart rate and help assess whether defibrillation is needed. If a shock is required, the AED will charge up and notify the responder to press the button to deliver the shock. The shock momentarily stuns the heart, causing it to stop for a brief moment. This helps to ‘restart‘ the heart so it will begin to beat effectively again. After which, CPR should be performed to sustain the heartbeat until medics arrive.

The AED is a really frightening yet fascinating device. It is meant to save a life, and yet it is made for public use when emergencies arise. The amount of responsibility this device gives its users is very, very great, despite how easy it is to access and use.

Device Breakdown

At the most basic level, the AED must be able to perform its main objective, which is to defibrillate, and as such requires hardware that delivers the required electricity and a pulse detector for feedback. To perform its function as a public life-saving device, it needs to be portable and as user-friendly and intuitive as possible, so that even the most unskilled person can attempt to use it in an emergency. It must also deploy quickly and teach its user as quickly as possible, for, according to this source, the chance of surviving cardiac arrest drops by 7-10% with every minute of delay.

We can analyse how these points fare in the AED with reference to the image below:

Image taken from
  1. The AED has 2 obvious buttons that stand out first: the green one is the power switch and the red one delivers the shock. These buttons are easy to spot for most people who operate modern devices, allowing them to recognise and operate the AED efficiently. The buttons are also large and rounded, which allows for a quick and easy press.
  2. The image on the AED clearly shows the user where to place the shock pads. The pads themselves also have images on them showing how they are used, which lets responders apply them quickly.
  3. There is a “PULL” label telling one to pull the case open, although the directional arrows are not very strategically placed. If not for the cut in the glass case, I would have tried pulling the part surrounding the label itself. That is not good interface design. This compartment probably holds the shock pads, which were taken out for display in this image.
  4. The overall form of the AED is very compact and hand-held friendly.
  5. Although not shown, the AED probably has vocal instructions that people can follow, which I think helps a lot as we want to be told what to do in a stressful situation.
  6. However, it could be better, as there are no text instructions; some responders may not be able to hear clearly or may be deaf. Text can be simplified to communicate easily during high-stress situations, but it may affect the compact form of the AED.

Overall, the device has a good form and an easy-to-understand interface that allows the AED to perform all of its functions as effectively as possible. Even though the process puts responders under high stress and is fairly complicated, the interface allows people to use it quickly and correctly, saving many lives. So I would say it has successfully packaged the entire defibrillator into a portable public tool, transferring the medic’s responsibility to the public, where a member of the public can help save a life just as easily as he or she operates a toilet bowl. The article below is an example of how an AED can change the lives of many.

Civilians Help Man With Heart Attack In HDB Flat; Prove AEDs Can Help Save Lives

If anything were to be improved, it would be to add text instructions for the hearing impaired. Another point of improvement would be the inclusion of a female image alongside the male one, or an androgynous figure, so people will be able to respond to female victims confidently as well. See this link for why we should not always use a male body as the reference when designing.

My Suggestion

I do not wish to disrespect the invention of the defibrillator, but I like the idea of using a defibrillator to do the opposite of what it is designed to do. The device is designed to prevent death, so what if it caused death instead? Of course, not to humans, but to mosquitoes / flies / annoying bugs.

Imagine a portable bug-killing device where you go around zapping them! Would anyone be afraid of a cockroach now? Could it be turned into a game? It will definitely not be ethical as it can be misused GREATLY, so it’s probably not a good idea. Still, it would be cool to own a smaller, non-lethal-voltage ‘stunner’ that can stun or kill insects.

I really can’t think of anything else that a defibrillator can be modified into…



Colour-to-text converter

How the device works

This device is an analog machine that provides full tactile and visual feedback through the use of dials, a button, a toggle switch, an LED strip, and a computer screen. The device ‘converts’ coloured light into text. Users turn the dials to manually alter the colour’s red, green, and blue values. After selecting a colour they like, they press the red button to ‘save’ the colour. The machine can save up to 10 colours. Users can then send the message to the computer by pushing the toggle switch down. This puts the machine into ‘transmit’ mode, and it promptly replays the selected colours on its screen. Meanwhile, the ‘translation’ happens in real time, displaying the translated words on the computer screen. Users can press the red button again to repeat the transmission, and flick the switch back up to return to ‘record’ mode.

In a nutshell:

In record mode (toggle switch up)

  • dials: control RGB values on the LED
  • button: record (up to 10) colours

In transmit mode (toggle switch down)

  • dials: do nothing
  • button: repeat transmission

*Note: the labels at the toggle switch and buttons no longer match their functions, as the concept has changed. Switch ‘up’ should mean “input”, switch ‘down’ should mean “transmit”, and the red button should be “record”.


Thanks Fizah for helping me take these SUPER HIGH RES PICS!!! And Shah for playing with the model!

Why this device?

In the context of a trade show, I would use this device as an attractive ‘toy’ to draw people to the stall. The manual control of the LED strip lets visitors test the strip’s potential while also having fun.


I wanted to make this as I was really excited about the idea. I also wanted to improve my workflow and the way I treat school projects. I was deeply inspired by the group last year that did the Light Viola, and by Zi Feng in the way he documents and works on his assignments. After this project, I found that I have a real interest in making physically active and immersive experiences, things people can play around and fiddle with.


I started with simple sketches to illustrate my idea. I had a few different ideas. The one that stuck with me was the current setup with a different rule: the device remembers an input every 0.3 seconds, and the button sends all the words together in a string.

The sketch developed to this in class:

After the sketch and some consultation with Galina and Jeffrey, I went around finding scrap materials. Before this, I also went on a trip with Shah, Joey and Elizabeth to look for components and bought an LED strip (forgot what type it is), 3 knobs, 6 variable resistors, a button and a toggle switch. So with these, I pretty much knew what I wanted to do.

I went on to make the model. I found a large matte sheet of acrylic and decided that it was a good material for my casing. I laser-cut it after taking measurements to ensure that every component could be installed. I then bent it using a heat gun and a table (unfortunately there is no documentation of that). After that, I spray-painted the case and the other parts using the leftover spray paints I had from a few semesters ago when I was in Product Design.

For the case, I spray-painted from the back side so the front remains matte and the paint won’t be scratched. The bottom piece was sprayed black in the end to fit the aesthetics better. The other parts were sprayed in their respective colours.

This is how it looked after installation. YAY!

So after I was satisfied with the cover, I went on to work on the code. It was a long process of editing, adding, testing.

All my previous files. I saved them separately with each big edit in case I needed to return to a previous one

The first variant of my code follows my original idea, which works like this:

  • switch is on
  • input from the dials are recorded every 0.3s
  • each individual input will form a word
  • the words form a sentence
  • press button to send the message to the computer to be displayed

With this variant, I found that it kept repeating words, and that the user had to turn the knobs quickly enough to create an interesting mix of words. It was too restrictive in general, so I changed to the new idea, which allows users to take their time selecting the colours they want and to send the message as and when they wish.

The basis of my final code is:

  • The FastLED library, which runs the LED strip

Here are some examples of how FastLED works (CRGBPalette16, CRGB)

  • Timers

Timers helped me a lot in controlling variables. The code above constrains the variable ‘valuenew’ (which controls what word will be printed) during the ‘transmit’ stage. Here is how my timers work, for instance:

int timer = 1000;

int t = 10;

timer -= t;

if (timer <= 0) {

timer = 1000;

}


In this example, timer decreases by 10 every tick until it reaches 0, then resets to 1000 and repeats. If I want to control a variable, I can make something happen whenever timer <= 0.

How ‘valuenew’ controls the words printed
  • Modes

The above screenshot is an example of a mode. Each variant of ‘memory’ stores a set of instructions, and that set of instructions runs when the mode is changed.

Another way I used modes is as such:

int buttonpress = digitalRead(button);

if (buttonpress == 1) {

mode = 1;

}

if (mode == 1) {

Serial.println("something should happen here");

}

In this example, if the button is pressed, the mode switches to 1, which causes the serial monitor to print “something should happen here”. It’s a very versatile way of coding, although it can get clunky in complex code.

In Processing

The code in Processing is very simple. It basically receives input from the serial port and displays it on the screen as text. There was only one problem: if there is nothing in the serial buffer, the read returns ‘null’, which causes a NullPointerException and crashes the program. What I did to counter it was to only run the display code when the input is not null.

Physical Model Part 2

I continued with my model while working on the code, as a way of taking a break from constantly pulling my hair out over code. I went on to use foam to cover up the rest of the form, as it is a versatile material for my oddly-shaped case. It also fits the aesthetics. I also used a white corrugated plastic sheet as a screen as I wanted a diffused look.

I forgot to mention wiring. I soldered the wires onto the components and learnt the best way to solder from Zi Feng. It was amazingly easy after he showed us the magic!

The wiring isn’t as complicated as it looks; it only looks messy because I used the same wire for every component. The casing really helps a lot to keep things organised.

This video was recorded in my Instagram story last weekend, when it was pretty much finalised.

Photo Documentations