Information Arts: Intersections of Art, Science and Technology

This encyclopedia-like book by Stephen Wilson investigates the relationships between art, technology, research, and science, showing how deeply interconnected they are.

A brief introduction

Technology is always associated with science, and science is always associated with the frontiers of technological advancement. But this notion of specialised roles for science and art is an idea that only emerged around the Renaissance. Art making and scientific research actually go through similar methodologies, and both push the boundaries of technology. For the longest time, people have been creating and inventing, figuring out how something works before knowing why it does. This intuitiveness and creativity generate new ways of using technology, while a deeper understanding of its mechanisms pushes it further; all in all improving and generating new technology. In this manner, art and science have worked hand in hand in the creation of technology. Can we break away from the notion of art being “creative” and science being “technical”? This book aims to address that question and see how art and science are coming together in the information age.

The book

Included in the book are some of the best research-inspired artworks that Wilson believes to be thought-provoking and revolutionary, in the hope of challenging our notions of art and science.

Wilson explains the relevant ideas in understandable chunks in the introduction, followed by a categorisation of works based on a group of topics (e.g. “Biology: Microbiology, Animals and Plants, Ecology, and Medicine and the Body”). Within each category, Wilson further explains information relevant to the topic and lists a few artists who work with such ideas.

For example, in the Biology category, Wilson explains bionics and cites examples like a nerve chip created by Stanford researchers that reads nerve signals, decodes them, and operates prosthetics. Wilson then goes further in depth to look at individual artists who delve into different aspects of biology. In the “Medicine and the Body” subcategory, for instance, he lists notable artists like Stelarc, Marcel.li Antunez Roca, and ORLAN, along with their notable works, brief descriptions, and other relevant information.

In “Third Hand”, a manipulable robotic arm is attached to the body and activated by the host via EMG signals (sometimes from other body areas), or tele-operated by others. Images taken from https://stelarc.org/?catID=20265

Personal thoughts

I had no time to read through everything, but I was really interested in many of the examples and ideas he listed, especially under the “Medicine and the Body” section, as that falls within my current interests. It is a very comprehensive and informative book on a topic that is really relevant to our current time.

I am also interested in learning more about what he says about science, art, and technology. I guess I find it relatable because pop-science (despite its bad reputation for being too watered down) really inspires me. Video channels like VSauce and Kurzgesagt shaped my ideas and thoughts to where I am now, and I love to base my projects and works around these ideas.

Source:

Wilson, Stephen. Information Arts: Intersections of Art, Science and Technology. Cambridge, MA: The MIT Press, 2002.

DoW 3: Metaphor: AudRey

Image taken from https://www.wearablemedia.co/audrey

AudRey is a garment piece that analyses its wearer’s Instagram presence and reflects it in a ‘digital aura’ that surrounds the user in augmented reality. 

The aura (3D AR particles) emitted from the dress symbolises one’s digital presence: it uses IoT services to interpret information from the user’s Instagram feed in terms of its colours, comments, and likes.

This combination of augmented reality and fashion explores the potential of wearables in a future where the virtual world and reality meet.

Image taken from https://www.theverge.com/2018/4/14/17233430/wearable-media-fashion-tech-nyc-ceres-jumpsuit-interactive
Image taken from https://www.wearablemedia.co/audrey

The augmented reality is coded into the patterns on the garment, which reveal the effect when scanned with a custom app made using Unity and the Vuforia API. In the app, the patterns appear to leave the garment and orbit around the user. The garment is made by heat-transferring vinyl onto neoprene textile, fastened by 3D-printed fasteners made using 3D textile printing technology.
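As a rough illustration of how such an orbit effect can be driven once the app anchors a pattern in 3D space (my own sketch in plain C++, not Wearable Media’s actual Unity code), each particle simply traces a circle around the wearer’s anchor point:

```cpp
#include <cmath>
#include <cstdio>

// Toy sketch of the "orbit" animation: particles detach from a detected
// pattern anchor and circle the wearer. Illustrative only.
struct Vec3 { float x, y, z; };

// Position of one particle at time t, orbiting the anchor at a given
// radius and height offset, with angular speed in radians per second.
Vec3 orbitPosition(Vec3 anchor, float radius, float height,
                   float speed, float phase, float t) {
  float a = phase + speed * t;
  return { anchor.x + radius * std::cos(a),
           anchor.y + height,
           anchor.z + radius * std::sin(a) };
}

int main() {
  Vec3 wearer{0.0f, 1.2f, 0.0f};  // anchor roughly at chest height
  for (float t = 0.0f; t <= 1.0f; t += 0.25f) {
    Vec3 p = orbitPosition(wearer, 0.5f, 0.2f, 3.14f, 0.0f, t);
    std::printf("t=%.2f  pos=(%.2f, %.2f, %.2f)\n", t, p.x, p.y, p.z);
  }
  return 0;
}
```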

Pros

  • A cool dress that you can wear to dress well, with an added ‘twist’
  • Allows one to be aware of their social media presence and wear it out like a badge
  • Opens up a new layer of fashion and self-expression using technology

Cons

  • Inaccessible to many if the effect is restricted to the custom-made app
  • Every garment’s patterns have to be unique for personalisation to work
  • The wearers themselves cannot see the effect
  • Does it work in mirrors?! (Don’t think so? The patterns would be flipped)

Don’t know if pro or con

  • The inclusion of AR through a custom mobile app makes the effect non-intrusive, as nobody but the user (or people with the app) is able to see your data

Alternate uses

This technology is currently being used in museums to bring paintings to life. An example is Kult’s Emoji Land, where a mural by Howie Kim is brought to life using AR.

Gif taken from https://www.howiekim.com/others

Imagine if we were able to do this with clothes, instead of just having particles flying around us. It doesn’t need to be particles escaping the garment either: the patterns could move, or create the illusion of the dress caving into one’s body. There are endless possibilities as to what we can do.

The Future

More customisability would be good for the dress. If one were able to design their own effects, choose what they want to show, or draw from a database of patterns to reveal, that would be really useful in bringing such wearable tech to the market.

If everyone had something like Google Glass, this would be a cool new way of looking at fashion. It would be even cooler if our clothes had no patterns at all and people were still able to see the AR effects by scanning the clothes we are wearing.

I think there’s a need for a default AR program on our phones (or an actual, real Google Glass) as more of such technology emerges. Facebook has already developed an AR program, Spark AR, that allows people to create their own AR filters. The next step is to make viewing AR much easier.

Links

https://www.wearablemedia.co/audrey

https://www.theverge.com/2018/4/14/17233430/wearable-media-fashion-tech-nyc-ceres-jumpsuit-interactive

https://www.howiekim.com/others

Resource dump for Devices

This is so I don’t have 5212841203 tabs open at the same time

http://www.ia.omron.com/data_pdf/cat/ee-sy671_672_ds_e_6_3_csm486.pdf

https://github.com/jgoergen/CamBot

https://www.freetronics.com.au/blogs/news/arduino-eye-tracking#.Xa3NtZMzYWo

https://create.arduino.cc/projecthub/H0meMadeGarbage/eye-motion-tracking-using-infrared-sensor-227467

https://www.pololu.com/product/2458

https://ezbuy.sg/product/51000774099086.html?keywords=Reflectance%20Sensor&baseGpid=51000774099086&pl=2&ezspm=1.20000000.22.0.0

https://ezbuy.sg/product/168986219.html?keywords=Reflectance%20Sensor&baseGpid=168986219&pl=1&ezspm=1.20000000.22.0.0

Pitch Proposal : THIRD-I

https://oss.adm.ntu.edu.sg/a150133/category/17s1-dm3005-tut-g01/

https://unnecessaryinventions.com

 

Updates:

Design:

Tests:

The Interaction

  • Tracking mode
    • The IR sensors detect differences in distance
    • The motor turns to track the object, prioritising whatever is closest in its field of view (a minimal Arduino sketch of this mode follows this list)
    • The motor angle and IR distance info are sent wirelessly to Adafruit.io
    • The cowl receives this info and vibrates the corresponding vibration motor to indicate the sensed location
    • * A pitch is heard on either side of the ear to indicate proximity
  • * Sweep Mode
    • The IR sensors scan the surroundings like a radar
    • The motor sweeps 180 degrees to and fro
    • The motor angle and IR distance info are sent wirelessly to Adafruit.io
    • The cowl receives this info and vibrates the corresponding vibration motor to indicate the sensed location
    • * A pitch is heard on either side of the ear to indicate proximity
  • Eye movement Mode
    • The motor turns in the direction of the eye, stopping when the eye looks forward
    • The IR sensor detects proximity
    • The motor angle and IR distance info are sent wirelessly to Adafruit.io
    • The cowl receives this info and vibrates the corresponding vibration motor to indicate the sensed location
    • * A pitch is heard on either side of the ear to indicate proximity
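To make the tracking behaviour concrete, here is a minimal Arduino sketch of how the tracking mode could work, assuming a hobby servo on pin 9 and two Sharp-style analog IR distance sensors on A0 and A1. The pins, thresholds, and serial format are my own placeholder assumptions, not the final THIRD-I wiring.

```cpp
// Tracking mode sketch (illustrative): turn toward the closer IR reading
// and report "angle,distance" over serial for the ESP8266 to forward.
#include <Servo.h>

Servo eyeServo;
int angle = 90;  // start facing forward

void setup() {
  Serial.begin(115200);
  eyeServo.attach(9);
  eyeServo.write(angle);
}

void loop() {
  // Sharp-style IR sensors read higher analog values when objects
  // are closer (roughly, within their rated range)
  int left  = analogRead(A0);
  int right = analogRead(A1);

  // Step toward whichever side reads closer; the deadband avoids jitter
  if (left > right + 20)      angle = max(0,   angle - 2);
  else if (right > left + 20) angle = min(180, angle + 2);
  eyeServo.write(angle);

  // Report motor angle and proximity for the cowl's vibration/pitch feedback
  Serial.print(angle);
  Serial.print(',');
  Serial.println(max(left, right));
  delay(50);
}
```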

Communication flow (1 cycle)

  • The cowl’s Arduino sends mode info to its ESP8266
  • The cowl’s ESP8266 sends the mode info to the eye’s ESP8266
  • The eye’s ESP8266 subscribes to the mode info
  • The eye’s ESP8266 sends the info to its slave Lilypad, which acts accordingly
  • The Lilypad sends the motor angle and IR distance info back to the eye’s ESP8266
  • The eye’s ESP8266 publishes the motor and IR distance info wirelessly
  • The cowl’s ESP8266 reads the info and sends it to its slave Arduino
  • The Arduino actuates its vibration motor and pitch accordingly (see the bridge sketch after this list)
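Here is a rough sketch of the eye-side ESP8266 acting as that bridge, using the Adafruit MQTT client library to talk to Adafruit.io. The feed names, credentials, and the “angle,distance” serial format are placeholder assumptions for illustration.

```cpp
// Eye-side ESP8266 bridge (illustrative): subscribe to the mode feed,
// forward mode changes to the Lilypad over serial, and publish the
// Lilypad's "angle,distance" readings back to Adafruit.io.
#include <ESP8266WiFi.h>
#include "Adafruit_MQTT.h"
#include "Adafruit_MQTT_Client.h"

#define WLAN_SSID  "your-ssid"
#define WLAN_PASS  "your-pass"
#define AIO_SERVER "io.adafruit.com"
#define AIO_PORT   1883
#define AIO_USER   "your-username"
#define AIO_KEY    "your-aio-key"

WiFiClient client;
Adafruit_MQTT_Client    mqtt(&client, AIO_SERVER, AIO_PORT, AIO_USER, AIO_KEY);
Adafruit_MQTT_Publish   angleFeed(&mqtt, AIO_USER "/feeds/eye-angle");
Adafruit_MQTT_Publish   distFeed(&mqtt, AIO_USER "/feeds/eye-distance");
Adafruit_MQTT_Subscribe modeFeed(&mqtt, AIO_USER "/feeds/mode");

void setup() {
  Serial.begin(115200);  // serial link to the slave Lilypad
  WiFi.begin(WLAN_SSID, WLAN_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(500);
  mqtt.subscribe(&modeFeed);
}

void loop() {
  if (!mqtt.connected() && mqtt.connect() != 0) { delay(1000); return; }

  // Forward any mode change published by the cowl down to the Lilypad
  Adafruit_MQTT_Subscribe *sub;
  while ((sub = mqtt.readSubscription(100))) {
    if (sub == &modeFeed) Serial.println((char *)modeFeed.lastread);
  }

  // Publish "angle,distance" lines coming up from the Lilypad
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    int comma = line.indexOf(',');
    if (comma > 0) {
      angleFeed.publish((uint32_t)line.substring(0, comma).toInt());
      distFeed.publish((uint32_t)line.substring(comma + 1).toInt());
    }
  }
}
```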

What’s next?

  1. Receive the ezbuy parts so I can continue
  2. Settle the ESP8266 communication
  3. Settle the master-slave communication
  4. Get the necessary materials for a new, bigger eye
  5. BUILD THEM! And simplify the difficult parts

Human 2.0

What is Human 2.0?

Image taken from https://www.scientificamerican.com/article/human-2-0-tech-upgrades-for-the-nervous-system-cartoon/

Our affinity with technology is ever-growing. As technology gets closer and closer to us, the boundaries between human and machine get blurred. The idea of Human 2.0 is the fusing of technology with the human body as a means to augment ourselves, through prosthesis, wearables, implants, or body alteration, bringing us a spectrum of new senses and improved functions that cannot be experienced without the upgrade.

 

This idea isn’t new, as many Cyberpunk-styled animations, films, and games like Ghost In The Shell, Blade Runner, and Overwatch have already imagined scenarios of human-technology augmentation.

Hacking in “Ghost in the Shell”

Gif taken from https://giphy.com/explore/sombra

Sombra is one of the cybernetically enhanced characters in Overwatch, with augmented abilities like “hacking”.

Image taken from https://vocal.media/geeks/blade-runner-2049-has-a-villain-problem-and-we-need-to-talk-about-it

The Joker (of Suicide Squad), except he isn’t trying too hard and can see through drones, in Blade Runner 2049

These characters with augmented abilities can be categorised as cyborgs: people who are augmented to have capabilities beyond human through cybernetic enhancements. Cyborgs are a vision of Human 2.0 that is very popular in science fiction, especially in the Cyberpunk genre, and it’s becoming a reality. We are now at the beginning of the transhuman era, where we will all evolve alongside technology as we fuse with it. This began with analog augmentations like spectacles and simple prosthetics like wooden legs. Now, we have various technologies that allow us to surpass our physical limitations, like controlling a prosthetic arm with our mind and the Third Thumb prosthetic.

 

Case Study

Image taken from https://en.wikipedia.org/wiki/Neil_Harbisson

Neil Harbisson is the first person to have an antenna implanted in his head, which allows him to translate colour into sound by converting light into a frequency that he is able to hear as a note. This was done to combat his achromatopsia, a condition that lets him see only in greys. With this augmentation, he is able to use technology to perceive more than he previously could.

“I don’t feel like I’m using technology, or wearing technology. I feel like I am technology. I don’t think of my antenna as a device – it’s a body part.” He wears it to bed and in the shower.

He identified colours by first memorising and learning them. After getting used to the tones, he started to subconsciously perceive the information. Finally, when he began to dream in colour, he felt that he had officially become whole with the device; that was when the brain and software united, turning him into a cyborg.

In this video, he explains to the audience how his experience of the world has changed. He began to dress based on how it sounds rather than how it looks, and to eat in arrangements of his favourite tunes. He is also now able to hear colours, as he associates the tones with the colours he perceives.

Harbisson’s visualisation of Beethoven’s Für Elise. Image taken from https://www.theguardian.com/artanddesign/2014/may/06/neil-harbisson-worlds-first-cyborg-artist

He has since upgraded the ‘eyeborg’ so that he can hear infrared and ultraviolet, and added Bluetooth and WiFi connectivity.

Neil’s ability to process the new senses is possible thanks to “neuroplasticity”: the brain’s capacity to rewire itself as it tries to interpret information from a stimulus. Essentially, our senses all work the same way; they take in an input and convert it into information for the brain to process, which allows us to perceive.

How does the Eyeborg work?

Image taken from https://alchetron.com/Eyeborg
Image taken from https://alchetron.com/Eyeborg

The eyeborg receives light waves through the camera at the front; the implant in his head then converts that information into a sound pitch, which is sent to his inner ear through bone conduction.
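As a back-of-the-envelope illustration of the conversion (not Harbisson’s actual sonochromatic scale, just the general idea), visible light sits at roughly 430 to 750 THz; transposing it down about 40 octaves lands it neatly in the audible range:

```cpp
#include <cstdio>
#include <cmath>

// Toy model: transpose a light frequency down 40 octaves into the audible
// range. Illustrative only; Harbisson's real mapping uses his own scale.
const double SPEED_OF_LIGHT = 2.998e8;  // m/s
const int OCTAVES_DOWN = 40;            // divide frequency by 2^40

double wavelengthToPitchHz(double wavelengthNm) {
  double lightHz = SPEED_OF_LIGHT / (wavelengthNm * 1e-9);  // light frequency
  return lightHz / std::pow(2.0, OCTAVES_DOWN);             // audible pitch
}

int main() {
  // Red (~700 nm) comes out near 390 Hz; violet (~400 nm) near 680 Hz.
  for (double nm : {700.0, 550.0, 400.0})
    std::printf("%5.0f nm -> %6.1f Hz\n", nm, wavelengthToPitchHz(nm));
  return 0;
}
```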

Harbisson’s eyeborg has been exhibited at the ArtScience Museum in Singapore before. The sensor was connected via WiFi, allowing him to sense the colours detected in front of the sensor at the exhibition.

Image taken from https://amanz.my/2017138797/

Side note, but in one of the articles, I found it funny that Neil mentioned something that my Bart-I device deals with:

What next for cyborgism? “We’ll start with really simple things, like having a third ear on the back of our heads. Or we could have a small vibrator with an infrared detector built into our heads to detect if there’s a presence behind us.” Like a car’s reversing sensor? “Yes. Isn’t it strange we have given this sense to a car, but not to ourselves?”

“Precisely” – Me

Analysis / Critique

I think the eyeborg is only the beginning of what human upgrades can be. It is super interesting to be able to perceive something that we normally cannot; it’s like trying to imagine a colour we have never seen before. I’m having some trouble thinking and writing this, as it is hard to imagine what’s next in terms of the senses one can use to perceive the world.

Still, it’s good that there’s proof that humans and technology can unite as one, as long as the human has enough exposure to the technology that it becomes an intuitive, incorporated tool, like an extended organ. There are still many possibilities for this in the future.

However, I think it’s still a very primitive way for us to perceive. Most of these extended senses require the sacrifice or use of some other sense. For example, the eyeborg requires the user to have a sense of hearing in order to be used. The Third Thumb likewise requires the use of the big toes. These means of controlling the ‘new’ senses are still imperfect; they could be incorporated more deeply into the brain so that they can be directly sensed or directly controlled.

Dreams…

One thing I find really interesting about Neil is how he dreams in his new sense. After thinking about it for a while, this felt very familiar to me, as I have had dreams about games where the game mechanics work exactly as they do in the actual game. These mechanics in my dreams are also vivid and really emulate how I would play the game in real life. What I’m trying to say is: by Neil’s definition, we are all already cyborgs!

Ethical?

There is also a question of ethics. Is it ethical to allow people to customise themselves so that they can perceive new senses? Challenges that define human life could be nullified. How will we define humans then? Will we like what the future brings? Will it cause more discrimination or segregation?

I don’t think this is something we should worry about, as we will eventually adapt, and we are speaking now from our perspective as non-transhumans. If something is beneficial to us as a whole, it will eventually become widely accepted.

Intrusiveness

Another con is how intrusive the eyeborg is. As it is directly connected to the brain, there are many ways it can cause harm. If his device is hacked, he could be sent massive amounts of information that overload his head with stimuli. The implant also carries the potential of causing infections.

Altered ways

There are so many ways; it’s rather about what is necessary. For Neil, it was to fix his condition. For us, what would be more important? An extension to our sight may only be a nuisance, as there would be more things to take into account, and we would need to learn how to use it.

But objectively, alternate uses of the eyeborg could include 360-degree vision, which would allow us to see behind and beside us. That would be really convenient for us.

The next step

Perhaps instead of routing a new sense through an existing one, the next step is to create a whole new sensation. If we are able to link the tech directly into the brain and create a new sensation for the extended sense, that would be amazing. It’s just like how our sense of balance is something we don’t really think about sensing; our body just does it. If this new sense were able to sense incoming weather, we would just have a ‘feeling’ and ‘know’ that it’s about to rain and bring in our clothes. There are so many possibilities!!!

Links

https://www.forbes.com/sites/cognitiveworld/2018/10/01/human-2-0-is-coming-faster-than-you-think-will-you-evolve-with-the-times/#200d67ef4284

https://www.theguardian.com/science/2019/jul/13/brain-implant-restores-partial-vision-to-blind-people

https://www.theguardian.com/artanddesign/2014/may/06/neil-harbisson-worlds-first-cyborg-artist

https://interestingengineering.com/the-transhuman-revolution-what-is-it-and-how-we-can-prepare-for-its-arrival

Our Glass Castle, Their Grave: The Complete Collection of Concepts, Thoughts, and Feedback

Disclaimer: This post is written in a train-of-thought manner with my 1 remaining brain cell, so I’m sorry if it’s boring to read. I’ve highlighted the main points as much as I can; hopefully that helps.

Birds are susceptible to collisions with glass windows, especially if the glass is too clear or if it reflects the sky. ADM is a hotspot for bird collisions due to its highly reflective glass. According to the Nanyang Chronicle (2018), the glass windows in ADM are responsible for almost 1 collision per day.

Birds like to smack here.
Image taken from http://www.nanyangchronicle.ntu.edu.sg/News/2504bird.html

NTU sees a large number of bird species as it is surrounded by forests that have abundant food and resources, said Mr David Tan, an avian ecologist who has collected more than 700 carcasses of birds killed in local building collisions over the past five years. He uses the carcasses for research and analysis on the phenomenon.

NTU has many glass-panelled structures, with more than 95 per cent of its buildings in line with the Building and Construction Authority’s Green Mark scheme. Glass panels are common in green-marked infrastructure as they allow natural light to come through, reducing the need for electricity.

While such designs help to lower energy use, environmental policies often fail to consider biodiversity, said Campus Creatures member Gina Goh, 24, who leads NTU’s bird-building collision patrol team.

– From the Nanyang Chronicle, 2018

This installation aims to engage the public in creating awareness and solutions towards making the ADM building bird-friendly, hopefully prompting the building manager to take action.

This installation appears invisible to the unobservant eye, and its layers serve as the transparent obstacles that birds encounter every day in their flight. This supposed obstacle (if one were not paying attention) is a reflection of what the birds face and allows us to empathise with them. It is mostly inspired by the saran wrap pranks that people play on both their pets and other people:

Inside the installation is a space that is both confining and transparent, which creates a conflict between being in the open and being confined in a small space. This is also a direct reference to the birds flying into the Sunken Plaza in ADM where they are trapped in the enclosed plaza with the reflected illusion of the sky.

The window stands right in front of the visitors of the space, offering a clear view of the entire Sunken Plaza. Here, markers are placed on the window, and round stickers are placed in containers on the ledge, prompting visitors to paste a sticker onto a marker to make their mark in preventing birds from colliding with the glass. The stickers will be 2cm in diameter (subject to change) and spaced 5cm apart to create a grid that allows birds to see that there is some kind of obstacle. This video is an example of the grid:

 

The stickers are also in primary colours, much like Yayoi Kusama’s Obliteration Room, which I believe will beautify the space and allow for the participants’ expression of creativity in where they place the stickers and how they relate to the other colours.

Image taken from https://www.arch2o.com/obliteration-room-yayoi-kusama/

I want the visitors to paste a sticker for each bird they see flying in and around the plaza. I would like them to paste the sticker according to what they feel about the bird, and they are allowed to paste it anywhere on the grid. To fully interact with the work, they have to observe what is outside the window. However, visitors may still peel a sticker and paste it just for fun, which I will allow, since it is freely up to the visitors to use the space.

I considered making the sticker-pasting free from the grid, since the birds just need something pasted on the window to see that there is an obstacle; it does not matter whether there is a grid at all. However, I think a grid makes more sense as a whole to nudge the management to apply the idea, and it makes the installation more specific, which is always better than a vague one. Besides that, I don’t want my work to be like the Obliteration Room. If the grid fills up, I will allow participants to overlap the stickers so that it still forms a grid without straying too far from the pattern.

At the end of the day, the installation should look like an art piece by itself, filled by the public.

The aim of the sticker-pasting is mostly to educate people that such solutions exist. Ultimately, it is up to the management to do something about the bird problem. A really easy and elegant solution is to paste the grid stickers on the outside of ADM’s windows, which, if done properly, will not obstruct people’s view, will not make the building ugly, will not block light or reduce the amount of heat reflected, and will be friendly to wildlife. With this, I really hope my installation will create such an impact.

A newly integrated idea is to add paper birds inside the installation. These come from templates that participants can use to make their own birds. They can then colour and personalise them using markers, as well as add their own messages. The participants then hang their birds on a string using clips. The birds will be left hanging and thrown into a bin within the installation the next day, ‘deleting’ or ‘killing’ them. This references how close to one bird has been dying every day from window collisions at ADM. When the installation ends, I would like to collect the birds to create a more permanent display before recycling them.

The purpose of crafting and personalising is to create a sense of attachment between participants and their birds, such that they feel it is a pity that their birds will be discarded. This prompts participants to be more protective towards the birds and to take more responsibility for the issue, rather than remaining passive bystanders.

The paper birds will also be the physical representation of the birds that interacted with the glass window. Visitors can interact with them by swinging them, taking photos with them, or looking at how each bird is personalised. I hope they will be heavy enough to make a ‘thud’ if a participant swings too hard and hits them against the side walls. This will make the installation feel more impactful.

I am greatly inspired by Siah Armajani’s Sacco and Vanzetti Reading Room #3 ever since our visit to the exhibition as I really liked his idea of a space being not just meaningful but also functional as a communal space. I want my space to be the kind of public installation that is useable and functional. I imagine it as ‘interior design with interaction’. That is the ultimate form for my installation.

Beyond being a space for people to learn about the birds, I want the space to be a contemplative one for general use too: a quiet, calming space for people to just relax and look out of the window. However, with the transparent walls, I’m not sure I can achieve that, as the transparency makes the space not-so-private. We shall see, but that is not the primary concern; the most important thing is the birds.

Problems or things that can be done better

If I had more information about the types of birds that ADM encounters (I did not have enough time to observe them, and I couldn’t find relevant information online), I would make this installation specific to them. To work around this, I used different coloured stickers.

The work is time-based, with many layers that unfold over time, so I can only explain them during the presentation.

Further research other than what I’ve discussed before:

The top 4 most vulnerable-to-collision migratory species are the blue-winged pitta, yellow-rumped flycatcher, western hooded pitta and oriental dwarf kingfisher.

Bird collisions were most frequent during the fall, or autumn migration period between October and November compared to any other period during the migratory season for 2014/15 (Figure 1).

Image taken from https://singaporebirdgroup.wordpress.com/2015/05/15/migratory-bird-collisions-in-singapore/
Image taken from https://singaporebirdgroup.wordpress.com/2015/05/15/migratory-bird-collisions-in-singapore/

As one can see, the west is the second most common collision area, which is HERE! We need to start being bird-friendly!

Image taken from https://singaporebirdgroup.wordpress.com/2015/05/15/migratory-bird-collisions-in-singapore/

Migratory Bird Collisions in Singapore

Installation Process

After a few brainstorming sessions (with myself), I came up with some sketches to add on to my previous idea.

A brief pencil sketch of the installation setup
Top-down plan view to plan out how I want people to navigate through the installation
The plan view that I settled on
The final sketch of how I want the installation to look. Note there are still some changes, so it’s not exactly final, but… it’s close enough for me to work with!

After the sketches, I went to buy the materials. I bought a 7m by 1.8m, 0.3mm-thick transparency sheet from CN Canvas and Hardware, which cost $46 (ouch…). The transparency was a bit dirty and creased, so I had to make do with what I had.

I then tried to set up. Initially, I was planning on using the windows at the Level 1 lobby area due to its high traffic. Here are my tryouts with tape and some very quick setups. I also used emergency blankets, which I thought were an interesting material as they have both reflective and transparent qualities (just like the windows!), but they did not fit the aesthetic. Referencing Peter Zumthor’s Atmospheres, for my space to work well, the main qualities it needs are compatible materials, tension between interior and exterior, and intimacy.

Markings for me to take note of. This was actually done wayyyy before the current plan
I like that it is actually quite transparent even though the material has a lot of creases
With the emergency blanket, it looked too reflective on one side. Too silver

Love that it looks almost invisible here.

I began experimenting with how I should place the emergency blanket to make it interesting. I dropped the whole idea afterwards, as it contrasted too much with the other materials I was using.

 

I also tried using strings and realised that fishing line could hold up the transparencies reliably.

I then began to scout for a better location, as I disliked having 2 big holes sticking out that tell people my installation is there. I want it to be as invisible as possible.

I found that the staircase beside Level 2 works very well, as it provides much higher ground than Level 1, giving visitors much more to look at. It is also situated directly at the middle of the Sunken Plaza, which is better. Also, there is a mark left on the window by a fallen bird. I deduced this was from a bird last year that fell onto the shelter at the basement after colliding with the window.

A better view

I began my work:

Here are some photos of my setup. It’s not very easy to see, which is the main point, and I think the stairs made my setup a lot easier.

That was the end of my setup trial. I moved on to working on the other parts of my installation, mainly the print-outs and the paper birds.

My instructions panel to guide visitors on what to do inside the installation
My poster. I just thought it would really be nice to have this so…. hahahhahahahahahahahahahhahahaa
My origami bird template which I will cut for participants
The cardboard bird was my first prototype, and that’s when I thought: why not paper?
My original idea was to make a soft, fun-to-play-with bird, but it wasn’t going to work out…
Ah, so beautiful. This works really well and allows for a lot of customisability.

So, with everything I need and every test tested, I will now set up the installation and hope it does not crash during presentation. Stay tuned.

The Installation

The Stickers

FEEDBACK:

  • My friend Clara suggested making it about the mood of the birds, as the colours of the birds weren’t very strong or obvious. I liked that and took it in. However, I think there is more depth that can be added to the colouring of the birds.
  • The colourfulness is kind of off-putting, mostly because it was messy, but also because it makes the whole installation feel cheery, which was not really the point.
  • Very few birds fly past, so it’s a very slow process.
  • Shah commented that the dots look a bit disorienting. Perhaps I can use that to further disorientate visitors and sell the idea of disorientation, by looking at optical illusions, but of course with similar gap arrangements.

Birds:

FEEDBACK:

  • The birds were well received, as it was a fun activity for most.
  • People spend a lot of time on it, so the place crowds up a bit quickly.
  • To Clara, the hanging birds felt like they were desperately looking to exit the space. However, to Prof Biju, they looked happy. Perhaps there’s a need to find ways to make the birds feel more trapped.

The Space:

FEEDBACK:

  • The space felt calming despite being supposed to feel uncomfortable.
  • The transparency kind of works, as it’s safer that it’s visible.

OVERALL THINGS TO IMPROVE

As a whole, I think I need to be more specific about the feeling I wish to incite with my installation. Right now, I am more focused on the activism of the project than on the experience. That blinded me a lot, as I was only trying to make the space functional rather than considering how it feels. I don’t like art, especially art that does not add value. I want to say that out loud first, as I think it influenced a lot of my decisions. I want to make something that makes people act.

It’s safe to say the installation is very intuitive to navigate, as observed with different people who entered and did everything I wanted correctly without asking me anything. With that aside, I can now focus on the experience. The idea also sells, I believe, as many are shocked to realise what I tell them, and are sympathetic towards the birds.

However, I still need to work out exactly what I want them to feel, and how effectively I make them feel it. What I’m missing now is the discomfort. This is a bit conflicting, as I previously mentioned that I wanted the space to also be a nice place to chill at.

To combat these problems, a few solutions can be explored:

  1. Make the space more cramped, so that with more movement, people get more uncomfortable. But if one person goes in and looks out the window without moving, they should feel OK.
  2. Make the walls more illusion-like, so participants feel uneasy looking at the walls but OK looking at the window. This is where I can add projection to improve my project.
  3. Make the birds red, as suggested by Khairul, as red is associated with death and blood.
  4. Make the stickers a reddish hue, adding to the redness and hopefully making people uncomfortable.
  5. Throw the birds on the floor instead of using a bin. This makes the birds pile up over time, which adds to the discomfort, and works better as symbolism for the death of the birds. I will still get the participants to hang the birds, as that creates the attachment I want. I also find it very odd if participants were to just throw their birds onto the floor directly, as that makes it a very destructive act, which is… just odd for me. Also, I realise day and night have different moods. Perhaps the installation will look more sinister at night (which is when I test out most of the stuff), so it feels different.

Anyway, I tried this after the presentation and I feel that it becomes much more meaningful afterwards:

Finally, how can I augment my work digitally? Here’s my idea dump

  1. Use projection on my transparent walls to make them more illusionary; I can do this in a few ways.
  2. Use projection on the floor to portray dead birds, and with every sticker pasted, a bird vanishes. This is a bit too direct for me, but still an idea that can be entertained.
  3. Make the hanging birds spin? As if the birds are trying to fly out of the plaza, but that would be noisy. It would be interesting if the birds flew out of the clips as they swing.
  4. Make the birds drop over time using a mechanism that times it.
  5. Use Augmented Reality to depict the birds or something else on the stickers or the birds.
  6. Project crowdsourced images of dead birds.
  7. YOLO to detect birds?
  8. An ultrasonic sensor to detect glass hits.
  9. What if the birds are all digital and people release them, only for their bird to hit the window and die?
  10. Project dots on the Sunken Plaza exterior, and with each visitor’s exit, 1 dot is added or coloured.

Final words (after presentation)

I really regret messing up my presentation, and there’s no way to reverse that. After regaining composure and realigning my mind, I feel more confident working forward with the feedback I have. I just hope to be clearer about where I’m headed with this project and hopefully produce something more meaningful and less pretentious.

UPDATES: Enhancing with Digital

I have an update to my idea. Here’s what I have.

I decided to use more space, expanding my installation to cover at least 3 window panes. I will also split the installation up into 3 parts.

Materials to help prevent bird death:

https://www.birdscreen.com/index.php

https://www.birdsavers.com/

https://www.collidescape.org/

Information:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5756612/

https://www.straitstimes.com/singapore/environment/collision-into-buildings-cause-of-many-birds-deaths-study

https://pdfs.semanticscholar.org/4f2c/634fc135e9186be1a0d538767f8c1a529bbd.pdf

https://www.fws.gov/migratorybirds/pdf/management/reducingbirdcollisionswithbuildings.pdf

https://www.fws.gov/birds/bird-enthusiasts/threats-to-birds/collisions/buildings-and-glass.php

http://tinyurl.com/SGBirdCrash

 

Device of the Week 2 : Google Home

What is IoT?

Internet of Things (IoT) is “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.”

Machine-to-machine (M2M) communication uses a network without human interaction, collecting information and then storing and managing it through a cloud network. An IoT device takes the concept of M2M further, connecting people, systems, and other applications to collect and share data in one big web.

Image taken from https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT

Below is a TED-ed video about IoT.

Some examples of IoT devices are:

  • Smart door locks (remote control using a phone, access to usage or status information for user monitoring, proximity unlocking based on user information and settings)
  • Home systems (control and monitor lights, air-conditioning, water, doors, etc.; personalised automation based on user information and settings)
  • Wearable devices (track and send user data, and send alerts under certain circumstances)

More here!:

18 Most Popular IoT Devices in 2019 (Only Noteworthy IoT Products)

Human-technology integration

I think IoT devices are usually very personalised, and home use is one of the most common applications. They are designed to be almost constantly interacted with, integrated into places or objects that we always use or are always exposed to. Our reliance on these devices can be more beneficial than harmful, as they fill in functions in our lives that improve our wellbeing and productivity. I think these devices will eventually become common to human life as technology slowly gets integrated into our bodies.

How can we design an IoT device that does more than just personalise our lives? This is one question I ask myself as I continue to think of an idea for my project… :’)

 

Google Home

Screenshot from https://store.google.com/product/google_home

For my device of the week, I’ll be looking at Google Home. Google Home is a smart speaker with Google Assistant built in. It is able to connect with home appliances like water sprinklers, garage doors, and lights, as well as smart TVs and mobile devices. Google Home also feels like a helpful assistant in the house, as the built-in Assistant has a good mic and speaker system that lets it hear commands and give replies around the house. The Assistant is able to answer questions relating to one’s personal life, like “What is my schedule for today?”. It can also answer general questions like “Who is the president of the USA?” by connecting to Google’s search engine. Beyond these functions, the Assistant can help users do a myriad of tasks, like “Play PewDiePie on YouTube on my TV” or “Set an alarm for 3:30PM”, or simply play music. The system also has games that can be played by a group of friends, and it’s actually quite entertaining!

Screen recording from https://store.google.com/product/google_home

Some screenshots from the website for us to see its functions in a concise way:

Google Home can control many types of audio on many devices
Entertainment too!
Many other ways to customise and control your home with Google Home
Extra things that the Home provides like games!

Interface

In terms of interface, Google Home is a very simple device. It has only one button; the rest of the controls are on the touch surface on top of the device.

Here are the interactive elements of the device:

Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home
Screenshot from https://store.google.com/product/google_home

These are the more ‘user-interface’ kind of functions, where the device is used like a household product that users have to learn. Beyond these, Google Home’s Assistant responds to commands like “Okay Google” and “Hey Google”. These functions are more humanistic and intuitive, serving more like an assistant that talks to you through a speaker. This makes Google Home a very unique product, as it is both a product and a service.

So, we’ve gone through its uses and its interface. What is the IoT in the Google Home?

Image taken from https://medium.com/google-developers/iot-google-assistant-f0908f354681

Google Home involves a few IoT systems that work together to form an integrated whole. The chart above shows how the system basically works: Google communicates through IoT with a cloud service, which communicates back to the appliance you wish to control. This is done using “device types” and “device traits”.

“Google’s NLU engine uses types, traits, device names, and other information from the Home Graph to provide the correct intent for your IoT cloud service.”

Image taken from https://medium.com/google-developers/iot-google-assistant-f0908f354681

Device types are names given to every device to distinguish between the different devices connected to the cloud. In the table above, you can see that the light is named “action.devices.types.LIGHT”.

Device traits are the commands a device type is able to obey, allowing for user control. You can’t command your light to explode, but you can certainly change its brightness using “action.devices.traits.Brightness”.

Then there is the Home Graph, Google’s database that keeps all your devices’ contextual information. It stores data about each device, like its name, location, and functions. This allows Google to understand which command is meant for which device. For example, you can say “Turn off the kitchen lights”, and Google is able to map out which lights belong in the kitchen and switch them off.
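To make the idea concrete, here is a toy C++ sketch of how a Home Graph-style lookup could resolve the command “Turn off the kitchen lights” to the right devices and trait. This is purely illustrative, not Google’s actual Home Graph API (which lives in their cloud); the type/trait strings are the only parts taken from the table above.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

// Toy Home Graph: device names map to their type, traits, and room.
// Illustrative only; not Google's actual data model.
struct Device {
  std::string type;              // e.g. "action.devices.types.LIGHT"
  std::set<std::string> traits;  // e.g. "action.devices.traits.OnOff"
  std::string room;
};

int main() {
  std::map<std::string, Device> homeGraph = {
    {"ceiling light", {"action.devices.types.LIGHT",
                       {"action.devices.traits.OnOff",
                        "action.devices.traits.Brightness"}, "kitchen"}},
    {"aircon",        {"action.devices.types.AC_UNIT",
                       {"action.devices.traits.OnOff"}, "living room"}},
  };

  // "Turn off the kitchen lights": find kitchen devices that support OnOff.
  for (const auto& [name, dev] : homeGraph) {
    if (dev.room == "kitchen" &&
        dev.traits.count("action.devices.traits.OnOff")) {
      std::cout << "Sending OnOff(false) to " << name << "\n";
    }
  }
  return 0;
}
```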

Below are some examples of those IoT processes.

  • User switching on and controlling a smart appliance (e.g. the living room air-conditioner): user speaks command into mic > command translated into device type and device trait data in the cloud > Home Graph finds the path to the appliance > appliance switches on > appliance sends data to the mobile device via the same network > device can control the air-conditioner > LOOP
  • User switching on entertainment (e.g. Netflix on TV): user speaks command into mic > device type and trait identified > cloud > Home Graph > appliance receives data and switches on the entertainment
  • User using Google Assistant (e.g. asking for information from the internet): user speaks command into mic > Assistant searches the web > Assistant replies with the most relevant and reliable answer (not sure if this process uses the cloud?)

Pros:

  • Very VERY convenient
  • Intuitive and well-integrated in home and personal settings
  • Helpful in situations where one’s hand is not free, or when one needs to get things done on the go
  • Able to fit well in healthcare, as its monitoring system can automate tasks in emergencies without the person even knowing first (e.g. a heart attack)
  • The future of home where everything is controllable without actually touching anything

Cons:

  • Needs time to set up
  • Users may have difficulty learning to use a device without buttons
  • Privacy issues regarding a device that collects personal data
  • Not very useful if one can just use hands to operate the different devices
  • Data may be fed into an algorithm which teaches the device to learn your behaviour (privacy)
  • The AI may take over the world

An Alternate Use

Most IoT devices are home-based and personalised, which works really well, as personal information is what allows an IoT device to cater to the personal needs of its users. What can we do to integrate IoT into non-home settings? Perhaps some kind of service AI that helps people navigate around a mall, like a directory. It could also help users book slots or tickets for certain shops. This is still very personalised, although I have transformed it into a public service.

Another way to use the device is perhaps in the military, where it could give real-time feedback from the sensors inside, say, an F-16 jet. As fighter jets are always under high stress, a lot of issues crop up. It currently takes a team to download data from within the jet every day (or after every fault) to determine its flight data and locate the fault. IoT would make everything much easier: as soon as the plane touches down, it connects to a network that allows the new Google Home to communicate with the IoT devices inside and check the fighter jet’s internal systems.

Conclusion

Image taken from https://www.dropbox.com/s/4ry5ugkhxik1v9u/Screenshot%202019-10-06%2003.03.51.png?dl=0

Google Home is not just an IoT device that integrates the technology very nicely to do a myriad of things that benefit us all; it is also an AI that humanises the experience of using such a device. That gives a user-centric experience, allowing it to perform its functions seamlessly and intuitively. Overall, Google Home is a great example of an interactive device that fulfils everything that should exist in an interactive device.