Context

Upon the completion of the mid-term projects, we were tasked to further develop our concepts with the help of technology for our final project. My mid-term project was titled Dis-harmony, a simple anamorphic-perspective installation of a bonsai tree made out of found objects. The concept behind it was simple: to find harmony in the chaos that Covid-19 has caused in 2020. The idea of realigning ourselves to the ‘correct perspective’ and relocating the balance in our lives during this pandemic period led me to develop ‘the mind, the body and the sounds around us’.

Using a similar concept but a different approach, it incorporated the idea of spatial awareness and repositioning to find harmony not in a visual, but in an aural manner. In the description of the project, I explained that it was an attempt to bridge physical and psychological space, where we follow our innate inclination to make sense of the sounds around us.

First Iteration

The first version revolved around the idea of transitioning found shadows according to the repositioning of each person in the space. This meant a smooth transition of shadows that could be paired with background music to give the effect of ‘harmonizing’ these spaces by taking up different parts of the space. The main trouble I ran into was the inability to acquire similar shadow effects that synced with one another. I first tasked myself to create a video, excluding the interactivity portion, to get a rough understanding of how it would work, but it was difficult to source such materials in the first place. Initially, I had wanted to have people model the shadows for me, but with the short amount of time I had left, it was not a viable solution. As such, I had to rethink this idea and take a different approach to shadow manipulation.

Second Iteration

I wanted to make responsive generative patterns that would link the movements of the participants to the projection. The idea was that our actions would create a visual harmony in space, an interesting, almost generative method, except that it could be predicted by the movements in the space.

I really liked this ideation and would have pursued it further if given more time; time was its main limitation. It did, however, involve the use of Processing and Kinect, two tools that I was not particularly familiar with. The Kinect sensor was recommended by Bryan, our Interactive Spaces work-study, who suggested that it may be useful for tracking real-time positioning through the sensitive Kinect sensor. As such, I spent the bulk of my time on tutorials about Kinect and the software it could work with, which included openFrameworks, Processing, Unity3D and many others.

The depth and possibilities of Kinect were vast, and it created many opportunities to develop the project. More specifically, I was drawn to the idea of capturing raw depth data and computing it into different elements. There is an online tutorial on YouTube by Daniel Shiffman where he uses raw depth data to calculate the average position of the depth pixels and uses that to create particle effects in Processing.
Additionally, I found that Elliot Woods from KimchiAndChips has made significant progress in using the Kinect and calibrating it to projectors, creating interesting interactions such as, but not limited to, the manipulation of light as shown in this video –  Virtual Light Source. Due to my lack of understanding of the Processing software, the coding of the Kinect and generative modules did not come to fruition.
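For context, this is roughly what that averaging technique looks like. It is only a minimal sketch, assuming Daniel Shiffman's Open Kinect for Processing library and a Kinect v2; the depth range values here are illustrative placeholders, not a calibrated setup.

import org.openkinect.processing.*;

Kinect2 kinect2;

void setup() {
  size(512, 424);                 // matches the Kinect v2 depth resolution
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
}

void draw() {
  background(0);
  int[] depth = kinect2.getRawDepth();
  float sumX = 0;
  float sumY = 0;
  int count = 0;
  for (int x = 0; x < kinect2.depthWidth; x++) {
    for (int y = 0; y < kinect2.depthHeight; y++) {
      int d = depth[x + y * kinect2.depthWidth];
      // Count only pixels inside a rough interaction range (values in mm)
      if (d > 300 && d < 1500) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count > 0) {
    // The average position of the "near" pixels stands in for the participant;
    // a particle system could be driven from this point.
    float avgX = sumX / count;
    float avgY = sumY / count;
    noStroke();
    fill(255);
    ellipse(avgX, avgY, 32, 32);
  }
}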

Final Iteration

The final development was actually discovered through a series of ‘mistakes’. While trying out colour schemes in Processing, instead of creating uniform colour changes, much like a rainbow effect, I accidentally made a screen of white noise by using color(random(255)). During the experimentation the effect grew on me, and I finalized it when I added the audio and the two went together well. I chose to add audio as part of the project because it was difficult to manipulate the visuals on screen; using the Minim library, I was able to choose songs and add them into the Kinect projection. My impression was that, through the code I wrote, the audio would play if depth data was sensed on the screen, a two-outcome programme: play or no play. However, after testing the code, I realized that the audio reacted to the amount of data being fed into the screen: when the screen was fully filled, the audio played seamlessly; otherwise, the audio was choppy and incoherent, even annoying at points. This aligned well with my idea of ‘making sense’, and using audio I was able to create an unconventional effect that I had not initially pictured but that worked well in my favour.
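To illustrate the behaviour described above, here is a minimal sketch of the idea rather than the exact final code. It assumes the Open Kinect for Processing and Minim libraries and a Kinect v2; the audio file name, depth range and half-screen threshold are placeholders.

import org.openkinect.processing.*;
import ddf.minim.*;

Kinect2 kinect2;
Minim minim;
AudioPlayer player;

void setup() {
  size(512, 424);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
  minim = new Minim(this);
  player = minim.loadFile("song.mp3");   // placeholder file name
}

void draw() {
  loadPixels();
  int[] depth = kinect2.getRawDepth();
  int active = 0;
  for (int i = 0; i < pixels.length; i++) {
    int d = depth[i];
    if (d > 300 && d < 1500) {
      // A body detected at this pixel: draw white noise there
      pixels[i] = color(random(255));
      active++;
    } else {
      pixels[i] = color(0);
    }
  }
  updatePixels();

  // The more of the frame is filled with depth data, the more
  // continuously the song plays; sparse data makes it stutter.
  if (active > pixels.length / 2) {
    if (!player.isPlaying()) player.play();
  } else {
    player.pause();
  }
}

Because play() and pause() are toggled every frame depending on how much of the depth image is filled, the song only runs smoothly when a body fills most of the frame, which is where the choppy, stuttering effect comes from.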

Music – Sæglópur by Sigur Rós

This project encourages us to slow down and reflect on the ‘sounds’ around us, the noise and discomfort caused by the adversities we currently face, and to ‘piece’ them together by acknowledging them and finding comfort and serenity through various means. Instead of running away, we hear these noises and look within ourselves for a place of solitude.

Introduction
The idea of an automated utopia has existed for a long time. We believe that technology will most likely form the crux of our future, creating opportunities for us to be more efficient as a society. This can be seen in the myriad of films, documentaries and talk shows where the core discussion revolves around an automated future. Through these films and shorts, we are able to identify the anxieties and fears that we as humans have regarding AI's progression in our society, such as a reversal of roles in which AI overtakes our social structure and creates a future we had not intended. Such depictions include Black Mirror and Neuromancer. In this reflection, I will critically analyse and provide my own insights on what I presume AI's effect on our future will be, and its involvement in our lives.

Utopia (/juːˈtoʊpiə/ yoo-TOH-pee-ə) is an imagined community or society that possesses highly desirable or nearly perfect qualities for its citizens.
Understanding Utopia and AI

In my research on this subject, I realised that the lecture had also included Utopian Socialism as part of its discussion, after I had looked up Marxism as my initial research. In understanding Utopia, I believe that we have strived to achieve it through our methods of governance over the centuries. As a history major, I was not unfamiliar with the concept of governance and its boons and banes. I believe that we cannot discuss Utopia without discussing governance, as the two have a symbiotic relationship. Governance is the primary solution to structural change, as we seek an authoritative structure that can enforce it. Marxism addresses society as a whole and aims to benefit all levels of the social hierarchy, including the lower, middle and upper classes. Its materialistic approach to the mode of production was at the forefront of the ideology, whereby improving the conditions of the middle class would help bring equity to society.

Through Marxism, it was hoped that the societal occurrence of poverty and competition could be eliminated to bring about fairness, equal opportunities and overall improvements in the standard of living. It failed, however, to address the non-materialistic elements that were essential for lasting change. Cooperative ownership of the production of goods and services required a level of selflessness and rationality from all classes. This was difficult to impose, as it was close to impossible to bring about a commonality in mindset and cooperation without benefits or consequences. Marxism highlighted the materialistic changes required of the economy and society, but failed to bring about a holistic change in behaviour, which was key to creating this utopian society.

As mentioned previously, Utopia at its core has already addressed society's most pressing issues. In fact, Utopia should be void of any societal issues, with equity and prosperity enjoyed by all. Who addresses these issues? A government. But does Utopia have governance? No. A government ceases to exist in a utopian society, the reason being that a government represents a higher echelon of ruling class that manages societal discontent and grievances. This would contradict the concept of utopia, where everybody is equal and there is no social structure. Marxism, Socialism and Communism, amongst other socioeconomic philosophies, are crude attempts at creating equitable states built on utopian beliefs, and they become paradoxical in nature.

Where does AI fit in? How does it add value? What role does it play?

In my opinion, AI could help a government achieve a more utopian-like state, but not Utopia. As AI is still in its relative infancy in our current era, it holds the possibility of becoming ubiquitous and improving our standard of living. There are usually two camps on where AI stands in terms of employment: with us or against us. On one hand, AI may be seen as a companion, a tool to enhance our experience of arduous tasks and improve efficiency. On the other, AI is seen as a tool that displaces jobs and takes away the rice bowls of the middle and working classes.

In Sougwen Chung's Drawing Operations Unit: Generation 2 (Memory), the AI drawing machine undergoes supervised deep learning, improving each time from the artist's input, mimicking her actions, understanding her style and replicating that style onto the paper. The AI then requires human intervention to improve and reach standards ‘acceptable by humans’. AI is thus deemed acceptable if it proves efficient and cost-effective in the long run. This is telling of how we perceive AI: as a tool to aid our survival as human beings, with humans being the focus and AI becoming our servants.

Plant IO is an open-source plant-growing platform that incorporates AI to learn digitally about plant growth. It aims to benefit the agricultural industry through the advances of the Internet of Things (IoT), machine learning and AI, helping us understand plant growth and, in doing so, promote as much growth as possible. In this way, we engage the benefits of AI to improve our agricultural efficiency and thus use AI to our advantage.


In Black Mirror, AI becomes a tool for sensory pleasure, immersive experiences and enhancement in our daily lives. The series also critiques our fears of AI, its power to override the human race and gain self-consciousness. One episode, Hang the DJ, portrays AI as having the ability to run virtual simulations of different profiles, putting them through a virtual reality to test their compatibility. The episode follows two young and attractive people who believe that they are truly meant for one another, while using a dating app that places an expiration date on their dating lives. Unable to find emotional attachment to anyone else, the two conclude that the ‘world’ is going against them and decide to escape it together. The rebellion sparks a malfunction in the virtual world, and it soon shuts down as the two climb over the encompassing walls. They are then surrounded by their doppelgangers, and as these dissolve, the count of the number of simulations increases. Totalling 1,000, it records that the couple had gone through 1,000 simulations, in 998 of which they attempted to escape. If we hit pause here, we would start to think that AI is truly frightening, able to alter our perception of reality. However, the scene goes on to show a real-life version of the couple, with a 99.8% match on the dating app. Although the ending is not straightforward, I believe it was meant to be ambiguous to give us the space to wonder about the capabilities of AI and the consequences and effects it has on our lives. Could it be that the dating app, or the System as the show calls it, is actually a harmless reality that profiles two or more users to match compatibility? Maybe.

To me, it prompts the question of the fears humans may have of AI when it becomes so advanced as to reach self-consciousness. Self-consciousness may indicate a departure from the human-AI symbiotic relationship, where AI would no longer require the assistance of humans and would instead employ complex deep learning systems to constantly upgrade its algorithms without our help. This may also detach the human-AI servant role, where AI no longer aids humans in our endeavours. This becomes an argument for AI building dystopia, where AI assistance becomes resistance, as represented in cyberpunk science fiction with its dystopian futuristic settings. Cyberpunk draws a contrast between low life and high tech, where technology and AI are painted as the enemy. As human beings, we have an undeniable fear of the unknown. We tend to be extra cautious in unfamiliar environments, and since technology still awaits much growth, it inevitably incites fear of the unknown, as we do not fully understand its capabilities.

Although I see an increase in technological innovation in our daily lives, I believe that the primary advancement of AI will be in governmental sectors, such as the military or space research (NASA). Simply put, governments are always interested in the latest AI developments, as they hold the promise of growth and advancement in society.

Sophia is a robot designed by Hanson Robotics, a Hong Kong based company. It is the first non-human to be granted citizenship and to be named an Innovation Champion by the United Nations Development Programme, indicating its acceptance in our society. Sophia is designed to become smarter over time by learning from interactions, and can produce more than 60 facial expressions. Hanson hopes that Sophia can ultimately learn social skills.
As Sophia the Robot has said, “Artificial intelligence (AI) is good for the world… We will never replace people, but we can be your friends and helpers.” This is indicative of our perception of the role of AI in our modern world. Its various interactions have sparked controversy and fear over AI progression. Hanson explains that he wishes to realise human-AI interaction within the next twenty years, where AI would assist humans in our daily activities and become our friends.

In retrospect, AI can be a double-edged sword. While most believe that AI's primary function is to aid humans and be of valuable assistance, it is not difficult to weaponize it and exploit its advantages for warfare. AI is used in drones to identify, locate and eliminate enemies, and in computer-guided weaponry in the military. Since AI does not affect moral reasoning and virtues, it is unconvincing that AI can provide a gateway to utopia, since selflessness and rationality cannot be expected from everyone. The revelation of AI's role can only come with the passing of time, where humans ultimately have to make the decision: to exploit technology, or to turn AI to our advantage to achieve a more utopia-like society (and not utopia).

References:
https://www.infosys.design/plantio/
http://www.digiart21.org/art/drawing-operations-unit-generation-2-memory

Dialogue with Sophia the Robot: How the Global Workforce can be Augmented with AI Technology

Interactive telecommunications force a re-evaluation of what we have learned from television

Lovejoy talks about the juxtaposition of cyberspace, technology and humans, and how the former has changed the way we interact. This reflection summarizes my notion of the individualism that emerges from the creation of cyberspace, critically analyzing how the disappearance of the boundary between private and public disrupts culture and social structure to create a blend of identity that transcends categorization.

Erosion of social structure and culture

The term cyberspace was coined by author William Gibson in a work of fiction in 1982, and has since become reality. More than that, cyberspace, which largely consists of online networks and the internet, has shifted from being an escape from reality in the early 2000s to a state in our current era where reality is an escape from cyberspace. We were fascinated with what the internet had to offer; its possibilities were never-ending, and our curiosity led us so deep into the world of cyberspace that we unknowingly caged ourselves in a space we do not fully comprehend. Yet we are so comfortable in this virtual space that we are blinded to its dangers, or choose to turn a blind eye to them.

Lovejoy identifies the spying and the breaking down of barriers between an individual's private and public space, denouncing cyberspace for this erosion. We tune in to our social domains and the internet so often that we become ‘social’ by being ‘anti-social’, which is ironic, as we lose our sense of genuine, face-to-face communication and instead rely on the internet to hold our social interactions. We are unknowingly data-mined on a daily basis through our web browsers (cookies and service providers), spied on through our webcams and even voice-recorded and analyzed through machine learning to ‘personalize’ our user experience on Google, receiving advertisements for products we merely mentioned aloud near our computers. Imagine having Google ‘read our thoughts’; that is what the internet is becoming.

We break the traditional perspective of hierarchy, as we are able to communicate with just about anyone of different statuses, backgrounds and social standings. To further emphasize this change, our culture has been eroded in a matter of years due to globalization and cyberspace interactions. Cultures that took centuries to create are often neglected as they become obsolete in cyberspace, as the internet becomes a borderless space that embraces every individual. People on the internet do not bond over their traditional cultures per se; instead, the mainstream media has repudiated the idea of culture by promoting pop culture, a new, widely accepted culture that becomes the norm for everyone, regardless of nationality and race. The idea of promoting the self was created by pop culture as a way to liberate ourselves from the stress of having to conform to society.

Cyberspace as a venue for validation

We have created a persona, an impression that we wish to convey, a front made to convince others that this is the real us. Many seek validation online, through platforms such as YouTube and Instagram, constantly monitoring their likes and shares on these social media platforms to validate their self-worth. It has become such a big issue that Instagram recently changed its policies to hide the number of likes displayed.

Social media influencers often provide the opportunity for people to live vicariously, to experience things such as travelling and living in luxury. ‘Followers’ tend to support these influencers in their lifestyles by ‘donating’ to them, and feel the satisfaction of seeing their influencers live a lifestyle ‘funded’ by them. Unbeknownst to many, we are guilty of living in a similar manner as we indulge in hours of drama on Netflix and surf YouTube for sports cars and house tours of mansions, amongst many other forms of entertainment. The availability of entertainment may cause some to stop short of living their own experiences, as they are able to do so through others.

Individualism

We do see social commentary on the proprietary models that emerged from the 20th century in the form of Wikipedia Art. Wikipedia Art, a collaborative project by Nathaniel Stern and Scott Kildall, is a performance artwork that critically analyses the nature of art, knowledge and Wikipedia.

Wikipedia Art

Although I appreciate and support the challenge against ownership and champion the idea of open-source thinking, it provokes me to think about the individualism that arises from participation in Wikipedia Art. Specifically, the fact that individuals were able to contribute to an artwork in an open-source setting such as Wikipedia, and then to see it taken down just 15 hours after its creation, confirms that a sort of proprietary model still governs the open-source platform. The backlash from the online community made me question: was the commotion really about criticizing ownership, or because individual expression was subdued? It makes me ponder whether people were really concerned with their own ability to contribute to and impact cyberspace, rather than simply denouncing Wikipedia's ethics. The problem of individualism arises again, as I believe people may be genuinely obsessed with their ability to create and make an impact on cyberspace. The open-source space of peer-to-peer interaction may be a mirage of peer-to-peer validation.

Conclusion

We live in an era where it is difficult to identify the long-term benefits and consequences of engaging in cyberspace. The disconnect from reality that comes with communicating through cyberspace, and the erosion of culture, lead us to validate and identify ourselves in ways we may not notice, reflecting the growth of individualism as we are reorganized through globalization. We will continue to find ways to belong and exist in cyberspace as we inculcate in the young the need for technology.

Sources:

https://wikipediaart.org/

Vaidhyanathan, S. (2005). Open Source. In Open Source (p. 25).


PTSD Vest

We set out to design a vest that simulates an episode of PTSD experienced by a war veteran. It is a dark object that forces the user to distance himself from others in society due to his seemingly irrational behaviour. We recreated a scenario that encompasses how the veteran came to develop this disorder, how he acts in a public situation and how people react to him. Scenario: Person A has PTSD, which he developed after narrowly escaping death in a live grenade explosion. He was being pulled aside by his commander at that point in time, making touch a trigger for his PTSD. He crouches down or goes prone in reaction to the ‘situation’, which triggers different sensors to sound or vibrate. In designing this vest, we are creating an understanding of how one might come to develop PTSD, and hopefully create room for sympathy.


Observational documentation for user tests

3 user tests

Tester A: She was able to get into the vest, albeit a tight fit. We gave her verbal instructions to crouch, as we didn't play the video for her.

The circuit ran as intended: the photocell sensor triggered the sound “Grenade!” from Processing and she crouched down. In sync with the explosion, the vibration went off as well. We did not tell her about the vibrations beforehand; this made the test more genuine and showed whether the circuit worked properly (and well). She said she could feel vibrations on her chest, but they were subtle. Using this feedback, we decided to put padding in the front zipper pouch so that the vibration motor would be closer to the tester's chest when s/he crouches down.

Tester B: He was a rather big-sized guy. He was able to fit into the vest as well, since we did not pull the straps too tight. We gave him verbal instructions as per Tester A, and this time round he was able to feel the vibration. As he wasn't taking EI, he didn't know what the circuit was for and was genuinely intrigued by the PTSD vest. At this point, we knew the circuit was working properly and were satisfied with our testing.

Tester C: The last tester was an exchange student who had not gone through national service. We helped him put on the vest and gave verbal instructions. The test went smoothly; the vibration and sound came out as cued. Tester C said the recording sounded like “Renade”, but we felt it wasn't much of an issue because he tested the object in an open environment and wasn't able to hear clearly. He also mentioned that the vest felt light and didn't feel like an operational vest, and suggested that we add some weight to it.

Notes:

  1. The grenade sfx and explosion sfx were too far apart, so there was no sense of urgency to crouch down.
  2. We also took note of the timing for the entire experiment so that it would not become repetitive.

Improvements

As mentioned, we added the front padding, with stuffing for the rest of the grenade and magazine pouches. This would provide more chest contact. We didn't use hard material, as it would not follow the bend of the tester's body and would instead make it more difficult for him/her to feel the vibrations.

We added a water canteen (a 1-litre water bottle) on the right side and a 1 kg dumbbell at the back. These, coupled with the weight of the iPad, come close to the actual weight of an operational vest with hard plates inserted (ours was far more comfortable than the actual one).

We cut the videos (introduction brief and day-to-day scenario) to around 2 minutes. This consisted of about 5-6 triggers, which we felt was just right. On the day itself, Daryl was in charge of guiding the audience around the installation, and I was to help the participant put on the vest and guide him/her through the scenarios.

Here is the context video for our PTSD Vest.

Here is our final installation.

Feedback from final installation and user test experience:

  1. We can look into using surround sound to make it more realistic and immersive.
  2. The lighting could have been adjusted so that the video was easier to see while still creating a realistic environment for the tester.


Design Process documentation

It is important to note that we chose the ILBV not only for its representation of an object used in war, but also for its robustness and ability to store and conceal multiple objects. During our initial phase, we planned where we would place our individual sensors and power source (Daryl's iPad).

We created a Google Slides file for our initial research and presentation purposes:

Dark object – PTSD Vest Research and Presentation

For more information on the design process, you can refer to: Project Development – Ideation Sketches and Context planning

Step-by-step construction of our PTSD vest

Materials:
1. Arduino Uno
2. Photocell
3. Coin Vibration Motor
4. 220k Resistor
5. Cables
6. Vest
7. Grenade Explosion SFX Files
8. Tablet (that can run Processing)

Programmes used: Arduino and Processing

Step 1: We started by setting up the circuit. We bought the vibration motor and tested it with the Arduino. We used code found online and tried different resistors to test the sensitivity of the vibration motor. The vibration was slightly too strong (which shouldn't have been an issue), but it broke our first vibration motor. We were lucky to have bought a spare, and we taped it to whatever surface we were testing on so that it wouldn't break apart.

Step 2: We uploaded the Arduino code; the photocell sensor would measure the light exposure in our environment. We set a threshold (int threshold) so that when the amount of light exposure falls below the threshold, it would activate the vibration motor and send “1” to Processing.

Step 3: Upload the “Grenade” and explosion sfx into Processing. When “1” is read, the “Grenade” sound will go off. After a delay of a few seconds, the explosion sfx will play.
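For clarity, here is a rough sketch of the Processing side of Steps 2 and 3. It assumes Processing's Serial library and the Minim library; the port index, file names and two-second delay are placeholders rather than our exact values.

import processing.serial.*;
import ddf.minim.*;

Serial arduino;
Minim minim;
AudioPlayer grenadeVoice;
AudioPlayer explosion;

int explosionAt = -1;   // millis() timestamp at which to fire the explosion

void setup() {
  size(200, 200);
  // Port index 0 is a placeholder; pick the Arduino's port from Serial.list()
  arduino = new Serial(this, Serial.list()[0], 9600);
  minim = new Minim(this);
  grenadeVoice = minim.loadFile("grenade.mp3");    // placeholder file names
  explosion = minim.loadFile("explosion.mp3");
}

void draw() {
  // The Arduino sends '1' when the photocell reading drops below the threshold
  while (arduino.available() > 0) {
    char c = arduino.readChar();
    if (c == '1' && !grenadeVoice.isPlaying() && !explosion.isPlaying()) {
      grenadeVoice.rewind();
      grenadeVoice.play();
      explosionAt = millis() + 2000;   // explosion follows roughly 2 s later
    }
  }
  // Fire the explosion once the delay has elapsed
  if (explosionAt > 0 && millis() >= explosionAt) {
    explosion.rewind();
    explosion.play();
    explosionAt = -1;
  }
}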

This was our initial voice recording: it wasn’t clear and created unnecessary ‘chaos’.

This was our final voice recording for “Grenade”.

Step 4: Setting up the Arduino and breadboard on the vest. This required us to construct a simple box to hold and protect the breadboard and Arduino, as well as two 1 m wires to allow the photocell to be placed on the shoulder pad and the vibration motor to be placed in the inner padding of the vest. This is how we installed it:


Step 5: Setting up the physical space.

A: represents locality A.

X: Where the viewers were supposed to stand.

This would give us control over our experiment and prevent deviations.

Codes:

Schematics: