First prototype of Anchorwatch (renamed cos “Vibrawatch” sounds weird) (very lo-fi)
3D model of Anchorwatch (to be 3D printed)
3D model of eye-to-eye periscope (to be 3D printed)
A very small number of sketches to visualise the Weather Sensograph; not enough concrete stuff to show everyone… :’)
Anchorwatch Prototype 1
Did a super ratchet prototype to test things out and quickly kickstart my work. I cut a paper plate into the shape of a watch and used masking tape to hold the circuit together.
Circuit consists of:
1x Arduino Nano
2x 3.7 V 1500 mAh LiPo batteries connected in series
1x switch
1x vibration motor
1x 220 Ω resistor (to reduce vibration strength)
Code: just a simple if/else statement with a millis() timer. Also, ignore the values I set for minutes and seconds; I was just testing and stuck with 5 seconds for the “minutes” variable (mentioning this to prevent confusion, like how every “second” is actually 2 seconds!)
This is definitely not gonna work, even though it technically works. Nobody wants to wear this, and it will definitely fall apart soon. I’ve made a 3D model and will start 3D printing tomorrow so the watch has a stronger structure and I can start wearing it properly.
Not sure how it will fit my hand; will try it out asap. Also, I realised I forgot to include a part for the switch, so I guess I’ll find a way……
Anyway, to recap the idea of this watch: it’s a watch that vibrates every 5 seconds (as of now; I’ll adjust accordingly based on user testing, aka on myself), anchoring the user to learn the interval subliminally, so that they can eventually time the interval without any aid or tools like a watch.
Eye-to-eye Periscope 3D model
I’m gonna 3D print this as well so there is a proper, strong object to hold. I think I’ll just make a proper mockup of this so I don’t lose the idea, even though it’s just a sensory experiment.
More ideas! More things to start…
Weather Sensograph
So I’m starting on a weather-sensing device which I’ll call the Weather Sensograph for now. The idea is to help us humans develop an innate weather-detecting sense, like how the Anchorwatch helps us enhance our chronoception. This is done through a Wi-Fi-connected device that uses weather API data (or any other reliable source of data; pls tell me if you know one, cos I’ve only ever used APIs). The device takes information from the API and converts it into kinaesthetic feedback (the feedback I’m thinking will be appropriate… for now). This new sense will help us evaluate weather better, on top of just relying on our senses of sight, smell, and hearing (which aren’t applicable when we are indoors, which is most of the time).
I explained the idea to Mark and he made a good point: the device would be more helpful if it could also detect information about the weather that we wouldn’t know without it, for example, where the rain is and when it will rain. I think giving access to this information would be a more interesting approach for the device, but I don’t have much idea how to do this as of now, as I’d need to know the direction of the approaching weather relative to the user’s standpoint, and which way the user is facing, for this to work. So far, weather APIs (if I’m not wrong) do not provide that info.
I think one way to do this is to have some kind of seismograph-type feedback to weather, to help us sense the weather’s intensity and direction. Mark suggested that I look up the hair-tension hygrometer. (THANK U MARK U R GR8 HALP)
I think it’s this thing? Image taken from https://manual.museum.wa.gov.au/book/export/html/89
Sound Watch
The idea of this is to change the way we tell time, using sound rather than visuals on a clock face. Recently I saw Aisen Caro Chacin’s Play-A-Grill and thought about bone conduction.
Play-A-Grill drawing. Image taken from http://www.aisencaro.com/play-a-grill.html
I’m thinking of mapping the hours, minutes, and seconds to different frequencies or sound cues, layering them, and using bone conduction to let us hear them. This will help us:
intuitively tell time with sound cues alone
maintain constant awareness of time, since the sound always plays while the device is worn
help the visually impaired
Imagine the applications once everyone learns this new way of telling time. Clocks around the world could also be tuned to the frequencies and played out loud so everyone can hear them (although this would be devastating for animals that rely on hearing, not to mention the power supply).
Money Sensor
One of my more applicable ideas is this. Money has become a necessity, much like food. We can feel hunger and thirst, but not how much money we have. This could be worth exploring, but I’ll work on the top few ideas first and come back to this one once I’m done.
Third-I
Looking back at my Interactive Devices project, I think I can try to simplify it and make it part of this series of devices.
A common theme?
The ideas I have so far are only connected by one factor: an “I think they will be useful” statement. I haven’t linked them to the big-picture concept, which is the cyberpunk future. I think that will always be at the back of my mind, but like I said before, I’ll just start making all these ideas a reality first and see what I can do afterwards. These prototypes don’t take long to create anyway, so I can just keep making and think along the way.
What’s next?
Finish 3D printing Anchorwatch v2, finish second prototype and start wearing it; evaluate and adjust it along the way.
Work on Weather Sensograph, and then Sound Watch, then the money sensor
Continue ideating and read journal articles when I’m burnt out from making. Also, when I’m free or able to multitask, I’ll watch videos and sci-fi films.
Continue thinking about the concept along the way and try to tie things together, cos right now everything feels very disjointed
Start planning to interview people: I have this idea to interview or have conversations with people who wear implants or are interested in the topics of cyborgs, futuristic concepts, and speculative design.
Moodboard not done (rescheduling it to focus on later, as it’s not important right now after reviewing)
Sorted out and filtered journal papers that are relevant
Made eye-to-eye periscope, working on vibrawatch prototype
Read Sensory Pathways for the Plastic Mind by Aisen Chacin
Realised I’m doing something very similar to what she is doing, but with different intentions. She wanted to critique the idea of sensory substitution devices as assistive technology, and to normalise the use of these devices. For me, I want to go beyond normalising: I want to pretend that it is already the norm and imagine what people would do with it.
Next focus
Finish vibrawatch prototype
Sketching of ideas so I can present them and visualise them
Play with creating said ideas and test them
Read when I have time
Focus on sensory experiments and exploration with current few ideas
I have changed my focus from research to hands-on work after talking to Bao and Zi Feng, who gave me a lot of good advice!
Cyberpunk
I went to research a bit deeper into the cyberpunk genre. I did this by reading Tokyo Cyberpunk: Posthumanism in Japanese Visual Culture, which, to be honest, was not very helpful, as it was a lot of text deeply analysing different films and animes that tackle different cyberpunk themes. What I found useful were the references to certain films and animes that I can search online for video clips or photos to refer to, at least in terms of aesthetics.
After reading the book, I realised there is more to the genre than just its aesthetics (I mean, of course; I just didn’t really think about it). I went to watch some videos on the topic and came up with some insights:
The cyberpunk genre revolves around themes such as mega-corporations, high tech low life, surveillance, the Internet of Things, identity in a posthuman world (man / cyborg / machine: where is the line drawn?), and humanity and the human condition in a tech-driven world.
The aesthetics are not the main point of the cyberpunk genre, as I previously thought. It is all about the themes; that’s why many films that are “cyberpunk” may not have the iconic aesthetics of neon holographic signs, megacities, flying cars, etc. The cyberpunk genre can exist in the era we are living in now and does not necessarily have to be set in the future.
It might be more realistic to look at the current world and set the “look” accordingly, rather than referring to the 80s vision of the future, aka what most cyberpunk depictions look like.
So why is this info useful for me?
I think it’s good that I actually tried to discover what cyberpunk really is and at least got a brief idea of it. I can try referencing certain themes that apply to Singapore and see what works best. What will Singapore be like if it is set in a cyberpunk world? This doesn’t only affect the aesthetics of my installation; it also affects the themes I’ll be covering, which can narrow my scope a bit more. In the end, I hope the installation will be realistic and aesthetically convincing as well.
Also, I got my hands sort of dirty
I had the idea of a periscope that lets you see your own eye. This is more of a perception experiment than anything really important.
As a thought experiment, I was wondering what we would see if one eye sees the other, and vice versa. Turns out, nothing special. You see your eyes as though in a mirror, except the image is not flipped. I can’t think of an application, but it is interesting to play with. If you close one eye, you can see your closed eye with the other eye.
I showed this to Zi Feng and Bao, and Zi Feng mentioned that if the mirrors were the scale of the face, you could see your true reflection with this device, with only a line separating the middle. I think that’s an interesting application: a mirror that shows your actual appearance (how other people see you).
But yeah, it’s not anything super interesting. I just wanted to start working on something haha.
Anyway, I’m also working on the vibrawatch, but it’s still WIP.
What’s next?
I’m gonna start experimenting with making sensory prototypes. I’m also starting on a device that responds to weather immediately after I’m done with vibrawatch.
I want to apologise for the long post. I treat OSS as my platform for a filtered train of thought. I’ll be updating my thoughts on my own Notion page as well, but I prefer writing here so I can write something proper that can be posted.
Progress Thus Far:
Finished reading Cybercognition (not useful)
Finished reading Sensory Arts and Design (useful)
Finished reading Design Noir: The Secret Life of Electronic Objects (very useful)
Watched Akira (not very useful)
Also, I’m tracking my own progress on Notion so I guess I’m opening it up to prof and friends here to take a look:
Reading helped me get a deeper understanding of the topics I’m covering. I’m clearer on what I want and what I don’t. I won’t be reviewing the books in this post cos, to be honest, some of them weren’t helpful at all and were skimmed within an hour.
The only ones I found useful were a chapter in Sensory Arts and Design about extra senses, and the many interesting concepts, approaches, and works from Design Noir. I really like Design Noir, as the approach to “noir” design taken by the book really resonated with me. I think it helped me decide how I will approach the different devices in my FYP.
I didn’t have enough time to read, which was quite expected. Anyway…
Throwback to 2 weeks ago: I wrote that by the end of these 2 weeks, I would have:
A good grasp of the concepts I’m trying to use, and an understanding of how artists apply these concepts to their work. This was done through all the readings.
My answer: I did and I’m happy 😀
Made enough observations and gathered enough ideas to start creating a proper picture and narrative of what my work will be like. This was done through 2 weeks’ worth of observation and ideation.
My answer: I guess I did, but I’m not happy with how little I thought of
So I can say my goals from last week were completed, even though the tasks were not.
Some thoughts probably induced by coffee
Balance out humanity with tech: why do we prefer drinking coffee to directly injecting caffeine? Isn’t injecting more sustainable? But drinking coffee adds so much value to our humanity, because it is satisfying to drink, it encourages social behaviour, and the process is enriching to our souls
What can my products do to enrich people’s souls? Or at least, retain their soul?
Complex processes help us feel productive and fulfilled. That’s why musical instruments and tactile objects make us feel good. It must be tangible.
When we can feel something, we feel connected to it. Manual cars vs automatic cars: when someone else does it for us, it’s great in terms of productivity, but it lacks a certain value. That’s why we love analogue and not digital.
Then, how can we simulate an analogue response in a digital context?
It’s also relevant to think about when our minds code-switch. When we are on a digital interface, we want what’s smoothest. When we are on a physical interface, we want to work for it (only if it adds value)
To sum it up: if it adds value to our life, we want to do it the manual way. If it doesn’t, we want someone else to do it for us.
There is a need for both stimulation and simulation for sensory replacement to work. It can be subconscious, but it must include active input from the brain. This can be seen from me dreaming of Overwatch. It can also be seen in how we bring our hand up to check the time, adjust our spectacles, or wake up to turn off an alarm. It. Must. Be. Conscious.
From this thought, I kinda know that my devices have to let their users actively do something, to work for the information they want. That said, Vibrawatch might not be cut out for this cos it’s very passive.
Updates to Concept
New Inspiration
As the weeks go by, and as I continue reading, I realise my focus has shifted from something more technical to something more conceptual. Instead of wanting to do something relating to sensory substitution and all the specific phenomena, I started thinking further about why I want to do that, and what I’m trying to get at. I started out curious about my alarm problem, then thought about how this can affect the way we use objects to integrate with our senses, then about how we can imagine this type of technology fitting our needs in the future.
I realised that sensory substitution or addition is only a supporting part of my concept (although I still want it to be something all my devices have)
For now, I’m interested in this thing called “Notopia”.
Notopia is a term mentioned on the first page of Dunne & Raby’s book Design Noir. I can’t find it anywhere except a few architectural articles that define it as:
“a consequence of the cold logic of market forces combined with a disinterested populace”, characterised by a “loss of identity and cultural vibrancy” and “a global pandemic of generic buildings”
In the book, it is a state of the world where we are given the illusion that technology is the solution to every problem, ‘force-fed’ by ‘corporate futurologists’; where, as technology develops, human behaviour continues to be controlled and predictable, reinforcing the status quo instead of challenging it.
This idea reflects, in my opinion, the reality of consumerism now: products designed to fit our needs and, to put it crudely, to pacify us. We value convenience and ask for products that help our situation within the status quo. Realistically speaking, this works because we can’t suddenly change our behaviours.
But looking to the future, how can this affect us? Can not doing this affect us?
In the book, the response to Notopia is to subvert the use of everyday products by hacking and abusing qualities of those products that may not be intuitive at first sight. Dunne and Raby created the “Placebo Project”: 8 devices that create a placebo effect, letting people feel comforted around objects that give off electromagnetic waves (which many people think are harmful to us). This is done through the subversive use of everyday technologies, like lamps that switch on when near heat, or compasses embedded into tables that detect magnetic influences.
The project was done with a separate intention (to study “the secret life of electronic objects” in the interactions), but thinking about it, I think there is some relevance to re-imagining new ways of using existing objects.
I thought this concept was relevant to my project, but now I’m having doubts. So let me think it over while I go through what I know for certain I want to do.
Properties of my concept
My current space is a “cabinet of curiosities”-styled room of a not-so-far-future individual.
The inhabitant of the room is a Singaporean youth in the not-so-far future.
The aesthetics I’m going for are inspired by the cyberpunk sci-fi genre, as it fits the themes I’m covering (posthumanism). It will be the more “hyper-city” type of cyberpunk rather than the grungy kind; more “utopian”.
There will be elements of Singapore hinted at in the room that create a sense of home, but also feel foreign.
The devices I’m making will be devices for the sensory-augmented humans of the future. They must work. They must be interactive (visitors can wear them).
Updated one-sentence description
[WIP]
Keywords I will be using
Sensory Substitution / Addition
Embodied / Embedded Cognition, Enactivism
Neuroplasticity
Speculative Design
Neuro Linguistic Programming
Anchors
Critical Design
Devices
I want my devices to be non-invasive
I want my devices to have sensory elements, either augmentation or substitution, that may or may not alter perception
My devices must relate to a current-day anchor, like an evolved form of it
My devices have to be interactive and wearable
Goals for next 2 weeks:
Recess
Read Cyberpunk Tokyo
Read journal and scientific articles that may help me expand (I know I’m doing too much reading and too little doing. I will watch myself)
Updating, refining of concept and clear arrangement of information.
Moodboard (really start on it!)
Rationale: I just feel like I might miss some information if I don’t finish reading what I set myself to read. But yes I will definitely watch myself now and not let myself go like the last 2 weeks.
To me, establishing a clear and good foundation is important. I want to sort everything out properly as I’m still feeling uncomfortable about some parts of my concept.
Week 8
I will have a clear idea of everything by now. So clear that I’m confident.
Create first prototype of whatever product I’m making
Start ordering things I need
Finish up all research (this doesn’t mean I’ll stop researching, just that all the backlogged research has to be completed)
Goals by end of these 2 weeks
Talk to seniors, profs, etc. for feedback on my concept and ideas
(maybe) start thinking of interviewing people who know the subjects I want to cover, as well as people who use devices that substitute senses
I will get my hands dirty finally
I will be able to confidently tell people what I’m doing for FYP
Long-term goal (timeline)
When all my research and ideation is done, I will be going into prototyping. During October, I will be doing a lot of prototyping and testing, hoping to get conclusive results by the end of October to go into November AKA Phase 2 for me.
In phase 2, everything will be more urgent and critical. Real products will be made. Plans for grad show will be made.
In phase 3 (February), I should have most of the stuff I need ready and working. This is where I get all the refinement, writing, exhibition, competition, admin stuff, etc. done.
Not much happened. Here are some small scale mockups I made to visualise the pattern to cut on the actual piece:
I used Illustrator to create some patterns. I have yet to really test them out properly, but I’m planning to trace them onto drafting paper (which I forgot to bring home 🙁 ) and test those, instead of testing on the actual PVC leather first, because it’s EXPENSIVE.
Chest piece / Shoulder piece and mask / Belt / Hood and eyepiece / Neckpiece
Here are some progress with the shiny PVC leather:
Hi yes I’m wearing the Mcdonald’s pyjamas
I only managed to make a mask. Put all the electronics on it and it works! Video is on my final post.
Electronics
I documented before but I’ll reiterate.
I used code that pulls info from AirVisual, an open API that lets you collect air quality info from the internet.
This code collects the data / This code prints the data out for me to see its values
It works, but as the weather is fairly constant, the results may not be the most exciting for a fashion show. As such, I created a virtual button on Adafruit.io to toggle the effect from “green” to “red”. Green represents healthier air quality, while red represents dangerous air quality.
Servo motor code / Code for the virtual switch to work on my device
Final setup:
More to be done… This is only the main part. The motors will be on the chest piece, and I need 2 more pieces of LED strip for the shoulders.
The LED light here is for the arms, and I also need 1 more piece for the arm.
I already have the materials so I’ll continue working on it over the sem break or even, if I need to continue after sem starts.
Adafruit.io interface showing the values of pollution, temperature, and the switch position
Hi so I’m just gonna spam some photos here as a form of update. This is for documentation for myself.
MATERIALS EXPLORATION / RESEARCH
I went online to look at some materials for my armour. I want to find shiny iridescent stuff cos I can’t get enough of it. I think it’s a weird phase, and I hope it works well with my other pieces, cos I really don’t know what I want and I’m just winging it with shiny stuff…
SEE SO MANY COOL STUFFS!!
CLOTHES I BOUGHT:
Duotone Rayon (brown / green): $6/m at 90 Arab Street, Gim Joo Textiles.
Duotone polyester (green / yellow): $6/m at 90 Arab Street, Gim Joo Textiles.
Beige cloth (IDK WHAT MATERIAL): $6/m (i think) at Gim Joo also.
So I bought some of those, will keep them as samples and hopefully show my collection of samples sometime!
FOUND COOL BUG. INSPIRING MY CONCEPT!! MORE SHINY STUFFS
TRYING DIFFERENT WAYS TO COMPOSE PART 1
Didn’t like it that much. The duotone satin is too feminine and flowy, and doesn’t fit the rugged desert-jumpsuit look. The rayon is nice close up, but from afar it looks like an ugly brown cloth. I didn’t like it. The two don’t combine well either.
COMPOSING PART 2 (digital)
I tried looking at illustrator to help me:
I think solid colours work the best. And I think beige works well.
COMPOSING PART 3
I’m indecisive. I think beige works best though, so I’ll run with it first. Meanwhile, shiny stuff bought from Spotlight: $18/m 🙁
BROWN BASE:
BEIGE BASE
OTHERS (I don’t want to rotate the images anymore; it takes too much work cos OSS isn’t great with portrait photos. So I’ll leave them here sideways, since I’m more interested in seeing the beige and brown)
I got a lot of feedback that this looks Star Wars-y, which I think is both good and bad. Star Wars kind of goes for a space-nomad look, so it kinda makes sense for my concept to have that element as well. I mean, they pretty much have a similar concept to mine; I just didn’t realise it was similar.
Also, I like the shiny cloth as the base, but it’s too soft. As for green, I really love it too, but it does not fit my concept.
PAPER PROTOTYPING: ROUND 1 (pattern tracing)
ROUND 2: better sleeves and torso, copied from a Muji shirt that fits me really well
MY FIRST MUSLIN PROTOTYPE IS TOO UGLY TO BE SEEN SO BYE
FIRST TRY WITH SLEEVES!!!
FINISHED FIRST PROPER MUSLIN PROTOTYPE!
WHAT’S NEXT?
I think too much, so I’ll probably ramble if I keep going. I’ll just start on the actual piece this week (WK 8). Week 9 will be for working on the armour pieces and thinking through the electronics (I still need to plan out the armour pieces, so good luck to me ha ha ha), and in week 10 I’ll really start working on the electronics.
Also, this weekend I’ll go buy more cloth cos I realised I don’t have enough. I may scout for new materials for my body piece cos I’m still not super satisfied with it. So I’ll probably make 2 garments if I find a new material. (lol)
AH ALSO forgot to mention that I’ll think about adding collars and pockets after I’m done with the whole thing since I think it looks really cool la. Also want to add buttons and zippers and interfacing so… IDK MAN… I’M REALLY AMBITIOUS AND THINK LIKE I WANT TO HAVE ALL THESE THINGS BUT I KNOW I CANT HAVE EVERYTHING AND IT KILLS ME INSIDE AHHHH
ok bye
edit: ok btw im watching some tutorials
and and and… i want to update some new inspos but I think that’s a lot of extra work 😛
In the future, global warming has desertified most land on Earth. Humans continue to survive in overheated cities covered in dust. They wear fully-covered suits to adapt to the dusty and hot environment, while using wearable technology to help them detect and react to changes in the environment.
[INSERT COOL NAME HERE] is a jumpsuit that uses modular pieces of wearable technology to adapt to the hot and dusty environment.
The jumpsuit offers decent protection through its light, breathable fabric, and has enlarged sleeves that allow air in. Environment sensors (Envirosensors) are worn on the arm to track the temperature and pollution levels around the user. Other wearable parts read the sensors’ data and react to changes in the environment. This offers the user a functional and stylish way of navigating this dystopian world.
Design sketch /draft (Rough)
One of my first few sketches / further development / further development + styling / vectorising so I can create the shapes better / turned out super ugly so I reverted back to paper / new look with scarf and new design for the Envirosensor. Note: the colouring is just for me to visualise the separate parts and is not the actual colour of the garment.
This was when my design was starting to take shape. With some more sketches come these:
Did some sketches of body proportions and photocopied many pieces to start mass sketching.
An even earlier sketch, wayyyyy before I changed the form into something less cyberpunk.
One of the few that survived to make it here on OSS HAHAHA it’s ugly.
After a few rounds of sketches, I came up with this design: brown as the main colour, with translucent nylon on the areas that will have tech attached. But I still didn’t really like it.
Colour testing after visiting Chinatown’s fabric shops and discovering some sick materials. I think I’ll visit again, as I still don’t like the colours here. This was also when I discovered a new form which includes the ‘winged’ look: just some extra flaps on the sleeves of the arms and legs. I liked it. Looks futuristic and functional, and isn’t too weird or flashy.
Added some parts to the coloured piece. Still don’t like the form.
One proper drawing of the new design, where I smoothed out the flaps so they look less like they’re popping out.
Another sketch to show the different parts. Again, you can see the Illuminator is erased. This is also where I discovered a new form for the Envirosensors. I didn’t like the puffy parts of the old sensors cos they looked out of place. I had basically mashed 2 different designs together, so I took a step back, used just one of the references instead, and got this. On the right side you can also see me trying to separate the two sensors into different forms so they are more intuitive to tell apart (the wavy striped one is for temperature, while the coral-like one with holes is for pollutants).
I was kinda trying out sketching the new parts, adding colour just to make them stand out a bit more. Still don’t like the Illuminators.
Then I went to Jurong Point for my dental appointment….
This magic happened:
My brain just worked. I drew a new figure. Then, for the tech, I simplified the whole Illuminator to just perform its original function: to light up. The Envirosensors have a more defined form and now look like something I can make, since I can visualise it. I added a cape (an idea I kept thinking about) and a hoodie to complete the look. (Thanks to Jannah for suggesting the hoodie!)
So far that’s what I have for sketches. I’ll update with more sketches on the tech and more views and colours for the garment. Hopefully I’ll do what I say I’ll do………
How does the design interact?
Interaction with data: the data from the sensors changes the LED colours, and sends signals to the other parts to react according to set values
Interaction with user: the mask uses a pressure sensor to fit itself snugly onto the user without pressing on them too much.
Interaction between parts: based on the data, the parts react accordingly.
ELECTRONICS NEEDED:
Arduino Nano / LilyPad x
Temperature sensor x1 (optional)
Pollution sensor x1 (optional)
Humidity sensor x1 (optional)
Light sensor x1 (optional)
LED strip m
Conductive thread (lots)
Servo motors x4 or 6
Buttons x (for simulations)
Battery packs (9V) x
Pressure sensor x1
I’ll work out the individual electronics uses next post.
Focuses on next few posts:
Tech: all the different parts, sketches, the actual tech I’m gonna use, choices + purchases + etc., prototypes (maybe)
Material: all the different material choices, colours, and links for purchase (or documentation of the locations)
Prototypes: drafting paper to muslin to actual materials. Body measurements. Testing and testing…
After the last consultation, I was told to try building the space and test physically, as there is no way to find out what the experience will be like without testing. However, there was a lot of difficulty in booking a space, which I will briefly talk about later. After that, I decided to drastically reduce the size of the installation, because the whole experience felt clunky in my opinion and would require too much material. But that’s also untested.
I spent >$100 over the next few days buying materials like cloth and stickers, and booked some equipment and pillars to fix up the installation. Unfortunately, it was near crunch time for many modules, so there wasn’t time to set up and test physically. From that, I learnt that I must really start testing earlier and not keep everything in my head until it’s too late.
I also got senior Chris to help me film some scenes with his drone! This was done as part of my previous idea of projecting greenery vs the sunken plaza’s reflective surface, which was then scrapped. (Sorry Chris! Thanks for all the trouble :’)))) ) As I don’t want to waste the footage, I’m putting a small part here:
After that, I stopped working on the space until I was able to, which was… 1 day before the presentation.
The Birds
While everything else was happening, I was passively working on the project by collecting images of dead birds. I didn’t collect many, which was strange (but it also means fewer birds are dying, which is a good thing haha)
The Space:
There was a lot of trouble booking the space. Firstly, Bharat was often not in, so I was unable to get approval even though I had requested the space early. When I managed to catch Bharat in early November, I was told the space was to be shared with Prof. Joan Marie Kelly, who would be exhibiting her painting class’s artworks. I had to make a special arrangement with her to secure my space (I would help her class set up their exhibition).
The Setup + Final Presentation:
The afternoon before the presentation, I started setting up. I brought down the necessary equipment and logistics.
Initial setup / After putting up the cloth
Unfortunately, I did not document the form shown during the final presentation, as I did not really like it. I had already intended to continue working on it after the presentation, which is why I only have the newer pictures.
Video documentation of walkthrough:
In the above 2 videos, the participants entered from the back instead of the front, because I thought entering from the front was not a very good experience. The profs then tried entering from the other side and thought it was better.
During the discussion we brought up a few points:
The photos of the dead birds could replace the blood splat, which was quite cheesy and doesn't look good aesthetically.
Entering from the front is better, as the narrative is stronger and it is more intuitive to navigate.
The experience worked: the impact sound and the visualisation were able to show what I wanted to show. The digital implementations successfully brought the experience to the next level.
There should be variations in the knocking sound to make the experience more diverse. Also, the sound of a bird hitting a window is not the same as a regular knock.
The projection I showed is this: a compilation video of people walking into glass, with a picture of a dead bird found in ADM shown after each hit.
Further analysis
Overall, going in from the front is much better, although there should be some kind of cue to let people know they should not walk past the acrylic, and something to distract them and slow them down. This was tested with participants before the final presentation, which is why I decided to let participants go in from the back (which was actually no better).
I realised that people usually stand there to see if there is more to the video on the monitor, and I usually have to tell them to move on. So, if possible, I should let participants know that they have to exit.
The sticker-pasting part feels out of place now. It's more of a personal touch than anything related to the installation: the installation is experiential, while the sticker part is more activistic. I still kept it, as I still want this artwork to do more than just "spread awareness".
Finally, I realised that people don't really look up to see the splat. This changed once I told participants about the concept before they experienced it.
Further Improvements
After the presentation, I continued working on it after a good sleep (yay!).
I removed the area where the projection was and placed the projection in the middle of the "tunnel". The projection now falls on a piece of cloth that participants have to unveil to move on, which leads to the acrylic sheet.
This essentially halves the setup, which makes everything look less clunky.
Overall, this makes everything better in a lot of ways.
Navigation was easier: it was clearer for participants to understand the flow of interaction and the narrative.
Participants now move more slowly in the tunnel, as there is a video to watch.
The whole setup is more compact and less disjointed.
However, there are still flaws that I have to address:
the light from the projection makes the acrylic sheet visible, so the projector should be turned off when the cloth is unveiled (this behaviour was newly added after discovering the problem)
the projection on black cloth is not very visible (as mentioned by my friend Clemens), so in the later changes I switched to white cloth
the visuals are still not the best; the blood splat is still very… weird. What I did next was add an overall red hue to make the splat less off-putting, which kind of worked in bringing attention to the screen
Clemens's reaction
The projection kind of blinds the participants and reveals the acrylic sheet, which is not good
I also created a poster that will be pasted on my installation so people will know what it is about.
Reiteration of Concept
I would like to go through everything one more time, to summarise how all the elements worked (or didn't).
The concept came from an observation of birds hitting the reflective glass windows around the ADM building. Upon further research, I discovered that many birds have died because of the building's reflective glass windows. I wanted to make an installation that addresses this problem by bringing awareness to it, letting people know the solution, and asking the school to do something about it.
The installation features an experiential space alongside an activity. The experiential space is a long, narrow "tunnel" made of white and black cloth. The white cloth was intended to make the space look like a funeral. The tunnel also represents the route into the Sunken Plaza.
Inside the space, the first thing to see is the video projection, which shows found CCTV footage of people walking into glass. Each time a person walks into the glass, a picture of a dead bird found in ADM is shown. This draws a parallel to birds flying into glass, and I want it to stir some emotions in people. Watching people walk into glass is funny. But is it funny when you see a bird dying from the same thing? Through that contrast, I want to create a sense of guilt and pity. The video is about a minute long and loops.
When the audience moves on, they unveil the cloth and walk forward. This activates 2 sensors (previously only 1). One sensor turns off the video that the projector is showing, making it easier for participants to see what's in front of them. The other sensor activates the bird-window collision simulation. This happens at the front, on a monitor that shows a live video of the window behind the installation, pretending to be an actual glass window. This was inspired by an advertisement by LG and another by Pepsi, both of which featured screens that looked like windows to trick viewers. In my installation, a "bang" is heard, followed by the screen turning red and a blood splat appearing on it. This cues participants that a bird has hit the glass, and lets them understand how it sounds and feel the impact.
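The two-sensor behaviour described above can be sketched as a tiny state update. This is a plain-C++ simulation of the logic, not the installation's actual code; the sensor and field names are mine:

```cpp
// Illustrative simulation of the two-sensor logic: one sensor blanks
// the projection so the acrylic ahead becomes visible, the other
// fires the bird-strike sequence (bang + red screen + blood splat).
struct InstallationState {
    bool projectionOn = true;    // the looping CCTV video
    bool strikePlayed = false;   // the collision simulation on the monitor
};

// Update the installation state from the two sensor readings.
InstallationState step(InstallationState s, bool unveilSensor, bool approachSensor) {
    if (unveilSensor) s.projectionOn = false;   // turn off the projector video
    if (approachSensor) s.strikePlayed = true;  // trigger the "bang" and splat
    return s;
}
```

The two triggers are independent, which is why unveiling the cloth and approaching the acrylic can happen in either order.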
Once that interaction is done, the participant can leave from the side and move on to paste a sticker asking for change.
Here are some user testing videos:
Note: she didn't notice the video and the blood splat, but was startled by the bang.
Her rewatching the video
Lessons and Reflections
I learnt that in an art installation, I should focus more on the experience and feelings rather than facts, as that is more effective in incepting ideas into people.
I also learnt that when it comes to spaces, it does not have to be an actual physical space. It can be something more experiential, which I could focus on rather than creating an entire space for people to move around in (which is costly and hard to build).
I also learnt that I should have started building much earlier and used the build as a testing ground to see how the experience feels.
I also appreciate the feedback, which was all good, especially Biju's suggestion of having a glass wall that people walk into.
However, overall, I didn't really enjoy working on this project, as it required a lot of work and money. Setting up a space is really difficult, especially one as large as mine. Working alone on this is just not recommended. (There was once when my setup fell and I had to shout for help, and the photography people came to help me. I must thank them :') )
I also lost motivation halfway through the semester, as the concept wasn't that strong in terms of the module's requirements. Still, I'm happy that I pushed through, and the installation looks fine now. I guess larger-scale installation stuff isn't my thing, and I should build something smaller in future.
This device is an analog-style machine that provides full tactile and visual feedback through dials, a button, a toggle switch, an LED strip, and a computer screen. The device 'converts' coloured light into text. Users turn the dials to manually alter the colour's red, green, and blue values. After selecting a colour they like, they press the red button to 'save' it; the machine can save up to 10 colours. Users can then send the message to the computer by pushing the toggle switch down. This puts the machine into 'transmit' mode, and it promptly replays the selected colours on its screen. Meanwhile, the 'translation' happens in real time, with the translated words displayed on the computer screen. Users can press the red button again to repeat the transmission, and flick the switch back up to return to 'record' mode.
In a nutshell:
In record mode (toggle switch up)
dials: control RGB values on the LED
button: record (up to 10) colours
In transmit mode (toggle switch down)
dials: do nothing
button: repeat transmission
*Note: the labels at the toggle switch and button no longer mean what they say, as the concept has changed. Switch 'up' should mean "input", switch 'down' should mean "transmit", and the red button should be "record".
Presentation
Thanks Fizah for helping me take these SUPER HIGH RES PICS!!! And Shah for playing with the model!
Why this device?
In the context of a trade show, I would use this device as an attractive 'toy' to draw people to the stall. The manual control of the LED allows visitors to test the strip's potential while also having fun.
Personally…
I wanted to make this because I was really excited about the idea when I had it. I also wanted to improve my workflow and the way I treat school projects. I was deeply inspired by last year's group that did the Light Viola, and also by Zi Feng in the way he documents and works on his assignments. After this project, I found that I have a real interest in making physically active and immersive experiences, things people can play around and fiddle with.
Process
I started with simple sketches to illustrate my idea. I had a few different ideas. The one that stuck with me was the current setup with a different rule: the device records an input every 0.3 seconds, and the button sends all the words together as a string.
The sketch developed to this in class:
After the sketch and some consultations with Galina and Jeffrey, I went around looking for scrap materials. Before this, I also went on a trip with Shah, Joey and Elizabeth to look for components, and bought an LED strip (I forgot what type it is), 3 knobs, 6 variable resistors, a button and a toggle switch. With these, I pretty much knew what I wanted to do.
I went on to make the model. I found a large matte sheet of acrylic and decided it was a good material for my casing. I laser-cut it after taking measurements to ensure every component could be installed, then bent it using a heat gun and a table (unfortunately there is no documentation of that). After that, I spray-painted the case and the other parts using leftover spray paint from a few semesters back when I was in Product Design.
For the case, I sprayed from the back side so the front remains matte and the paint won't be scratched. The bottom piece was sprayed black in the end to better fit the aesthetics. The other parts were sprayed in their respective colours.
This is how it looked after installation. YAY!
So after I was satisfied with the cover, I went on to work on the code. It was a long process of editing, adding, and testing.
All my previous files. I saved them separately with each big edit, in case I needed to return to a previous one.
The first variant of my code follows my original idea, which works like this:
switch is on
input from the dials is recorded every 0.3s
each individual input forms a word
the words form a sentence
press the button to send the message to the computer to be displayed
With this variant, I found that it keeps repeating words, and that the user has to turn the knob quickly enough to create an interesting mix of words. It is too restrictive in general, so I changed to the new idea, which allows users to take their time to select the colours they want and send the message as and when they wish.
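The 0.3 s sampling rule of that first variant boils down to the standard "has enough time passed?" check. Here is a sketch in plain C++ standing in for the Arduino `millis()` pattern (the function name is mine, not from the sketch):

```cpp
// Returns true when at least 300 ms have passed since the last sample,
// and resets the reference time so the next 300 ms window starts now.
// On the Arduino, nowMs would come from millis() inside loop().
bool dueForSample(unsigned long nowMs, unsigned long &lastSampleMs) {
    if (nowMs - lastSampleMs >= 300) {
        lastSampleMs = nowMs;
        return true;   // record the current dial input as a word
    }
    return false;      // not time yet, keep looping
}
```

This is exactly why the first variant felt restrictive: a word is captured on every 300 ms tick whether or not the user has finished turning the knob.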
The basis of my final code is:
The FastLED library, which runs the LED strip
Here are some examples of how FastLED works (CRGBPalette16, CRGB):
Timers
Timers helped me a lot in controlling variables. The code above constrains the variable 'valuenew' (which controls what word will be printed) during the 'transmit' stage. My timers work like this, for instance:
int timer = 1000;
int t = 10;
timer -= t;
if (timer <= 0) {
  timer = 1000;
}
In this example, the timer loses 10 every tick until it reaches 0 (or below), then resets to 1000 and repeats. If I want to control a variable, I can make something happen whenever timer <= 0.
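Wrapped in a function, the same countdown can be simulated off-device. This is my own illustration of the pattern, not the sketch's actual code:

```cpp
// Run `ticks` iterations of the countdown: subtract `t` each tick and
// count how many times the timer reaches zero and resets to `period`.
int countResets(int ticks, int period = 1000, int t = 10) {
    int timer = period;
    int resets = 0;
    for (int i = 0; i < ticks; ++i) {
        timer -= t;
        if (timer <= 0) {
            timer = period;  // reset, start the next cycle
            ++resets;
        }
    }
    return resets;
}
```

With period 1000 and t = 10, the timer resets once every 100 ticks, so anything gated on `timer <= 0` fires at that fixed rate.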
How ‘valuenew’ controls the words printed
Modes
The above screenshot is an example of a mode. Each variant of 'memory' stores a set of instructions, and that set of instructions plays when the mode is changed.
Another way I used modes is like this:
int buttonpress = digitalRead(button);
if (buttonpress == HIGH) {
  mode = 1;
}
if (mode == 1) {
  Serial.println("something should happen here");
}
In this example, if the button is pressed, the mode switches to 1, which causes the serial monitor to print "something should happen here". It's a very versatile way of coding, although it can get clunky in complex code.
In Processing
The code in Processing is very simple: it receives inputs from the serial port and displays them on the screen as text. There was only one problem: if there is nothing on the serial port, the read returns null, which causes a NullPointerException that crashes the program. What I did to counter it is to only run the drawing code when the value is not null.
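The guard is simply "only draw when the read actually produced text". In Processing that is an `if (val != null)` check around the drawing code; here is the same shape as an illustrative C++ function (the name is mine):

```cpp
#include <string>

// Returns true only when the serial read produced usable text, i.e.
// the line exists and is not empty. The drawing/translation code
// should run only when this returns true, which avoids the crash
// on empty reads.
bool shouldDisplay(const std::string *line) {
    return line != nullptr && !line->empty();
}
```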
Physical Model Part 2
I continued with my model while working on the code, as a way of taking a break from constantly pulling my hair out over code. I went with foam to cover up the rest of the form, as it is a versatile material for my oddly shaped case. It also fits the aesthetics. I also used a white corrugated plastic sheet as a screen, as I wanted a diffused look.
I forgot to mention wiring. I soldered the wires onto the components and learnt the best way to solder from Zi Feng. It was amazingly easy after he showed us the magic!
The wiring isn't as complicated as it looks; it's just that I used the same wire for every component. The casing really helps a lot in keeping things organised.
This video was recorded in my Instagram story last weekend, when it was pretty much finalised.
Keeping my research in mind, I've come up with a few ideas for the jobs. I want to be as experimental and creative as possible.
Here is my list:
Job Idea 1: Dream Builder or Illusionist
Using optical illusions to create 3D within 2D. The idea is that a dream builder builds a dream, so I would use optical illusion as the foundation for whatever is going to be built.
This is inspired by the 3D illusion artworks seen in trick-eye museums and in YouTube videos:
Job Idea 2: Thought Train Conductor
I merged 'train conductor' with 'thought train' to create the Thought Train Conductor. This job is basically someone who controls a person's thoughts by conducting their thought train. The idea is inspired by Pixar's Inside Out.
Job Idea 3: Ant Trainer
I thought it would be funny to have a trainer who controls ants to make them do tricks. I got inspired by looking at animal trainers, and I stumbled upon a thing called the Flea Circus.
It intrigued me; it's so weird, and I felt it made the cut for an imaginative job. So I decided to be an ant trainer instead of a flea trainer.
Job Idea 4: Mainstream Lifeguard
The idea is that a mainstream is not just what is popular, but also something like a river. To be a lifeguard at the mainstream is to rescue people from what is popular. It's a complex idea. I would combine elements of a lifeguard with popular stuff like social media, pop culture, popular fashion brands, etc.
Job Idea 5: Apocalypse specialist
An apocalypse specialist is an expert in all kinds of apocalypses (zombies, alien invasion, nuclear war, etc.). I'd imagine taking apart the destructiveness of all sorts of apocalypses.
Job Idea 6: Singaporean Spy
I thought about the idea of a "Russian spy" in the USA, and about how a spy that Singapore sends out would oddly be called a "Singaporean Spy". Sounds funny. Anyway, to be a "Singaporean Spy", one has to be in another country, so I thought it should be Malaysia. The idea is to create an artwork that looks Malaysian but secretly hides elements from Singapore.
Sneeze Blesser (Someone who blesses anyone who sneezes)
Earth Rotator (The person that rotates the Earth)
My mindmap
I decided to stick with just 4 jobs:
Dream Builder
Thought Train Conductor
Mainstream Lifeguard
Singaporean Spy
I started creating drafts for the jobs.
Here were my ideas for them:
Dream Builder
Idea 1: To make use of the movie Inception’s idea of a dream to let audience know the idea of a dream builder.
My initial concept part 1
My initial concept part 2
The optical illusions (head-on view)
The optical illusions (side view)
This is where I got my inspiration. I like the idea that a 2D drawing can trick the mind into thinking it is 3D, which falls in line with my concept of turning dreams into reality.
Finalised sketch of the idea
Thought Train Conductor
To break down Thought Train Conductor: it's 'thought' and 'train conductor'. For thought, it can be a brain or a thought bubble; for train conductor, it can be train tracks or a train body.
Mainstream Lifeguard
My idea is to take elements of a lifeguard and put them into a sea of brands. The lifeguard objects will form my name.
Singaporean Spy
I have a very interactive and complex idea for this one. Let's talk about the simple one first: I wanted to use a spy suitcase containing very Singaporean items to convey the idea of a Singaporean spy. The complex idea is to scrap the notion of conveying a straightforward message entirely, and strip the idea of a spy down to its fundamentals.
To be a Singaporean is to wear the T-shirt / shorts / slippers combo. The same goes for Malaysians. So, using this idea, I intended to create a disguised T-shirt.
A T-shirt is folded when we first see it in a store. So my idea is to fold a T-shirt as if it were new from a store, and on it, it will reveal my nickname, formed out of chains of Malaysian slang. This represents a stranger's first impression of the T-shirt: a Malaysian. It also represents my "Singaporean Spy's" alter ego, the disguised Singaporean acting like a Malaysian, speaking their slang.
Once opened, there will be another name, my real name, this time in Singaporean slang.
Another idea is simply to use the super-secret spy suitcase and fill it with Singaporean items.
Consultation One
I received feedback that the text must be created first before the image is added. I couldn't really understand what that meant, but I supposed I needed to create something more straightforward and use semiotics to convey the job. The feedback for the jobs was:
Dream Builder
It's generally alright to go in the direction I am heading, but I must keep in mind not to make anything physically 3D. Using Inception as a clue to the concept is not recommended, as it doesn't capture the real essence of the job title. An example given by Shirley was to use architectural blueprints, but I thought those were too literal.
To move forward, I decided to keep using 3D illusions. With that, I made an accurate template that I could trace from for other ideas:
Thought Train Conductor
The idea was generally accepted, but there are some issues too. Using just clouds and train tracks does not really convey the idea of a "conductor".
Mainstream Lifeguard
Using objects to form the letters is not recommended. Instead, I was told to personify the letters (giving them character instead of forming them out of objects).
Singaporean Spy
The idea is too interactive and did not use semiotics to convey the message. The assignment's objective is to use visual cues to suggest the job to people. The idea was rejected, and I had to look for other ideas, as this job was too hard for me to illustrate.
I finally understood the brief this time. With the feedback in mind, I decided to change some ideas around.
I changed Dream Builder into Cloud Maker, as it would be easier to convey with semiotics. To me, "Dream Builder" has very few visual associations.
I removed Mainstream Lifeguard, as it is too hard to depict too, and thought of a new job out of a pun: Deep Sea Driver (not diver). I believe this would not only be easier for me, but also more successful.
I also changed Singaporean Spy to Singaporean Breakfast Enthusiast, keeping the Singaporean angle but making it easier to execute. The idea is to use a Singaporean breakfast set to form the name.
Consultation Two
Changing "Mainstream Lifeguard" to "Deepsea Driver", but I was still stuck with the idea of using objects to form the word
Reworking the concept, I gave personality to the text by making it look like an underwater view of being in the sea
Updated Thought Train Conductor
Breakfast Enthusiast
By the next consultation, I had most of my concepts settled. However, there were still some changes. Most of my text was not in the same typeface, which is one of the requirements for this brief. Understanding this, I knew how to move forward. With that information, I moved on from sketching drafts and tried doing the actual artwork.
In the meantime, I changed Breakfast Enthusiast to Kaya Toast Maker, as I intended to use only the kaya toast to build the text.
Consultation Three + Final
Deepsea Driver
Deepsea Driver was redone horizontally, and the "underwater vibes" were made consistent across the letters.
The look was inspired by the Finding Nemo poster
I received feedback that I was one of the few who put their text at the bottom (so I'm unique, yay?). But some things still needed to be fixed: the background seemed too plain, and the text had no texture.
I went online to see if there were any tutorials for such an effect, and found a really good video that did the undersea texture really well!
Taking some cues from that method, I created the final work:
That’s the end of Deepsea Driver process!
The name is made from my name, "bryan". The gradient inside the name signifies the descending depths of the ocean. Cars replace the holes in the letters that have them, signifying that there is some driving involved in the job. The letters have multiplied layers over them to create the wavy effect of being underwater. The background further sets the context for the job.
Semiotics: Depth, water movement, car, bubbles, light stream, seabed
Colour Palette: monochromatic, blue to symbolise the ocean
Textures: the clarity of the water and the light streaming through, to indicate rough waves refracting light
Composition: sinking, at the bottom, as though moving at lower depths, floaty. Horizontal, to depict the vastness of the sea
Cloud Maker
Using the template I made, I traced a cloud texture onto the text. I then scanned the new sketch and added texture in Illustrator using a cloud brush I made following a guide I found online:
The outcome:
The clouds form my initials, "BLEK", as I'd imagine a builder or maker would use their initials. The grassland creates context for the audience to understand that these are clouds, and splits the page into two colours so the 3D effect can be seen when the page is tilted to the side.
Semiotics: Cloud, shadow of cloud, and the grass to indicate that it is outdoors
Colour Palette: Analogous, calming
Textures: Cloud’s puffy texture, grass
Composition: central and slightly to the right, to enable the 3D effect
Thought Train Conductor
I decided to add brain textures to the text to make it more obvious that there is some thinking involved and, therefore, thought.
Image taken from https://www.yourgenome.org/stories/evolution-of-the-human-brain
I used the brain image as a reference to the texture of the text.
I did a mockup in Adobe Illustrator to get a brief look at the outcome. The letters looked too much like intestines to read as a brain, although there are clues from the thought bubble. The feedback suggested giving the font more brain texture.
A close-up of the train tracks, made with vectors
The Thought Train Conductor is made using my surname, Leow, because as a train conductor, I would be addressed by my surname.
The tracks come in from the left side, as if connected to something, and go around the brain, weaving in and out of the brain's folds. With each weave, a thought bubble comes out. This represents the train tracks being interconnected with the different parts of the brain, generating thoughts with each pass. This defines the thought train.
Semiotics: Train tracks, brain, thought bubble
Colour Palette: Analogous, with purple ~ pink gradient in the background as the representation of the deep, calm, inner side of our subconscious
Textures: Brain folds, train tracks, thought clouds
Composition: to create the idea of a continuous path while trying not to make it look boring, I split "Le" and "ow" to make the train track route more dynamic
Kaya Toast Maker
Image taken from https://www.misstamchiak.com/traditional-kopi-kaya-toast/
I took the texture from a photo of kaya toast and did an image trace, then edited some of the colours individually to fit the shapes of the words. The green plates, the dull green kaya, and the blue table are coloured according to the image references.
Image taken from https://www.yelp.com.sg/biz/ya-kun-kaya-toast-singapore-9
Image taken from http://www.tnp.sg/news/singapore/kopitiam-owners-say-rent-hikes-are-unlikely
Kaya Toast Maker
To hint that this is breakfast at the kopitiam, I added some additional objects: the coffee and condiments, as well as a vignette.
I used my surname for this composition, as I imagine a kopitiam uncle would always be called by it. "AH LEOW AH, LIANG BEI KOPI!" and I'll be like "OK".
Semiotics: Toast, condiments, kaya
Colour Palette: RGB, but I used blue for the background to match the blue tables in kopitiams. The kaya is interesting: if the colour is off by a bit, it looks like lettuce.
Textures: Kaya toast, kaya, plastic plates.
Composition: I made it look like an Instagram shot taken from above, to bring across the context of food.
I thought I had good Illustrator knowledge until I came across this project. Using Illustrator to create graphic design and texture is so different from creating 2D renderings of objects in product design.
I also learnt to appreciate the flexibility and fluidity of Photoshop, as Illustrator does have its limitations.
I also learnt the hard way to stick to the brief instead of doing what I want. As I did not understand it initially, I had to backtrack a lot, wasting a lot of precious days (or even weeks).
There was a chance to create better work, so it was unfortunate that this happened. Nonetheless, I am excited for the next project, Zine, and hope I will do better there.