Third-I: Final Post

Updates and process

After the last update, I bought the materials that I needed. A few days later, I got the opportunity to work with Galina to create the headpiece, which I am really thankful for. It looked good, and I wouldn’t have been able to make it without her. Thank you!

Quick prototyping with cloth

Moving to Neoprene
Final result!

Testing:

Adding eye tracking:

The eye tracking works, but the sensor is unstable and needs to be constantly pressed down by my hand.

After that, I kind of left the project as it was because I had to work on other projects. The lack of time really killed me here, as I had overpromised on the device, even though I know the features can be done. They just cannot be done within the given timeframe.

There weren’t any new updates to the device as, conceptually, it is already perfect to me.

The Code

There are 3 pieces of code:

1. Eye: The eye is an ESP8266 WiFi module connected to two 3.6V lithium-ion batteries. This gives a total of 7.2V, which powers the servo motor, the 2 IR sensors, and the board. I think this was still too little voltage (or there were some other complications), as it wasn’t working very well.

Auto detection mode: the eye scans for the nearest object within its range and tracks it

Auto tracking mode basically says: if one IR sensor detects a shorter distance, the servo turns towards that side.
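For reference, a minimal sketch of that logic might look like this on a generic Arduino. The pin numbers, the assumption that a higher IR reading means a closer object, and the step sizes are all placeholders, not the actual wiring I used.

```cpp
#include <Servo.h>

Servo eyeServo;
int eyeAngle = 90;                      // start with the eye centred

void setup() {
  eyeServo.attach(9);                   // hypothetical servo pin
  eyeServo.write(eyeAngle);
}

void loop() {
  int leftReading  = analogRead(A0);    // assumed: higher reading = closer object
  int rightReading = analogRead(A1);

  // Nudge the eye a few degrees towards whichever sensor sees the closer object.
  if (leftReading > rightReading + 20) {        // small dead band to avoid jitter
    eyeAngle = max(eyeAngle - 2, 0);
  } else if (rightReading > leftReading + 20) {
    eyeAngle = min(eyeAngle + 2, 180);
  }
  eyeServo.write(eyeAngle);
  delay(30);
}
```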

Sweep mode: motor goes left, then right, and repeat

Sweep mode basically loops the servo left and right.
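Sweep mode is simpler. A rough sketch of the idea (reusing the servo from the sketch above, with placeholder speed and range):

```cpp
#include <Servo.h>

// Sweep mode: just pan the servo from one end to the other and back, forever.
void sweep(Servo &eyeServo) {
  for (int angle = 0; angle <= 180; angle += 2) {   // pan to the right
    eyeServo.write(angle);
    delay(15);
  }
  for (int angle = 180; angle >= 0; angle -= 2) {   // pan back to the left
    eyeServo.write(angle);
    delay(15);
  }
}
```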

Eye tracking mode: the wearer’s own eye controls the movement of the third eye.

Eye tracking mode (code incomplete) basically reacts to the value sent when the wearer looks left or right.
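Since this part of the code was never finished, the sketch below only shows the intended reaction, with made-up state codes for “looking left” and “looking right”:

```cpp
#include <Servo.h>

// Hypothetical gaze states sent over from the head unit.
const int LOOK_LEFT  = 1;
const int LOOK_RIGHT = 2;

void reactToGaze(Servo &eyeServo, int gazeState) {
  if (gazeState == LOOK_LEFT) {
    eyeServo.write(45);          // turn the third eye towards the wearer's left
  } else if (gazeState == LOOK_RIGHT) {
    eyeServo.write(135);         // turn towards the wearer's right
  } else {
    eyeServo.write(90);          // otherwise stay centred
  }
}
```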

2. Head (wifi): The WiFi module on the head receives information from the eye and sends it to the main Arduino via Wire (I2C communication).

These lines of code basically send the state data to the eye via WiFi, using Adafruit.io (this does not work yet due to lack of time).

This is for receiving the info from the eye, which is transmitted to the main Arduino via Wire.
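The forwarding step itself is short. Here is a sketch of it, assuming the main Arduino sits on the I2C bus at address 0x08; the address and the one-byte payload are assumptions, not the actual values I used.

```cpp
#include <Wire.h>

const int MAIN_ARDUINO_ADDR = 0x08;    // hypothetical I2C address of the main board

void setup() {
  Wire.begin();                        // join the I2C bus as the master
}

// Pass whatever state byte arrived from the eye straight on to the main Arduino.
void forwardToMainArduino(byte eyeValue) {
  Wire.beginTransmission(MAIN_ARDUINO_ADDR);
  Wire.write(eyeValue);
  Wire.endTransmission();
}
```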

3. Head (main Arduino): The head contains the eye tracker, vibration modules, and buzzer. The code is simple: if it receives a certain value, it buzzes a specific vibration module and plays a specific tune. Unfortunately, and I don’t know why, all the vibration modules run at the same time. I had no time to troubleshoot.
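For reference, this is roughly what the intended behaviour looks like as a sketch, assuming the main Arduino is an I2C slave at address 0x08 with three vibration motors and a buzzer on placeholder pins. Switching every motor off before enabling the chosen one is the kind of guard that was probably missing when all of them ran at once.

```cpp
#include <Wire.h>

const int MOTOR_PINS[3] = {3, 5, 6};   // placeholder pins for the vibration motors
const int BUZZER_PIN    = 9;           // placeholder pin for the buzzer
volatile int lastState  = 0;

void receiveEvent(int howMany) {
  while (Wire.available()) {
    lastState = Wire.read();           // latest state byte from the WiFi board
  }
}

void setup() {
  Wire.begin(0x08);                    // join the I2C bus as a slave
  Wire.onReceive(receiveEvent);
  for (int i = 0; i < 3; i++) pinMode(MOTOR_PINS[i], OUTPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  if (lastState >= 1 && lastState <= 3) {
    for (int i = 0; i < 3; i++) {
      digitalWrite(MOTOR_PINS[i], LOW);             // make sure the others are off first
    }
    digitalWrite(MOTOR_PINS[lastState - 1], HIGH);  // buzz only the chosen motor
    tone(BUZZER_PIN, 440 + lastState * 110, 200);   // play a tone tied to the state
    delay(300);
    digitalWrite(MOTOR_PINS[lastState - 1], LOW);   // stop after a short pulse
    lastState = 0;                                  // handle each message once
  }
  delay(20);
}
```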

Final Device

The device, as it stands now, is only a prototype. It is sad that I couldn’t get it to work on time. Still, I am proud that I got the eye working wirelessly, which is really cool.

Further Improvements

  1. I will continue working on this project over the holidays to finish it, so I can be proud to own this device and have it in my portfolio.
  2. I would 3D print the eye part so it looks more polished and I can hide all the wires.
  3. Fix the eye, as it keeps turning left for no reason (this has been happening since the earliest stage of testing).
  4. Fix the vibration and include the buzzer, as I haven’t really tried using it.
  5. Get the eye tracking up and running, as that should honestly be the main part of this device, next to the movable third eye.
  6. Hide all the wires and make this thing look cool.
  7. If the auto eye movement still doesn’t work, I’ll just stick to eye tracking, as it seems to be the mode that makes the most sense for this concept for now.

Lessons and Reflections

I have really over-promised everything and that’s really bad. I’m sorry to disappoint everyone with my failure. The important lesson is to be less optimistic when it comes to coding. Much of my ambition comes from optimism that things will all go according to plan. Mostly, they don’t. Also, the hardest part of coding is making everything run together smoothly. When there are a lot of parts, it becomes very complex and hard to handle.

Despite this, I really learnt a lot. I learnt many aspects of coding that made my project work, and I will try to bring this knowledge forward next time. After this, I’m keen to learn about other ways to code that aren’t as clunky as what I did in this project.

I also learnt that we are designers and not engineers, so my project should focus more on the experience than the tech. I was so focused on the tech and on trying to combine everything that I was overwhelmed. I should really take things one at a time.


Updates:

So I simplified things: everything is connected by wires now. The flaw is that there is now an ugly hose connecting everything together. Still, I’m proud of this!!! This is still just a prototype, and I’m sure there’s more to improve in this project. I hope I can continue it somehow through other modules or FYP… as I’m thinking of doing something with a similar theme!!!

Switch

Jeff taking a look!

Below are videos of it working:

Auto tracking:

Eye tracking:

Our Glass Castle is Their Grave – Final Documentation

Updates:

After the last consultation, I was told to try building the space and testing it physically, as there is no way to find out what the experience will be like without testing. However, there was a lot of difficulty in booking a space, which I will briefly talk about later. After that, I decided to drastically reduce the size of the installation, because the whole experience felt clunky to me and it would require too much material. But that’s also untested.

I spent over $100 over the next few days buying materials like cloth and stickers, and booked some equipment and pillars that I could use to put up the installation. Unfortunately, it was near crunch time for many modules, so there wasn’t time to set up and test physically. From that, I learnt that I must really start testing earlier and not keep everything in my head until it’s too late.

I also got senior Chris to help me film some scenes with his drone! This was done as part of my previous idea of projecting greenery against the sunken plaza’s reflective surface, which was then scrapped. (Sorry Chris! Thanks for all the trouble :’)))) ) As I don’t want to waste the footage, I’ll put a small part of it here:

After that, I stopped working on the space until I was able to again, which was… 1 day before the presentation.

The Birds

While everything else was happening, I was trying to passively work on the project by collecting images of dead birds. I didn’t collect many, which was strange (but also a good thing, since it means fewer birds are dying, haha).

The Space:

There was a lot of trouble booking the space. Firstly, Bharat was often not in, so I was not able to get approval even though I had requested the space early. When I managed to catch Bharat in early November, I was told that the space was to be shared with Prof. Joan Marie Kelly, who would be exhibiting her Painting class’s artworks. I had to make a special arrangement with her in order to secure my space (namely, that I would help her class set up their exhibition).

The Setup + Final Presentation:

The afternoon before the presentation, I started setting up. I brought the necessary equipment and logistics down.

Initial setup
After putting cloth

Unfortunately, I did not document the form shown during the final presentation, as I did not really like it. I had already intended to continue working on it after the presentation, so what I have are the newer pictures.

Video documentation of walkthrough:

In the above 2 videos, the participants enter from the back instead of the front. This was because I thought entering from the front was not a very good experience. The profs then tried entering from the other side and thought it was better.

During the discussion we brought up a few points:

  1. The photos of the dead birds could replace the blood splat, which was quite cheesy and doesn’t look good aesthetically.
  2. Going in from the front is better, as there is a stronger narrative and it is more intuitive to navigate.
  3. The experience worked: the impact sound and the visualisation were able to show what I wanted to show, and the digital implementations helped bring the experience to the next level.
  4. There should be variations in the knocking sound to make the experience more diverse. Also, the sound of a bird hitting a window is not the same as a regular knock.

The projection I showed is this: a compilation video of people walking into glass, with a picture of a dead bird found in ADM shown after each hit.

Further analysis

Overall, going in from the front is much better, although there should be some kind of cue to let people know that they should not walk past the acrylic, and something to distract them and slow them down. I had tested this with participants before the final presentation, which is why I decided to let participants go in from the back (which actually was not any better).

I realised that people usually stand there to see if there is more to the video on the monitor, and I usually have to tell them to move on. So, if possible, I should let participants know that they have to exit.

The sticker-pasting part feels out of place now. It’s more of a personal touch than something related to the installation, because the installation is experiential while the sticker part is more activist. I still kept it, as I want this artwork to do more than just “spread awareness”.

Finally, I realised that people don’t really look up to see the splat. This changed after I told participants about the concept before they experienced it.

Further Improvements

After the presentation, I continued working on it after a good sleep (yay!).

I removed the area where the projection was and placed the projection in the middle of the “tunnel”. The projection now falls onto a piece of cloth that participants have to unveil to move on, which leads them to the acrylic sheet.

This essentially halves the setup, which makes everything look less clunky.

Overall, this makes everything better in a lot of ways.

  1. Navigation is easier. It is clearer for participants to understand the flow of interaction and the narrative.
  2. Participants now move more slowly through the tunnel, as there is a video to watch.
  3. The whole setup is more compact and less detached.

However, there are still flaws that I have to address:

  1. The light from the projection makes the acrylic sheet visible, so the projection should be turned off when the cloth is unveiled (this was newly added after discovering the problem).
  2. Projecting onto black cloth makes the video hard to see (as mentioned by my friend Clemens), so in the later changes I switched to white cloth.
  3. The visuals are still not the best; the blood splat is still very… weird. What I did next was add an overall red hue to make the splat less off-putting, which kind of worked in drawing attention to the screen.
Clemens’s reaction
The projection kind of blinds the participants and reveals the acrylic sheet, which is not good

I also created a poster that will be pasted on my installation so people will know what it is about.

Reiteration of Concept

I would like to go through everything one more time to summarise the project, and how all the elements worked out (or didn’t).

The concept came from an observation of birds hitting the reflective glass windows around the ADM building. Upon further research, I discovered that many birds had died because of them. I wanted to make an installation that addresses this problem by bringing awareness to it, letting people know the solution, and asking the school to do something about it.

The installation features an experiential space alongside an activity. The experiential space is a long, narrow “tunnel” made of white and black cloth. The use of white cloth was intended to make the space look like a funeral. The tunnel also represents the route into Sunken Plaza.

Inside the space, the first thing to see is the video projection. This projection shows found CCTV footage of people walking into glass. Each time a person walks into the glass, a picture of a dead bird found in ADM is shown. This is done to draw a reference to birds flying into glass, and I want it to stir some emotion in people. Watching people walk into glass is funny, but is it still funny when you see a bird dying from the same thing? Through that contrast, I want to create a sense of guilt and pity. The video is about a minute long and loops.

When the audience moves on, they unveil the cloth and walk forward. This activates 2 sensors (previously only 1). One sensor turns off the video the projector is showing, making it easier for participants to see what’s in front of them. The other sensor activates the bird–window collision simulation. This happens at the front, on a monitor that shows a live video of the window behind the installation, pretending to be an actual glass window. This was inspired by advertisements by LG and Pepsi that used screens that looked like windows to trick participants. In my installation, a “bang” is heard, followed by the screen turning red and a blood splat appearing on the screen. This cues participants that a bird has hit the glass, and lets them understand how it sounds and feel the impact.
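I didn’t document the electronics here, but the trigger logic boils down to something like the sketch below, assuming the two sensors are read by an Arduino that notifies the media computer over Serial. The pin numbers and event names are made up for illustration and may differ from the actual rig.

```cpp
const int UNVEIL_SENSOR_PIN   = 2;   // hypothetical: fires when the cloth is pulled aside
const int APPROACH_SENSOR_PIN = 3;   // hypothetical: fires when the participant nears the monitor

bool videoStopped = false;
bool impactPlayed = false;

void setup() {
  Serial.begin(9600);
  pinMode(UNVEIL_SENSOR_PIN, INPUT);
  pinMode(APPROACH_SENSOR_PIN, INPUT);
}

void loop() {
  if (!videoStopped && digitalRead(UNVEIL_SENSOR_PIN) == HIGH) {
    Serial.println("STOP_PROJECTION");   // cut the projector video so the way ahead is visible
    videoStopped = true;
  }
  if (!impactPlayed && digitalRead(APPROACH_SENSOR_PIN) == HIGH) {
    Serial.println("PLAY_IMPACT");       // bang sound, red flash, blood splat on the monitor
    impactPlayed = true;
  }
  delay(20);
}
```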

Once that interaction is done, the participant can leave from the side, and move on to paste a sticker to ask for change.

Here are some user testing videos:

Note: she didn’t notice the video and the blood splat, but was startled by the bang.

Her rewatching the video

Lessons and Reflections

I learnt that in an art installation, I should focus more on the experience and feelings rather than facts, as that is more effective at planting ideas in people.

I also learnt that when it comes to spaces, the work does not have to be an actual physical space. It can be something more experiential, which I could focus on rather than creating an entire space for people to move around in (which is costly and hard to build).

I also learnt that I should have started building much earlier and used the build as a testing ground to see how the experience feels.

I also appreciate all the feedback, which was good, especially Biju’s suggestion of having a glass wall that people walk into.

However, overall, I didn’t really enjoy working on this project, as it required a lot of work and money. Setting up a space is really difficult, especially one as large as mine. Working alone on this is just not recommended. (There was one time my setup fell and I had to shout for help, and the photography people came to help me. I must thank them :’) )

I also lost motivation halfway through the semester, as the concept wasn’t that strong in terms of the module’s requirements. Still, I’m happy that I pushed through, and the installation looks fine now. I guess larger-scale installation stuff isn’t my thing, and I should build something smaller in future.

More Updates:

Participants viewing the video & the impact

Videos:

Final Project – Split Chef (Discussion)

that little icon at the top right ruined this picture

We have transcended time and space to meet up in the third space: Skype. Joel happened to be unable to meet us, so we decided to conduct a Skype meeting.

During our Skype meeting, we discussed the things we had to iron out so we could move on to the final work.

  • We finalised the player roles and what kind of players we want.
  • We decided that the kitchen location will be either our friend Hannah’s hall or one of the halls that we are staying in. The backup plan is to use someone’s house.
  • Next, we asked whether we should have an audience watch our show and allow them to comment to help the chef. I think it will be fun and engaging for the audience, the players, and the facilitators.
  • Next, we finalised the recipe so we are sure of the ingredients to buy and the steps we have to take.
  • We also planned out exactly what materials we need to bring and tasked each of us with bringing certain items.
  • We also ironed out some of the other details that may affect the game.

So that’s what we discussed this week. We wanted to go straight to playing the game for real, but were unable to find a good time to conduct it since everyone is busy.

I’d say we are ready for this!!! This weekend, we will make and gather the things we need and prepare for the game on the following Monday! Excited!!!

Final Project: Split Chef (Trial)

Intro on what we are doing

Our team (Brendan, Bryan, Dion, Joel) came up with a cook-off game that uses the third space as a form of communication. There will be 2 players: one will be drawing while the other will be buying the ingredients and cooking.

The two players are connected to each other through Instagram Live, which works similarly to Facebook Live. The two can communicate through this third space. The facilitators will do the filming for the players, so the players can do their thing while still communicating.

Firstly, the drawer will be given 7 seconds to draw each ingredient. The second player will have a minute to find the item and add it to the basket. After buying all the ingredients, the buyer will go to the kitchen, and the drawer will have 15 seconds to draw each step of the cooking process. The second player will then cook the food, and the end result will be judged by the facilitators.

Trial Part 1 – The Groceries

So in this trial, Brendan and Dion are the facilitators while Joel and I are the players. I am the one drawing and Dion facilitates me while Joel is the buyer and Brendan facilitates him.

When BBQ sauce becomes Tabasco sauce (like c’mon guys that’s clearly a BBQ pit and spatula)

 

Lime! The droplets gave it away very easily
Taco wrap that looks like a burger bun, and Joel got it!!
Seashell pasta that became dumplings, but it’s okay because this is a decoy ingredient
Onions!
Tomato x 1
Here are some screenshots of what happened, and here are some pointers I got from this experience:

  • We initially gave the drawer 10 seconds, but that was too long. We lowered it to 7 seconds, which is still plenty of time but not too much.
  • There are some instances where the drawer needs to be creative. An example (not captured in the screenshots, unfortunately) was the shiitake mushroom: I literally drew a pile of shit and a mushroom, and Joel instantly got it. It was a funny experience.
  • Joel understood a lot of the ingredients I drew, which was very surprising to me! For instance, I drew the taco wrap (pic 2) and it looked like a burger bun to me, but he somehow got that it was a taco wrap. I’m guessing he figured it out from the other ingredients he had to buy, since he has experience in cooking. His cooking instinct made us wonder if there are other ways to team people up (initially we wanted friends vs non-friends, but now we can consider cooks vs non-cooks or something like that).
  • Joel sometimes took a long time to find the ingredient, which dragged the game out. We need to set a time limit for the buyer to figure out what they need to buy and find the item. This can make the outcome more interesting: if the buyer is unable to obtain the item, the final dish will be off (and more glitched).
  • The connectivity is alright, but the stream sometimes gets a bit pixelated.

Overall, being the drawer is a very fun experience. The 7-second time limit and the rush to draw something meaningful to the buyer are exciting. Watching the buyer buy the food is also fun: if the buyer gets something wrong, I laugh; if the buyer gets something right from a bad drawing, I am impressed. Watching the buyer find the items is part of the fun of being the drawer, so it’s fine and not boring.

Trial Part 2 – The Cooking

I wasn’t around during this part as I had to rush off somewhere. Zhen Qi was very kind to take over for me so the game could continue.

Unfortunately, I can’t say much as I wasn’t around, but I can tell that connectivity is a big issue. The pantry isn’t well covered by WiFi or mobile data, so the live feed keeps getting cut off. I also feel that there is a lot of waiting time for the drawer while the cook is cooking.

I feel that the drawings are okay, and that the time given to draw is enough for the drawers to express the instructions properly.

The outcome was delicious (according to the rest of the group). The Tabasco sauce created an interesting taste, so the glitch wasn’t unpleasant; it actually enhanced the dish.

Videos of the cooking process: