Wearable Tech Final Project

Butterflies

My fashion concept revolves around visualising the feeling of anxiety. Anxiety is sometimes described as butterflies in your stomach, so I wanted to work with butterflies on my outfit. The interactivity of the dress was meant to show the phases of being anxious: the first where the butterflies come alive, driven by servo motors, loud like the blood rushing in your ears; the second the calm of overcoming that anxiety, where the butterflies settle and there is light.

The idea of the dress was supposed to be dark, representing the things lurking in the shadows that you cannot really see, hence I chose a black fabric. Eventually, though, to liven up the dress I added some red accents so that everything is not just one dark mess.

Technology:

I separated the interactivity into two circuits.

The first is the servo motor, which works like a pulley system to pull the butterfly wings up, while gravity brings them back down.

The second state is the lights, which pulse like a heartbeat. I felt a pulsing light would be nicer as it is softer and calmer.
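As a rough sketch (not the final dress wiring), the two states could be driven by something like the Arduino code below, assuming a hobby servo on pin 9 for the wing pulley and an LED on PWM pin 5; the pins, timings and the `calm` flag are placeholders, with the flag meant to be flipped by whatever triggers the second phase.

```cpp
#include <Servo.h>

Servo wingServo;         // pulls the butterfly wings up; gravity drops them back
const int ledPin = 5;    // PWM pin for the calm, pulsing light
bool calm = false;       // placeholder flag: flipped by whatever triggers phase two

void setup() {
  wingServo.attach(9);   // servo signal wire on pin 9 (assumed)
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (!calm) {
    // Phase 1: the butterflies flutter - wind the pulley up, then release
    wingServo.write(90);
    delay(300);
    wingServo.write(0);
    delay(300);
  } else {
    // Phase 2: the butterflies rest and the light pulses like a heartbeat
    for (int b = 0; b <= 255; b += 5) { analogWrite(ledPin, b); delay(10); }
    for (int b = 255; b >= 0; b -= 5) { analogWrite(ledPin, b); delay(10); }
  }
}
```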

I had thought that if this project existed on its own, it would respond to a heartbeat via a heartbeat sensor. For the fashion show, however, I wanted to collaborate with Pei Wen: when her dress comes close, it would trigger the second part of my dress to light up, representing the comfort a close friend gives, which results in the calm.

Improvements:

The butterflies need a bit more support. As the system relies on gravity to let the wings fall, each butterfly has to perch horizontally rather than on the side of a wall. So I would like to either remake the butterfly system or make some kind of support.

I would also like to conceal my wires and the butterfly stand better, maybe by making cloth flowers to cover them.

I would also like to expand the butterfly circuit to the hat portion, so hopefully I can cut or find another way to make more butterflies.

 

For the Fashion Show:

My role was originally to help out with the photo shoot. I didn’t really think about it too much because the role sounds straightforward enough, but I could help out with the other areas too if help is needed.

Wearable Tech Research: Biomimicry

What is Biomimicry:

To copy the functionality of nature to solve our man-made problems.

Examples:

Stefanie Nieuwenhuys

LAYERING SCRAPS LIKE SCALES: After spying diamond-shaped wood chips on a workshop floor at London’s Kingston University—the leftovers of some architecture student, no doubt—Stefanie Nieuwenhuys was reminded of a secondhand snakeskin bag she once purchased. Scooping them up, the fashion student set to work, layering the wooden scraps onto fabric like reptilian scales. Nieuwenhuys’s “aha” moment resulted in her master’s project: a collection of corsets, floor-length evening dresses, trousers, and neckpieces that marries modern laser-cutting techniques with a couturier’s delicate yet exacting touch. Eschewing virgin resources, Nieuwenhuys worked with bio-waste firm InCrops Enterprise Hub in Norwich to obtain discarded pieces of plywood, which she honed into efficient forms that left behind little waste. Glued onto unbleached organic cotton, the brown-and-ecru “scales” become a “simulacra of nature, without discarding nature’s inherent harmonies”.


(pictures taken from: https://blogs.3ds.com/fashionlab/stefanie-nieuwenhuyse-recycle-le-bois-comme-des-ecailles-serie-biomimetisme/)


The artist makes use of scrap material to make her outfits, and this project of hers emphasises the idea of reusing materials: laser-cutting the pieces to look like scales and layering them to imitate a snake’s skin.

 

Diana Eng

COMPACT STRUCTURES THAT UNFURL LIKE LEAVES: Diana Eng based her “Miura Ori” scarf on an origami “leaf-fold” pattern invented by Koryo Miura, a Japanese space scientist who was in turn inspired by the unfurling mechanism of hornbeam and beech leaves. Diana Eng’s scarf folds into a compact package yet “deploys” to create a voluminous wrap for your neck. Hornbeam and beech leaves are distinguished by their corrugated folds, which remain collapsed until they emerge from their buds.


The origami pattern was made by observing nature and omitting right angles, like the folds of forehead wrinkles or the veins of a dragonfly’s wing. Because of that, the pattern is collapsible.

Montserrat Ciges

Her work was created to imitate animals that are able to voluntarily transform themselves.

 

References:

Simone Leonelli on the Blurred Boundaries Between Art & Fashion

The influence of 3D printing on fashion design

3D Printed Fashion: Novelty or reality?

How to become an Aussie eco-fashion designer

Fashion Biomimicry Through The Lenses Of Biotechnology, Nanotech And Artificial Intelligence

3D Haute Couture by Iris Van Herpen

http://www.osmosis-industries.com/digital/2015/4/21/nature-inspired-fashion-design-through-the-theory-of-biomimicry

Stefanie Nieuwenhuyse Reuses Scrap Wood as Scales – Biomimicry Series

Tessellation and Miura Folds

http://www.fairytalefashion.org/

https://class.textile-academy.org/2019/Montserrat/project.html?fbclid=IwAR2HGn6Jnj_R55DxHrH0XUJ-Kps8XhIIsjuXEe7a-0vZX_qN_RzgdFmpEQQ

 

Wearable Tech Research: X.Pose

Designed by Xuedi Chen and Pedro G. C. Oliveira
Photographed By: Roy Rochlin
Model: Heidi Lee
Makeup: Rashad Taylor

Concept:

The outfit reflects the amount of data you generate when using the internet. Based on the amount of data generated, parts of the outfit become more transparent than others. It is a commentary on how transparent one is on the internet despite having things like privacy settings. On Xuedi’s website, she states:

By participating in this hyper-connected society while having little to no control of my digital data production, how much of myself do I unknowingly reveal? To what degree does the aggregated metadata collected from me paint an accurate portrait of who I am as a person? What aspects of my individuality are reflected in this portrait?

The work broadcasts the artists’ concern onto the wearer, literally exposing the wearer to public view as a parallel to the data we expose online.

Process:

  • Creation of app to connect to the outfit
    • The app collects the data generated by the phone at each time and location. The data generated is sent via Bluetooth from the phone to a Bluetooth-enabled Arduino in the outfit (a rough sketch of this kind of link is shown below).

schem001.jpg

(photo taken from: https://www.pedro.work/#/xpose/)

  • Creation of Flexible mesh armature
    • Designed geometrically to reflect areas that the artists visit often (like Soho, NYU, and Union Square).
  • Opacity of the dress
    • Depending on the amount of data generated, the outfit would change opacity accordingly.
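This is not the artists’ published code, but a rough sketch of the kind of link described above: an Arduino listening on a Bluetooth serial module for a data-activity value and mapping it to an output level. The pins, baud rate and one-byte protocol are all assumptions for illustration.

```cpp
#include <SoftwareSerial.h>

// Assumed wiring: Bluetooth module TX -> Arduino pin 10, module RX -> pin 11
SoftwareSerial bt(10, 11);
const int filmPin = 9;    // placeholder PWM pin driving one opacity-changing segment

void setup() {
  bt.begin(9600);
  pinMode(filmPin, OUTPUT);
}

void loop() {
  if (bt.available()) {
    // Assume the phone app sends a single 0-255 "data activity" byte
    int activity = bt.read();
    // More data generated -> drive the segment harder (more transparent)
    analogWrite(filmPin, constrain(activity, 0, 255));
  }
}
```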

Materials:

According to the artists’ website, the material used for the armature of the outfit is a flexible 3D-printed mesh.

schem002.jpg

(photo taken from: https://www.pedro.work/#/xpose/)

Subsequently, they also mentioned that the opacity-changing material is made from electrochromic film, the same material used to make smart windows.

Photos: electrochromic glass wired to electric contacts, shown appearing transparent (clear) and appearing opaque (dark).

(Photos taken from: https://www.explainthatstuff.com/electrochromic-windows.html)


(similar to the material that is used for our LRT windows)

Electrochromic films use technology similar to an LCD display, which uses liquid crystals under precise electronic control to change how much light can get through. When the current is switched on, the crystals line up like opening blinds, allowing light to stream straight through; when it is switched off, the crystals orient themselves randomly, scattering any light passing through and making the material turn opaque.

How electrochromic smart film works: animated diagram showing how liquid crystals align to let light pass through.

(Photo taken from: https://www.explainthatstuff.com/electrochromic-windows.html)

 

References:

  • http://xc-xd.com/x-pose
  • https://3dprint.com/5802/x-pose-3d-printed-dress/
  • https://www.explainthatstuff.com/electrochromic-windows.html
  • https://www.pedro.work/#/xpose/

Interactive Device Project2: Skitzie the Cat

Skitzie the Cat is just your average black cat that likes to hang out on your shoulder. They are curious and like to people watch while you do your stuff. But Skitzie is very shy, hence pretends to be a scarf when anyone comes too close.

(insert hooman wearable sketch)

About Skitzie the Cat

Skitzie is a guardian for those who are not too aware of their surroundings. In a sense, Skitzie’s ‘hasty retreat’ into becoming a scarf is a warning that there are people approaching.

For this project, I imagined Skitzie being able to move their head and ears to see the world. Skitzie is also envisioned to be able to ‘blink’ through LEDs and hum through a speaker. I wanted there to be sound or light as an indicator so the person wearing Skitzie knows very clearly when Skitzie is a cat and when they are pretending to be a scarf. The warning has to be distinct enough to catch people’s notice.

Skitzie’s hardware

Skitzie is a combination of servo motors and a Sharp IR proximity sensor.

  1. Testing the Servo Motor:

Website Reference: https://www.instructables.com/id/Arduino-Servo-Motors/
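A minimal servo test along the lines of that tutorial, assuming the servo signal wire is on pin 9:

```cpp
#include <Servo.h>

Servo headServo;

void setup() {
  headServo.attach(9);    // signal wire on pin 9 (assumed)
}

void loop() {
  headServo.write(0);     // turn the head one way
  delay(1000);
  headServo.write(180);   // then the other way
  delay(1000);
}
```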

  2. Testing the Proximity Sensor:


Website Reference: https://create.arduino.cc/projecthub/jenniferchen/distance-measuring-sensor-900520
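And a minimal reading test for the Sharp IR sensor, assuming its analog output is wired to A0 (the reading rises as something gets closer, within the sensor’s range):

```cpp
const int irPin = A0;     // Sharp IR analog output (assumed wiring)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(irPin);   // larger reading = something closer (within range)
  Serial.println(reading);
  delay(100);
}
```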

I combined the circuits and then made a head.

Testing out the eyes circuit: it works, turning off and on depending on closeness.

For some odd reason though, when I added the ears, the eyes disappeared.

Then it got fixed (connections are problematic, check everythingggg).
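Put together, the behaviour is roughly the sketch below; the pins and the ‘too close’ threshold are placeholders (the real threshold needs tuning by trial and error). The eyes stay lit and the head scans while no one is near, and everything stops once the IR reading crosses the threshold, which is Skitzie’s ‘hasty retreat’ into being a scarf.

```cpp
#include <Servo.h>

Servo headServo;
const int irPin    = A0;   // Sharp IR analog output
const int eyePin   = 6;    // eye LEDs
const int tooClose = 400;  // placeholder threshold; tune by trial and error

void setup() {
  headServo.attach(9);
  pinMode(eyePin, OUTPUT);
}

void loop() {
  int reading = analogRead(irPin);
  if (reading < tooClose) {
    // No one nearby: Skitzie is a cat - eyes on, head slowly scanning
    digitalWrite(eyePin, HIGH);
    headServo.write(60);
    delay(400);
    headServo.write(120);
    delay(400);
  } else {
    // Someone is approaching: eyes off, head still - pretend to be a scarf
    digitalWrite(eyePin, LOW);
    headServo.write(90);
    delay(200);
  }
}
```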

This is the body that I made around the head servo motor.

All Assembled.

Reflections:

Honestly, the aesthetic of Skitzie didn’t come out right, which I am a little bit disappointed by. The head also keeps falling off if left for too long, so I need to fix that in future. Hopefully we will see the return of a better Skitzie in future.

 

 

References:

  • https://www.google.com/search?q=code+servo+motor+with+sensor&oq=code+servo+motor+with+sensor&aqs=chrome..69i57j0.11575j1j4&sourceid=chrome&ie=UTF-8#kpvalbx=_VbKEXdWDPNjc9QPCsoCIBA25

Final Project: I tried to be a Musician

Looking at my final project, I was rather unsure about what it was I wanted to do, as I was out of ideas.

I spent quite a lot of time thinking about what I wanted to do, and realised that looking for ideas just got me more and more distracted from actually finding one. Hence I decided to build on that feeling of wasting time and make a product that ‘wastes other people’s time’.

The entire project, ‘I tried to be a musician’, is kind of looking at the idea of people doing things for fun, finding ‘useless’ talents that seem entertaining but, at the end of the day, do not add much value to the experience.

I kind of remembered some people making music videos with calculators and squeaky chickens (help, I never realised they are made by the same person). The kind of music videos where you look at them and think ‘wow, you are so talented’ and immediately after, ‘where the heck do you get the time to do this sort of thing’.

So I decided to make a musical instrument too: something easy to understand and grasp and make a jumble of sound with, but hard to actually make anything decent with, prompting others either to try harder to make the product work for them, or to give up instantly after realising it is going to be a waste of time.

I guess this project is really just to emulate that idea of having fun trying to solve a problem, but at the end of the day, you are just wasting time having fun. (Is that considered wasting time? who knows?)

The Making:

So the components of the project are relatively simple. It is a combination of two simple circuits, one for an ultrasonic sensor, and another for a buzzer:

Source: https://howtomechatronics.com/tutorials/arduino/ultrasonic-sensor-hc-sr04/


Source: https://learn.adafruit.com/adafruit-arduino-lesson-10-making-sounds/playing-a-scale

I found that by using a buzzer I can technically code for the entire keyboard, if I find the list of frequencies associated with each tone. By allocating set distance bands between the ultrasonic sensor and my hand, I essentially have a no-touch piano keyboard. Yay.

Taking it a step further, because why not, I tripled the circuit, splitting the sensor and buzzer pairs by octave. We only have two hands, so I wish you luck trying to learn how to play this annoying child >:D
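A sketch of one sensor and buzzer pair along these lines; the HC-SR04 pins, buzzer pin, distance bands and note frequencies here are placeholders rather than my exact values, and the other two pairs simply repeat this with the note table shifted up or down an octave.

```cpp
const int trigPin = 7, echoPin = 8, buzzerPin = 9;   // assumed pins

// One octave of note frequencies (C4..B4); the other pairs shift these by an octave
const int notes[] = {262, 294, 330, 349, 392, 440, 494};

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  // Standard HC-SR04 ping: 10 us trigger pulse, then time the echo
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);   // times out if nothing is in range
  int distance = duration * 0.034 / 2;             // rough distance in cm

  if (distance > 0 && distance < 35) {
    int band = constrain(distance / 5, 0, 6);      // one note per 5 cm band
    tone(buzzerPin, notes[band]);
  } else {
    noTone(buzzerPin);                             // hand out of range = silence
  }
  delay(50);
}
```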

Pictures:

The setup for a single buzzer and ultrasonic sensor pair.

Then there are the external pieces.

I first laser cut my pattern out:

 

Then after that, I spray-painted the pieces:

Videos:

So far there are two:

The range of the sound seems too similar; I need to increase the range to make it more interesting.

Artwork Review: Inter-Mission

The Lapse project is a collection of five projects, all revolving around the theme of lapses in time, in memory and in realities. While I was not able to physically be in the presence of these artworks, reading up on them allowed me to gather an impression.

The projects:

  • VR Lapse
  • Particle Lapse
  • 24 hour Lapse
  • Panorama Lapse
  • Journal Lapse

Of the projects, the last two, Panorama Lapse and Journal Lapse, are not interactive, hence I will not be discussing them.

VR Lapse is a virtual reality simulation that brings the audience to Singapore’s oldest colonial building, only for them to find that it has been digitally erased.

Does Out of Sight, Out of Mind in Singapore lead to Nevermind?

In their interview with popspoken.com, Inter-Mission shared the artists’ concern with how significant art-related artefacts in Singapore are slowly being washed away by the ever-changing landscape.

With that message in mind, I wonder if the project works on someone with no context of the place at all. It is true that these are cultural landmarks; however, I am left drawing a blank when someone tells me ‘Art House’. They were trying to trigger this idea of misplacement, the ‘I am pretty sure there is something missing here’ sort of thought, but if there was no recollection of the place in the first place, can this idea still be drawn out? Does that hinder the experience of the work?

Subsequently, since we are discussing the interactivity of a work, I feel that the interactivity level is quite low. Being placed in an empty, unchanging landscape with nothing to influence is passive, like watching a movie or a slideshow.

The second project adds to the atmosphere of the first. Particle Lapse is more interactive in the sense that it uses the movement of the viewer to create feedback sound for the audience traversing the virtual world, giving them an extra dimension of sound that is meant to confuse them. In this case there is a contributive element that the audience plays in the artwork.

Finally there is 24 Hour Lapse, an installation where visitors from the past 24 hours are projected alongside the present visitors on a CRT monitor. It is interesting how they play with the idea of people from two different times sharing the same space, even if it is only a screen. In terms of interactivity, however, it is again quite passive, as the present visitors cannot influence the already pre-recorded video.

Overall, the Lapse project is not a very interactive project. It works more as a stage that is the artist’s mind, and the audience remains the audience, not participants on that stage. As such we only view their feelings and experience of the idea of a lapse in memory, which is not always universal and hence a bit hard to relate to.

Interactive Art Research: Rain Room

‘Rain Room’ by Random International was featured at the Museum of Modern Art, New York in 2013. It makes use of a 100-square-metre room full of falling water simulating rain, and 3D tracking cameras that capture the motion of visitors passing through the room. Doing so stops the ‘rain’ falling above that particular area, creating a pathway for them to cross.

The work replicates the sound and the smell of rain, creating a sort of white noise that encompasses you along with the rain. It reflects the relationship between humans and nature, which is increasingly regulated by technology. How contrary it is that people would stand and simply contemplate in this artificial downpour versus fleeing an actual one.

What I find particularly interesting about this project is the artists’ statement about creating the room. They said they had created it with no preconceived idea of what kind of reaction they would draw from the audience experiencing their work. In a sense, that unpredictability of reaction itself becomes part of the artwork.

“DON’T RUN!” exclaimed a Museum of Modern Art press rep, as a young woman who had entered the field of falling water in Rain Room, 2012, began to take flight and was promptly soaked.

As quoted from artforum.com’s review of the work, a guest had outrun the motion sensors, temporarily glitching the system, and got drenched when the work did not stop the rain for her. It is amazing how this ‘carefully choreographed downpour’ still had the ability to instil in some the same instincts humans have in the face of an actual downpour, while bringing out a contemplative peace in others.

Video:

Final Project, life update

From last week’s flow chart, En Cui and I worked on creating a mock-up circuit which follows a segment of the flow chart.

Mock Up model 1

We started with a basic microphone set up.

From here, we tested to see if we could get input readings through the serial monitor for the surrounding sounds, and the changes when we spoke into the microphone.
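The test itself was basically just printing the microphone’s analog reading, roughly like this (assuming the module’s analog output is on A0):

```cpp
const int micPin = A0;    // microphone module analog output (assumed)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(micPin);   // 0-1023
  Serial.println(level);            // watch this change when you speak
  delay(20);
}
```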

Problems:

  • Serial Monitor showed that the input was either a ‘high’ at 1022/1023, or a ‘low’ at 0.

Conclusion at this segment:

  • We thought our microphone was iffy

Nonetheless we continued; as the microphone was still able to detect sound, we decided it would be good enough for now and we would solve this issue later.

Mock Up model 2

Subsequently, we added onto the first model to include the LED output.

From here the code was expanded to control an RGB LED and to read the frequency and volume of the surrounding environment. Initially, the mapping was done in a fairly random way: for the last three digits of the frequency, the digit in the hundreds place set the percentage of red, the tens digit the percentage of blue, and the ones digit the percentage of green, which together made up the colour the LED would show.
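Roughly, that first mapping had the shape of the sketch below; the pins and the digit-to-PWM scaling are illustrative rather than our exact code.

```cpp
const int redPin = 9, greenPin = 10, bluePin = 11;   // RGB LED on PWM pins (assumed)

// First pass: take the last three digits of the measured frequency and use the
// hundreds digit for red, the tens digit for blue and the ones digit for green.
void showFrequency(int freq) {
  int hundreds = (freq / 100) % 10;
  int tens     = (freq / 10)  % 10;
  int ones     = freq         % 10;
  analogWrite(redPin,   map(hundreds, 0, 9, 0, 255));
  analogWrite(bluePin,  map(tens,     0, 9, 0, 255));
  analogWrite(greenPin, map(ones,     0, 9, 0, 255));
}

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

void loop() {
  int freq = 473;        // placeholder: whatever frequency the microphone code measured
  showFrequency(freq);
  delay(100);
}
```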

Watch Video at:

Problems:

  • The colour of the LED was coming out a bit too randomly

So from there, we attempted to group ranges of frequencies and match them to colours. We also made it such that the volume is matched to the brightness of the LED.
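The reworked version is closer to this sketch, where the frequency bands and the volume-to-brightness scaling are placeholder values:

```cpp
const int redPin = 9, greenPin = 10, bluePin = 11;   // RGB LED on PWM pins (assumed)

// Second pass: group frequencies into bands for the colour, then scale the
// colour by the measured volume so louder surroundings give a brighter LED.
void showSound(int freq, int volume) {
  int brightness = map(volume, 0, 1023, 0, 255);   // volume -> brightness
  int r = 0, g = 0, b = 0;
  if      (freq < 300)  r = brightness;   // low pitch  -> red   (bands are placeholders)
  else if (freq < 1000) g = brightness;   // mid pitch  -> green
  else                  b = brightness;   // high pitch -> blue
  analogWrite(redPin, r);
  analogWrite(greenPin, g);
  analogWrite(bluePin, b);
}

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

void loop() {
  showSound(440, 512);   // placeholder values where the live mic readings would go
  delay(100);
}
```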

Body Storming

 

After our short discussion, En Cui and I decided to combine the ideas of the talking door and the concept of gaps between multiple conversations to create an interactive hat. The idea of the hat is to create both a visual output (different coloured LEDs for different pitch and/or volume) and an audio output whenever someone speaks. It would let out a different sound depending on the pitch and volume it senses from the surroundings, meaning it considers the environmental sound as a whole.

Watch the video here:

 

What did you learn from the process?

From this process we learnt that our concept is hard for the audience to connect with, so we should make it more direct. The idea of using the object is fairly simple, as it is what it is, which means the idea of a found object is strong enough to have the audience interact with it without much instruction; the reactions can be learnt along the way. We should also make sure that whatever the reaction is stays within the participant’s view, as the lights are currently on top of the hat. Our project is also very context-driven, as it relies on a crowded, noisy area to link to our concept of the gaps in between conversations.

What surprised you while going through the process?

Shout out to our tester, who was especially cooperative :3c. There was a lot of confusion trying to link the project to its concept; it is not directly understood as an individual or group concept, but I guess that is what happens when you only have one tester and your project responds to them whether they are in a group or not. The idea of the hat was for portability, but since it reacts to its environment whether or not it is worn, we got some comments that we might want to change the shape it takes. We are also worried about how to convince the audience that they can grab the object freely.

How can you apply what you have discovered to the designing of your installation?

So we might consider changing the appearance of the artwork. We might tweak the message a bit, and maybe have multiple small things instead of one big thing, to make it less intimidating. Also, Lei said we can use P5.js to do speech-to-text, so we are kind of bombarded with endless possibilities now lol.