Material Cookbook || Chapter 1: Direct and Indirect Pattern Transfer

Materials Needed:

  • Transfer ink for fabric
  • Polyester fabric (the silky kind)
  • Paper
  • Fabric crayons
  • Wax paper
  • Iron/heat press
  • Flat items (feathers, string, lace, etc.)

Instructions: Dry transfer – Direct

 

Step 1: Draw a pattern or image on paper using the fabric crayons.

Step 2: Lay the paper on top of your chosen material, image side down.

Step 3: Place a sheet of wax paper over the stack to protect your cloth from the iron's direct heat. This prevents the cloth from burning or warping.

Step 4: Iron the stack. The colour will start to seep into the cloth when it is ironed properly. The image also takes a while to transfer, so be patient and check it regularly without shifting your image.

 Reflection:

Heating with an iron is uneven, and I removed the heat too early from some parts of the fabric, which resulted in fainter imprints.

Polyester burns and shrinks under high heat. Also, the silkier the fabric, the more vibrant the colours.

Yellow on paper is very bright whilst purple on paper is almost black, yet transferring to the cloth lightens the colours.

 

Instructions: Wet transfer – Indirect

Step 1: Paint your paper with transfer ink and leave it to dry.

Step 2: Arrange your flat objects on top of your chosen cloth.

Step 3: Lay the dried, ink-coated paper on top of your chosen material (over the flat objects), coloured side down.

Step 4: Place a sheet of wax paper over the stack to protect your cloth from the iron's direct heat. This prevents the cloth from burning or warping.

Step 5: Iron the stack. The colour will start to seep into the cloth when it is ironed properly. The image also takes a while to transfer, so be patient and check it regularly without shifting your image.

Reflection:

Flat objects like string move around a lot when you check on the transfer, so be careful when lifting the sheets.

The transfer ink also takes longer than the crayons to heat-transfer, so be patient.

Instructions: Wet transfer – Direct

Step 1: Paint your flat objects with transfer ink.

Step 2: Arrange your flat objects on top of your chosen cloth, painted side down.

Step 3: Place a sheet of wax paper over the stack to protect your cloth from the iron's direct heat. This prevents the cloth from burning or warping.

Step 4: Iron the stack. The colour will start to seep into the cloth when it is ironed properly. The image also takes a while to transfer, so be patient and check it regularly without shifting your image.

Reflection:

In-class example done by Galina.
The positive imprint is an example of direct wet printing.

I did not have the opportunity to play with this, as my string was too flimsy to make a second imprint. However, the ink picked up during the first wet transfer print is enough to transfer a reasonably detailed pattern to the cloth, as seen in the class example.

 

 

 

 

Wearable Tech Final Project

Butterflies

My fashion concept revolves around visualising the feeling of anxiety. Anxiety is sometimes described as butterflies in your stomach, so I wanted to work with butterflies on my outfit. For the dress, the interactivity was meant to show the phases of being anxious: the first, where the butterflies come alive through servo motors, loud like the blood rushing in your ears; the second, the calm of overcoming that anxiety, where the butterflies settle and there is light.

The idea of the dress was supposed to be dark, representing the things lurking in the shadows that you cannot really see, hence I chose black fabric. Eventually, though, to liven up the dress, I added some red accents so that everything is not just one dark mess.

Technology:

For the circuitry, I separated the interactivity into two circuits.

The first is the servo motor, which works like a pulley system to pull the butterfly wings up, while gravity brings them back down.

The second state is the lights, which pulse like a heartbeat. I felt a pulsing light would be nicer as it is softer and calmer.
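To give a sense of how these two behaviours might look in code, here is a rough Arduino sketch (not the dress's actual code): one servo "flutters" a wing by winding and releasing its thread, and a PWM LED fades up and down like a slow heartbeat. The pin numbers, angles and timings are assumptions.

```cpp
#include <Servo.h>

Servo wingServo;          // winds the thread that lifts the wing; gravity drops it
const int LED_PIN = 3;    // assumed PWM pin for the "heartbeat" light

void setup() {
  wingServo.attach(9);    // assumed servo signal pin
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // Anxious state: wind the thread up, then release so the wing falls back.
  wingServo.write(120);   // lift
  delay(300);
  wingServo.write(30);    // release
  delay(300);

  // Calm state: fade the LED up and down like a slow heartbeat.
  for (int b = 0; b <= 255; b += 5) { analogWrite(LED_PIN, b); delay(10); }
  for (int b = 255; b >= 0; b -= 5) { analogWrite(LED_PIN, b); delay(10); }
}
```

(On an Uno the Servo library disables PWM on pins 9 and 10, which is why the LED sits on pin 3 here; on the actual dress the two behaviours lived on separate circuits anyway.)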

I had thought that, if this project existed on its own, it would respond to a heartbeat through a heartbeat sensor. For the fashion show, I wanted to do a collaboration with Pei Wen: when her dress comes close, it would trigger the second part of my dress, where it lights up, kind of representing the comfort a close friend gives, which results in the calm.

Improvements:

I need a bit more support for the butterflies. As the system relies on gravity to let the wings fall, it has to perch horizontally rather than on the side of a wall. So I would like to either remake the butterfly system or make some kind of support.

I would also like to conceal my wires and the butterfly stand better, perhaps by making cloth flowers to cover them.

I would also like to expand the butterfly circuit to the hat portion, so hopefully I can cut or find another way to make more butterflies.

 

For the Fashion Show:

My role was originally to help out with the photo shoot. I didn't really think about it too much because the role sounds straightforward enough, but I could help out in other areas too if help is needed.

Interactive Device Project 2: Skitzie the Cat

Skitzie the Cat is just your average black cat that likes to hang out on your shoulder. They are curious and like to people-watch while you do your stuff. But Skitzie is very shy, and so pretends to be a scarf when anyone comes too close.

(insert hooman wearable sketch)

About Skitzie the Cat

Skitzie is a guardian for those who are not too aware of their surroundings. In a sense, Skitzie's 'hasty retreat' into being a scarf is a warning that people are approaching.

For this project, I imagined Skitzie being able to move their head and their ears to see the world. Skitzie is also envisioned to 'blink' through LEDs and hum through a speaker. I wanted sound or light as an indicator so the person wearing Skitzie knows very clearly when Skitzie is a cat and when they are pretending to be a scarf. The warning has to be distinct enough to catch the wearer's notice.

Skitzie’s hardware

Skitzie is a combination of servo motors and a Sharp infrared proximity sensor.

  1. Testing the Servo Motor:

Website Reference: https://www.instructables.com/id/Arduino-Servo-Motors/
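The servo test is essentially the standard Arduino "Sweep" example from tutorials like the one above; a minimal version looks like this (pin 9 is an assumption):

```cpp
#include <Servo.h>

Servo testServo;

void setup() {
  testServo.attach(9);                         // assumed signal pin
}

void loop() {
  for (int angle = 0; angle <= 180; angle++) { // sweep one way
    testServo.write(angle);
    delay(15);
  }
  for (int angle = 180; angle >= 0; angle--) { // and back
    testServo.write(angle);
    delay(15);
  }
}
```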

  2. Testing the Proximity Sensor:

(image: Sharp IR distance sensor wired to an Arduino)

Website Reference: https://create.arduino.cc/projecthub/jenniferchen/distance-measuring-sensor-900520
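A quick way to check the sensor, in the spirit of the referenced tutorial but using a plain analog read rather than its helper library (the A0 pin is an assumption):

```cpp
const int IR_PIN = A0;          // assumed analog pin for the Sharp IR sensor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(IR_PIN); // higher readings roughly mean a closer object
  Serial.println(raw);
  delay(100);
}
```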

I combined the circuits and then made a head.

Testing out the eyes circuit: it works, turning off and on depending on closeness.
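As a rough sketch of the combined behaviour (not the exact code): if the Sharp IR reading says someone is close, the eye LEDs go off and the head tucks down; otherwise Skitzie stays in cat mode. Pins and the threshold below are assumptions.

```cpp
#include <Servo.h>

const int IR_PIN   = A0;    // assumed Sharp IR pin
const int EYE_PIN  = 8;     // both eye LEDs driven from one pin, for simplicity
const int NEAR_RAW = 400;   // assumed raw reading that counts as "too close"

Servo headServo;

void setup() {
  pinMode(EYE_PIN, OUTPUT);
  headServo.attach(9);      // assumed head servo pin
}

void loop() {
  int raw = analogRead(IR_PIN);
  if (raw > NEAR_RAW) {            // someone is close: become a scarf
    digitalWrite(EYE_PIN, LOW);    // eyes off
    headServo.write(0);            // tuck the head down
  } else {                         // coast is clear: be a cat
    digitalWrite(EYE_PIN, HIGH);   // eyes on
    headServo.write(90);           // head up, people-watching
  }
  delay(100);
}
```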

For some odd reason, though, when I added the ears, the eye LEDs stopped lighting up.

Then it got fixed (connections are problematic, check everythingggg).

This is the body, which I made around the head servo motor.

All Assembled.

Reflections:

Honestly, the aesthetics of Skitzie didn't come out right, which I am a little disappointed by. The head also keeps falling off if left for too long, so I need to fix that in future. Hopefully we will see the return of a better Skitzie.

 

 

References:

  • https://www.google.com/search?q=code+servo+motor+with+sensor&oq=code+servo+motor+with+sensor&aqs=chrome..69i57j0.11575j1j4&sourceid=chrome&ie=UTF-8#kpvalbx=_VbKEXdWDPNjc9QPCsoCIBA25

Final Project: Trio

One musician makes a Solo.

Two musicians make a Duet.

Three musicians make a Trio.

So let's make a Trio.

Three is a weird number; we only have two hands, after all, so with three sensors you would need at least one more person to keep all the sensors occupied and control the sound being made (of course the preferable number is three people, but I do so wish to see two people flailing around).

Why did I create Trio? I guess it was more along the lines of finding a project that seems fun to interact with, but that you also struggle to make work for you. At the end of the day, the device is really just a commentary on how sometimes in life, while you are trying your best to make something work, the end result does not bear fruit, or, even better, bears fruit but there is not really a takeaway, now is there?

In summary, let me waste your time.

The Circuit:

Life update:

Initially, in the previous post, I had built everything out of piezo buzzers and ultrasonic sensors, made to beep at a set tempo, which, to be honest, is not really what I wanted. I want the tempo to be set by removing the object in front of the sensor's face. But after editing the code I realised a major issue: the ultrasonic sensor is being influenced by the buzzer itself.

(insert confused noises) It was steady before? But that was when the ultrasonic sensor only needed to trigger one tone with an interval in between, meaning it was not reading the environment constantly, and hence did not get influenced before.

So at this point it was time to consider changing my components.

(image: Sharp IR distance sensor wired to an Arduino)

Image taken from: https://create.arduino.cc/projecthub/jenniferchen/distance-measuring-sensor-900520

So I tried out the Sharp infrared/proximity sensor. It goes by 'distance sensor' at Continental.

Putting the entire circuit together, a single piezo buzzer, the sensor and an Arduino, the circuit works.
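A sketch of what one sensor-buzzer pair can look like, with assumed pins and ranges: map the Sharp IR reading to a pitch and play it on the buzzer.

```cpp
const int IR_PIN     = A0;   // assumed Sharp IR pin
const int BUZZER_PIN = 8;    // assumed buzzer pin

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  int raw = analogRead(IR_PIN);                // closer object -> higher reading
  if (raw > 150) {                             // something is in range
    int pitch = map(raw, 150, 650, 220, 880);  // rough A3..A5 range (assumed)
    tone(BUZZER_PIN, pitch);
  } else {
    noTone(BUZZER_PIN);                        // nothing in front: stay quiet
  }
  delay(50);
}
```

Worth noting: the standard Arduino tone() function can only generate one tone at a time per board, which lines up with the discovery below.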

However, duplicating the code becomes a bit more troublesome. As it turns out, you cannot fit three piezo buzzers on a single Arduino and expect them to 'sound' at the same time (troubleshooting this the night before and realising you have to duplicate the circuit, not just the code, is really bad for your health, by the way).

I did not account for this tripling of the space needed to store my three Arduinos, three distance sensors, three piezo buzzers and three power packs. It is a super tight fit, mind you. By right I can squeeze everything into the box. But by left, squeezing everything in the night before the showcase and then snapping something is not an ideal situation.

Piecing them together:

In the last post I showed that I laser cut, filed and spray painted my pieces, accordingly:

Since then, I have had to redo the black pieces to fit the Sharp infrared sensor instead of the ultrasonic sensor, hence I cut a rectangular piece instead.

I have also cut some wedges to support the structure.

And I stacked all my pieces up and glued them together with a glue gun.

Finally, I fixed my components inside the structure.

Reflections:

I have achieved what I wanted to in this project: the three sensors play according to the distance between them and the object. However, maybe in future I would like to be a bit more ambitious/annoying and expand the circuit further (time to make a symphony for one, then).

Of course, there are a few more things I should have accounted for, like the size of the circuit, or the potential for expanding it.

Final Project: I tried to be a Musician

Going into my final project, I was rather unsure about what I wanted to do, as I was out of ideas.

I spent quite a lot of time thinking about what I wanted to do, and realised that looking for ideas just got me more and more distracted from actually finding one. Hence, I decided to build on that feeling of wasting time and make a product that 'wastes other people's time'.

The entire project, 'I tried to be a musician', looks at the idea of people doing things for fun, finding 'useless' talents that seem entertaining but, at the end of the day, do not add much value to the experience.

I remembered some people making music videos with calculators and squeaky chickens (help, I never realised they are made by the same person). The kind of music videos where you watch them and think 'wow, you are so talented' and, immediately after, 'where the heck do you get the time to do this sort of thing'.

So I decided: let's make a musical instrument too, something easy to understand, grasp and make a jumble of sound with, but hard to actually make anything decent with, prompting others either to try harder to make the product work for them, or to give up instantly after realising that it is going to be a waste of time.

I guess this project is really just meant to emulate that idea of having fun trying to solve a problem, while at the end of the day you are just wasting time having fun. (Is that considered wasting time? Who knows?)

The Making:

So the components of the project are relatively simple. It is a combination of two simple circuits, one for an ultrasonic sensor, and another for a buzzer:

Source: https://howtomechatronics.com/tutorials/arduino/ultrasonic-sensor-hc-sr04/

(image: buzzer wiring diagram from the Adafruit lesson)

Source: https://learn.adafruit.com/adafruit-arduino-lesson-10-making-sounds/playing-a-scale

I found that, by using a buzzer, I can technically code for the entire keyboard if I find the list of frequencies associated with each tone. By allocating set distances between the ultrasonic sensor and my hand, I essentially have a no-touch piano keyboard. Yay.

Taking it a step further, because why not, I duplicated the code three times, splitting the sets by octave. We only have two hands, so I wish you luck trying to learn how to play this annoying child >:D
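As a rough illustration (assumed pins, scale and distance bands, not the project's exact code), one HC-SR04 and buzzer pair could look like this, with the three boxes differing only in their octave shift:

```cpp
const int TRIG_PIN     = 9;
const int ECHO_PIN     = 10;
const int BUZZER_PIN   = 8;
const int OCTAVE_SHIFT = 0;   // 0, 1 or 2 depending on the box

// One octave of a C major scale, in Hz (C4..C5) -- an assumed note set.
const int SCALE[] = {262, 294, 330, 349, 392, 440, 494, 523};

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // times out if nothing is there
  return duration * 0.034 / 2;                     // microseconds -> cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 40) {                            // hand within "keyboard" range
    int note = constrain(map(cm, 0, 40, 0, 8), 0, 7); // roughly 5 cm per "key"
    tone(BUZZER_PIN, SCALE[note] << OCTAVE_SHIFT);    // doubling = one octave up
  } else {
    noTone(BUZZER_PIN);
  }
  delay(60);
}
```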

Pictures:

The setup for a single buzzer and ultrasonic sensor pair.

Then there are the external pieces.

I first laser cut my pattern out:

 

Then, after that, I spray painted the pieces:

Videos:

So far there are two:

The range of the sound seems too similar; I need to increase the range to make it more interesting.

Red Box, Blue Box: a Mini Project

Red Box, Blue Box is just the start of a bunch of interactive acrylic boxes that light up at each other's touch.

The idea behind Red Box, Blue Box was to have boxes of different shapes and sizes with the ability to connect to and influence each other on any side. However, due to the time restrictions of this project, Red Box and Blue Box are the only two to make an appearance.

The humble origins of the boxes

The two boxes began their humble beginnings as a pair of LED strips attempting to light up.

At the beginning, I never thought of programming one Arduino per box due to my lack of knowledge. Eventually, however, I did manage it for the two.

In the starting stages I used aluminium foil switches, as I wanted the circuit to close on touch, bringing out the idea of contact rather than a button. However, there were suggestions to use a pair of magnets instead of aluminium foil, so I changed the switch to be made of magnets. I prepped the shell of the box with two holes where the magnets would be slotted, not wanting to breach the surface for aesthetic reasons.

I drew out a circuit halfway, only to realise later that that is not how a circuit works. Beyond the circuit, I also realised I was not using the magnet switches properly. It turns out that magnets, despite being able to attract their counterparts across two 4 mm acrylic pieces, do not actually pass a signal through them (sad non-physics student Elizabeth learns the hard way; thank you Dan Ning for the physics lesson).

But alas, the first method tested, aluminium foil, actually proved to be the most effective way of passing the signal, resulting in the final product.

I started looking for other circuits to reference, one being connecting two Arduinos as such:

Picture of How to Connect Them

But I realised the accompanying code relied more on one Arduino than the other, which is not what I am looking for.

In the end, I was referencing this circuit.

And creating a simple button circuit.

I repeated the circuit for all six sides of the hexagon.
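A minimal sketch of what one side's contact switch can look like, assuming the foil pad is wired like a button to ground and read with the internal pull-up (pins are assumptions; the real boxes repeat this for each side):

```cpp
const int CONTACT_PIN = 2;   // foil pad, wired like a button to ground (assumed)
const int LED_PIN     = 5;   // LEDs (or their driver) for that side (assumed)

void setup() {
  pinMode(CONTACT_PIN, INPUT_PULLUP);  // reads LOW when the foil pads touch
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(CONTACT_PIN) == LOW) {
    digitalWrite(LED_PIN, HIGH);       // boxes are touching: light up
  } else {
    digitalWrite(LED_PIN, LOW);
  }
}
```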

Red Box Blue Box in action

 

Final Life Update: Hear/Here (Colours of the Wind)

Video:

Short Essay:

This project revolves around the idea of the gaps between noise/sound, so we created a portable device that samples the overall surrounding sound and, in response, lights an LED in a corresponding colour. The colour is based on a calculation where 'red' is volume, 'green' is pitch (regardless of octave) and 'blue' is pitch (exact octave). Red and blue were scaled to fit a range of 0 to 255; for green, however, five ranges were created, skewed so that the range for a humanly possible pitch is larger than that for a pitch outside the human range. The code uses an array to store the data for each pixel until all nine pixels have been used up, after which the next reading overwrites the oldest pixel.

References for the code:

  • Origin of basic-ass code (which is no longer here): https://www.teachmemicro.com/arduino-microphone/
  • Origin of getAmplitude code: https://learn.adafruit.com/adafruit-microphone-amplifier-breakout/measuring-sound-levels
  • Origin of getFrequensea code: https://www.norwegiancreations.com/2017/08/what-is-fft-and-how-can-you-implement-it-on-an-arduino/
  • Origin of NeoPixel code: https://learn.adafruit.com/adafruit-neopixel-uberguide/arduino-library-use
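As a rough sketch of the mapping and the nine-pixel history described above (not our actual code): the getAmplitude/getFrequency stubs stand in for the referenced Adafruit and FFT routines, and the pin, thresholds and green bands are assumptions, with raw frequency used as a simplified stand-in for the pitch-class calculation.

```cpp
#include <Adafruit_NeoPixel.h>

const int PIXEL_PIN  = 6;                 // assumed data pin
const int NUM_PIXELS = 9;
Adafruit_NeoPixel strip(NUM_PIXELS, PIXEL_PIN, NEO_GRB + NEO_KHZ800);

int pixelIndex = 0;                       // next pixel to overwrite (wraps around)

float getAmplitude() { return analogRead(A0); }  // stub: loudness, 0..1023
float getFrequency() { return 440.0; }           // stub: dominant pitch in Hz

void setup() {
  strip.begin();
  strip.show();                           // all pixels off
}

void loop() {
  float amp  = getAmplitude();
  float freq = getFrequency();

  // Red: volume, scaled to 0..255.
  int red = constrain(map((long)amp, 0, 1023, 0, 255), 0, 255);

  // Green: five bands, skewed so the human vocal range gets finer steps
  // (band edges here are assumptions).
  int green;
  if      (freq <  200) green = 25;
  else if (freq <  400) green = 90;
  else if (freq <  800) green = 160;
  else if (freq < 1600) green = 220;
  else                  green = 255;

  // Blue: the exact pitch, scaled to 0..255.
  int blue = constrain(map((long)freq, 0, 4000, 0, 255), 0, 255);

  // Write the newest colour over the oldest pixel, ring-buffer style.
  strip.setPixelColor(pixelIndex, strip.Color(red, green, blue));
  strip.show();
  pixelIndex = (pixelIndex + 1) % NUM_PIXELS;

  delay(900);                             // roughly one pixel per second of history
}
```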

 

Our work takes reference from works like 'Pulse Index' by Rafael Lozano-Hemmer. It is similar in the sense that it records the viewers' input, in their case thumbprints, in our case sound, and records it on a visual plane to show the changes over time.

Rafael Lozano-Hemmer, "Pulse Index", 2010. "Time Lapse", Site Santa Fe, New Mexico, 2012. Photo by: Kate Russel

Characteristics of Interface:

Classification of interface:

Our project falls under 'User is one of many' and 'User is valued'. It values the unity of environmental sound: your sound is captured within this collective and you cannot discern what is your sound and what is the environment's, hence the 'user is one of many' part. However, 'user is valued' is also present, in that the user is the anomaly that creates the most change when they interact with the device directly.

Characteristics of interface:

Our project falls under 'Monitored and reflected experience' as well as 'Intuitive selection/results relationship'. For the former, the device collects the environmental sound and shows a colour representation, so all interactions are copied and shown directly based on the sounds that you make. The latter is true because, when viewers see the light changing with sound, they automatically try to interact with it to see how far it will change, which leads them to look for the gaps between the sounds they make as they watch the different coloured representations of each instance of sound.

Structure of Interface:

Based on the flow chart, our project complies with everything except the last item, 'Linear selection'. The first idea, open structure, is seen in the way we made our device portable. The second, 'Feedback provided', comes in the form of LED lights lit in accordance with the sound of the environment and of the people within it interacting with the device. The third, 'Constant elements providing continuity', holds since the setup is designed to reflect the sound every (how many seconds). Finally, selections are recorded in nine LED pixels, showing eight seconds of the most recent environmental sounds.

(Liz finally answered the question yay)

Who did what:

The coding for this project was done by En Cui, and the physical fabrication of the device was put together by me (Elizabeth) (though in the end Liz kind of screwed up a lot of the soldering and needed En Cui and Lei's help to put everything together. Thank youuu).

Process:

From the initial stage of manually making LEDs light up by pressing buttons whenever someone made a sound, we moved on to a circuit where the LED lights up in a certain colour according to the environmental sound.

After that, we used this circuit as a reference and moved from a single RGB LED to an LED strip. That way we could create a setup where the colour of a certain period of time could be recorded and compared to the previous period.

yay the LED lights up.

Measuring the length of wire for the glove.

This is where problems started surfacing with the soldering, so there was a redo (soldering-wise and circuit-wise, sob).

Testing out the Circuit.

Yay it’s done.

After Review:

Everyone reacted to the work as we hoped they would, despite there being only two participants. They crowded around and tried to put in their own input by making noises around the two. We did, however, get comments that the feedback is not fast enough to show the exact inflection of the voice as one is speaking, and hence is not very obvious. We forgot to mention this during the review, but the delay is also constrained by technical limitations: if we reduce the delay, we need more LEDs to represent the same amount of time, and the Arduino's memory overloads at 13 LEDs. Additionally, even at delay(0), the Arduino still cannot run fast enough to get the desired result:

Because of the delay, the theme of this work might not be very obvious for viewers to pick up on. The eventual solution may thus be to use something with more processing power.

There were also comments about how viewers felt they were working very hard to satisfy the device. Some said it seemed like a prop for band or choir performances, or a tool for training yourself to hit an exact pitch.

Summary Reflection:

EC needs to actually know when something is not possible rather than maybe possible.

Liz should not be so innovative. Liz is just not good with technology.

We should have thought out the final form better.

Extended Concluding thoughts (if you want to read about our woes):

En Cui’s Reflection:

Concept-wise, the challenge was that the core concept and form were not well aligned. While we talked through several issues, there's still the challenge of the interstice being unclear. But I think, in the end, the clarity of the message depends on how you interact with the wearable. For example, the distinction is much clearer if you experience the wearable in multiple contexts rather than just one.

Regarding the code and circuit, it was actually mostly okay. While things didn't always work, the only solution needed was to observe the problem, deduce the possible reasons for its occurrence, then test my hypotheses one by one. Examples include mathematical errors and faulty wiring. I also did soldering part 2 for the microphone, and honestly the solution was just learning to recognise patterns of problems and solutions from past mistakes, such as the solder not sticking to the iron (wipe it more) or getting my fingers burnt (plasters).

I also realise after a full day of reflection that I’m just incompetent at doing group work efficiently. Leaving me in charge is a generally bad idea.

Elizabeth’s Reflection:

For the most part I felt very challenged by the project, especially since it was the first time we were putting components together to make a circuit. For the physical fabrication portion, it was the first time I used a soldering iron, and my circuit looked very ugly after that; I don't really think I improved in that aspect very much even after multiple attempts 🙁 When using the hot glue gun to insulate the exposed solder, I think I made the circuit worse, because there was already a build-up of solder.

Also, I apparently did not solder the circuit down the right way. You can only solder your wires to one side of the LED strip, because it is fickle and likes to have its signal flowing in one direction. Also, do not solder and hot glue your circuit until you are 100% sure it works; it saves you a lot of heartache and time (thank you Lei and En Cui for dealing with my screw-ups D;).

I also made a few mistakes by accidentally piercing the LED strip's digital pins, thinking I could sew it down that way. Thinking about it now, I should have known better than to pierce any part of the components.

Speaking of computers, I feel very attacked by my own computer, since I think it has issues running the code we shared over Google Docs. It gave me a heart attack that I might have short-circuited the only RGB LED in the starter pack, and the circuit still refused to light even after I confirmed that I had not. I think there is something wrong with my computer DX. I will either leave the computer testing to En Cui or find a school computer for this (pick the right computer, not all of them have the Arduino IDE).

If we had a bit more time and I had a bit more skill in soldering, we would have liked more LED lights to reflect the change in sound.

 

Final Project, life update

Following last week's flow chart, En Cui and I worked on creating a mock-up circuit that follows a segment of the flow chart.

Mock Up model 1

We started with a basic microphone setup.

From here we tested whether we could get input readings of the surrounding sounds through the Serial Monitor, and see the changes when we spoke into the microphone.
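The first mock-up boils down to something like this (the mic module on A0 is an assumption):

```cpp
const int MIC_PIN = A0;   // assumed analog pin for the microphone module

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(analogRead(MIC_PIN));  // value should wobble when you speak
  delay(50);
}
```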

Problems:

  • The Serial Monitor showed that the input was either a 'high' at 1022/1023 or a 'low' at 0.

Conclusion at this segment:

  • We thought our microphone was iffy

Nonetheless we continued; as the microphone was still able to detect sound, we decided it would be good enough for now and we would solve this issue later.

Mock Up model 2

Subsequently, we added onto the first model to include the LED output.

From here, the code was expanded to control an RGB LED and to read the frequency and volume of the surrounding environment. Initially, the mapping was done in a fairly random way: for every three digits of the frequency, the digit in the hundreds place set the proportion of red, the tens the proportion of blue, and the ones the proportion of green, which together made up the colour that the LED would show.
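A hedged reconstruction of that first mapping, assuming a common-cathode RGB LED on three PWM pins, with each digit treated as a fraction of full brightness (pins and the demo frequency are assumptions):

```cpp
const int RED_PIN   = 9;
const int GREEN_PIN = 10;
const int BLUE_PIN  = 11;

void showColourFromFrequency(int freq) {
  int hundreds = (freq / 100) % 10;            // hundreds digit -> red
  int tens     = (freq / 10)  % 10;            // tens digit     -> blue
  int ones     =  freq        % 10;            // ones digit     -> green
  analogWrite(RED_PIN,   map(hundreds, 0, 9, 0, 255));
  analogWrite(BLUE_PIN,  map(tens,     0, 9, 0, 255));
  analogWrite(GREEN_PIN, map(ones,     0, 9, 0, 255));
}

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
}

void loop() {
  // In the real mock-up the measured frequency was fed in here;
  // 523 is just a stand-in value for illustration.
  showColourFromFrequency(523);
  delay(500);
}
```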

Watch Video at:

Problems:

  • The colour of the LED was coming out a bit too randomly.

So from there, we attempted to group ranges of frequencies and match each to a colour. Subsequently, we made it so that the volume is matched to the brightness of the LED.