Audio Experience – Seeds

Seeds

Can seeds talk? What happens when they all “talk” together? Would it be chaotic, relaxing, or disruptive? This project explores the use of seeds to create an audio experience that triggers ASMR (autonomous sensory meridian response), a relaxing, often sedative sensation that begins on the scalp and moves down the body. Also known as a “brain massage,” it is triggered by placid sights and sounds such as whispers and crackles.

An audio experience that (hopefully) soothes and relaxes your mind :) 

  1. Please listen with earphones 
  2. Please listen with your eyes closed :)

 

Script

  • (CHIA SEED) Roll around from right ear to left ear and end at the right ear (x2) 
  • Roll down right ear
  • Roll down behind the head
  • Roll down the left ear
  • Roll down the front of the head

 

  • Transition to (BIRDSEED)
  • Roll up to the top

 

  • Roll around from right ear to left ear to the right again

 

  • Open container

 

  • Drop seeds from above
  • Drop seeds from left
  • Drop seeds from right

 

  • Circle around the head with the seed tray

 

  • BOOM in your face

 

References:

https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/asmr-videos-youtube-trend

 

Flow.mo: Final Performance Documentation

Done by Daryl, Yenee, and Ashley

Flow.mo

Trailer:

Location: Truss Room

Flow.mo is a performance inspired by ‘Butoh’ dance. ‘Butoh’ is frequently regarded as surreal and androgynous, and focuses on primal expressions of the human condition rather than physical beauty. The performance involves a conductor (one of us) who controls the rhythm of the backing track and the instruments playing in the space. Three other performers each control an instrument, and one controls the light projection, for a total of five performers. We hope to encourage our performers to move with their feelings.

 

Technologies used

Devices used: 3 computers, 5 phones, 3 projectors, speaker

 

ZigSim: to obtain values such as gravity, acceleration, gyro, and 2D touch from the phones

 

TouchDesigner: for light projection, and as the “middle man” between ZigSim and Ableton.

One computer takes in values from 3 phones (each of which controls one instrument). It determines which note to play based on how high or low the phone is held (gravity values). Three performers hold one phone each.

These values are sent to another computer that controls the sounds and rhythm in Ableton. The rhythm is likewise controlled by how high or low the phone is held (gravity values). This phone is held by the conductor (Daryl).

The light particles move according to the gravity, acceleration, and gyro values from the last phone.
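To make the note-selection step concrete, here is a minimal sketch of the kind of mapping described above: the phone’s gravity reading (streamed by ZigSim as a vector in units of g) is turned into one note of a fixed scale, so raising the phone raises the pitch. The function name, the choice of axis, and the pentatonic scale are my own assumptions for illustration, not the actual TouchDesigner network.

```python
# Hypothetical sketch of "phone height picks the note".
# ZigSim streams a gravity vector; we assume one axis in [-1, 1]
# (phone pointing down .. phone pointing up) selects the note.

PENTATONIC = [60, 62, 64, 67, 69, 72]  # C major pentatonic, as MIDI note numbers


def gravity_to_note(gravity_y, scale=PENTATONIC):
    """Map a gravity-axis value in [-1.0, 1.0] to a note in the scale:
    the higher the phone is tilted/held, the higher the note."""
    # Clamp noisy sensor values to the valid range, then normalise to [0, 1]
    clamped = max(-1.0, min(1.0, gravity_y))
    t = (clamped + 1.0) / 2.0
    # Pick a note; the very top of the range maps to the last note
    index = min(int(t * len(scale)), len(scale) - 1)
    return scale[index]
```

In the real setup this mapping would live inside TouchDesigner, which then forwards the chosen note to Ableton (typically as MIDI), but the core idea is just this one-dimensional lookup.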

 

Ableton: to play the backing track and instruments

 

Flow of performance

To ease our performers into performing, one dancer (Daryl) moves together with them and controls the rhythm of the performance. Instructions are also given throughout the performance by a speaker (Yenee). The conductor (Ashley) plays certain tracks (intro sound, breathing sound, solo tracks for each instrument) based on the instructions given by the dancer/speaker.

The instructions can be found here: Flow Motion Script

 

Video documentation

Full performance:

 

References

https://www.japantimes.co.jp/culture/2016/05/28/books/book-reviews/butoh-dance-death-disease/

Flow.mo Update

Tech updates:

We managed to take values from ZigSim, process them in TouchDesigner, and translate them to Ableton to play the notes of different instruments.

Venue: Truss Room 

Participants: 5 (1 controls the light, 1 controls the beat/drum, and the other 3 each control a different instrument)

Technology needed: 5 phones, 1 projector, speakers, and possibly 2–3 computers

Moodboard for music: 

The expected flow of performance: 

  • A device (phone) will be attached to each participant depending on the instrument they are controlling. (E.g. the person controlling the tempo needs less mobility, so the device will be attached to their forearms/head; a person controlling one instrument needs more mobility, so the device can be attached to their palms.)
  • Each participant will enter the room and be asked to lie on the floor.
  • Instructions will be given through a voice track initiating the performance (to mentally prepare participants to enjoy the performance and move with their feelings).
  • A backing track will be played throughout the whole performance.
  • Participants will move their bodies; different movements and positions will trigger their instrument to change note, or the light projection to change colour.
  • The end of the performance will be indicated by the light projection turning black and the volume of the instruments, the beat, and the backing track gradually decreasing to silence.

Weekly Plan:

Week 11: Refinement (Values from Zigsim + Music Refinement)

Week 12: Testing out all components (not at actual location)

Week 13: Performance at the actual location

 

Instructional Art: Individually Together

Individually Together

Individually Together is a project that showcases the creations of participants in a way that connects them all together. To do that, participants are given a template, created by me, to fill in. How they fill in the spaces is entirely up to them!

A base video (shown later) was made to determine the total number of frames needed for the final piece. The video is 20 seconds long and plays at 3 frames per second, so 60 frames are needed in total. These are split into sets of 4 frames, which means I needed to find 15 participants.
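The frame budget above can be checked with a few lines of arithmetic (all the numbers come from the text; the variable names are just for illustration):

```python
# Frame budget for the stop-motion piece
duration_s = 20        # total video length in seconds
fps = 3                # playback speed in frames per second
frames_per_set = 4     # frames handed to each participant

total_frames = duration_s * fps                        # 60 frames in total
participants_needed = total_frames // frames_per_set   # 15 participants
```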

Participants who agreed to take part in this project received a zipped folder containing: a “Read me” image with the instructions, a “Final Photos” folder, and a “Frames” folder with the frames/templates to be drawn on.

“Read me” image containing the instructions
Example of what one frame looks like

Participants are given 5 days (Wed to Sun) to fill up the spaces and send them to me once they are done. 

Some process photos by the participants!!

Example of what a finished frame looks like

The final frames returned by the participants are all unique and personal to them. The participants also had fun guessing who drew which frame after seeing the final video, as they could identify each creator’s personality and style through their creations.

Final Video

Assignment 1C: Expressive Typography

Version 1

Magnify

 

Bloom
Fall
Collapse
Shoot
Scatter
Invisible
Waves
Fade
Moving

Final Version

Magnify – Increased the scale of the “magnifying glass”
Bloom – Included “flowers” that have not yet bloomed to fill up the space
Collapse – Increased the size of the object that starts the fall; added falling particles too
Fall – Decreased the size of “F LL”. Repeated the letter “A” with size changes from small to big to show that it is falling towards the viewer
Shoot – Increased the size of the hoop and changed its type to make it more obvious
Scatter – No change
Invisible – Made use of the space and decreased the amount of black on the letters to make them more “invisible”
Waves – No change
Fade – No change
Moving – Changed the layout and made use of the space

Zoom Performance: Sharing

Sketch

During the circuit breaker period, people sent food to each other using food delivery apps like Grab to show appreciation, love, and encouragement. Since we couldn’t be physically there for each other and were always video calling, I thought it would be fun to use Zoom’s green screen function to make it seem like we were physically together.

Person 2 will use the pre-recorded video as their green screen background, and Person 1 (the one eating) will appear in the middle of the video to share the food with Person 2. This makes it look as if Person 1 is physically present at Person 2’s location, as if Person 1 had cloned and teleported herself. Person 2 has to act as if Person 1 is actually there, responding to the pre-recorded video.

Final Performance with Daryl & Gwendolyn

“Stop motion” performance with Yixue