
Interactive II Progress (28th Mar)

 

Winzaw and I looked at some tutorials we found on YouTube and tried to combine parts of them.

The patch records a voice for 8 seconds and plays it back twice with a robotic tone, and this audio playback drives the particles to generate visual feedback. Afterwards the process repeats automatically. We had to create a randomizer to vary the playback so that the voice is altered dynamically each time.
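The actual work is a Max/MSP patch, but the record-then-randomized-playback logic can be sketched in Python. This is only an illustration of the idea; the function names (`resample`, `robot_playback`) and the rate range are our own, not part of the patch.

```python
import random

def resample(samples, rate):
    """Linear-interpolation resample: rate > 1 plays back faster (higher
    pitch, shorter output), rate < 1 slower (lower pitch). This mimics a
    varispeed playback of the recorded buffer."""
    n = int(len(samples) / rate)
    out = []
    for i in range(n):
        pos = i * rate
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def robot_playback(recording, repeats=2, rate_range=(0.7, 1.4)):
    """Play the 8-second recording back `repeats` times, each at a random
    rate -- the randomizer that alters the voice dynamically."""
    return [resample(recording, random.uniform(*rate_range))
            for _ in range(repeats)]
```

In the patch itself this would correspond to a record buffer feeding a playback object whose speed parameter is set by a random number generator before each pass.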

This was to explore what we can do with particles and audio feedback, and how to combine the two. We will see how this can be applied to our ideas for the project.

 

To see the rest of our progress:

21st Mar   28th Mar   31st Mar   4th Apr   11th Apr   14th Apr

Final submission

Interactive II Project Proposal (21st Mar)

Our team consists of Winzaw and Fabian.

Here is our progress in chronological order:

21st Mar   28th Mar   31st Mar   4th Apr   11th Apr   14th Apr

Final submission

Main Idea:

Our project will be split into two separate endeavors which will eventually be combined into a seamless interactive experience. We are interested in distortions of sound and image that respond to each other in a cohesive manner. This video is an example to illustrate.

Aims:

  1. Sound. We want the patch to be constantly recording and playing back what people say to it. This will probably be the basis for interaction: no buttons or sliders, purely speaking to the patch.
  2. Visual. Based on the pitch and frequency of the sounds being played back, we will get Jitter to generate visuals, either in the form of particle systems or as real-time distortion of the images captured via webcam.
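The audio-to-visual mapping in aim 2 can be sketched as a function from an audio frame to particle parameters. This is a hypothetical Python illustration of the mapping we intend to build in Jitter; the names (`rms`, `particle_params`) and the specific count/size formulas are our own assumptions.

```python
def rms(frame):
    """Root-mean-square amplitude of one audio frame (samples in -1..1)."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def particle_params(frame, max_particles=500):
    """Map a frame's loudness to particle system parameters: louder
    audio means more, larger particles. A stand-in for the Jitter-side
    mapping; the formulas here are illustrative only."""
    level = min(rms(frame), 1.0)              # clamp loudness to 0..1
    return {
        "count": int(level * max_particles),  # louder -> more particles
        "size": 1.0 + 4.0 * level,            # louder -> bigger particles
    }
```

A pitch-tracking object could feed a similar mapping for color or velocity, so that both loudness and frequency shape the visuals.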

 

Timeline:

  • 28th Mar. Working patch for sound recorder and playback (with/without distortion)
  • 4th Apr. Working patch for visuals (i) in terms of particle systems responding to the sound recorded or (ii) in terms of distortion of the webcam grabbed image (if that is possible)
  • 11th Apr. Connected patch combining the two endeavors, with fine-tuning of the timing and sequencing for interaction