Generative Artwork – Approaches for sonic synthesis from visual data

Building on the sound generation system from my generative sketch:

MediaMob – Heliostat Field

RECAP for Generative Artwork:

Aiming to extract sound from the visual textures of stone involves two aspects:
1. Abstracting the three-dimensional forms of the material into visual data that can be converted into audio/sound
2. Designing a model that generates sounds the audience would perceive as, or associate with, the tactile qualities of the specific material.

POSSIBLE APPROACH

  1. Obtaining visual data from a camera (Kinect)
    – Depth value, x, y
    – 3D scanning – line/wave data
    – Breaking 3D forms into grains
  2. Visual textures to sound – Wavetable Synthesis (see the scanline-to-wavetable sketch after this list)
    – Ableton Live
  3. Granular Synthesis – complex sonic textures (see the granulation sketch after this list)
  4. Real-time generation of sound – OSC to SuperCollider (see the OSC trigger sketch after this list)
    i. When the depth value crosses a set threshold, the wavetable is triggered.
    ii. Granular synthesis driven by the spatial data of the object
    iii. When a data point in 3D space is triggered, a specific sound/virtual synthesizer is played.
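As a minimal sketch of steps 1–2, the following Python shows how one row of a depth frame could become a single-cycle wavetable and be played back. The placeholder depth array, the 2048-sample table length and the output file name are my assumptions rather than part of the existing setup; a real Kinect frame (e.g. via libfreenect/OpenNI) would replace the placeholder.

  import numpy as np
  from scipy.io import wavfile

  SR = 44100          # audio sample rate
  TABLE_SIZE = 2048   # wavetable length (assumed)

  def scanline_to_wavetable(depth_frame, row):
      """Turn one row of depth values into a single-cycle wavetable."""
      line = depth_frame[row].astype(np.float64)
      line -= line.mean()                       # centre around zero
      peak = np.abs(line).max()
      if peak > 0:
          line /= peak                          # normalise to [-1, 1]
      x_old = np.linspace(0, 1, len(line))
      x_new = np.linspace(0, 1, TABLE_SIZE)
      return np.interp(x_new, x_old, line)      # resample to a fixed table length

  def play_wavetable(table, freq=110.0, dur=2.0):
      """Naive wavetable lookup: read through the table at a given frequency."""
      n = int(SR * dur)
      phase = (np.arange(n) * freq * TABLE_SIZE / SR) % TABLE_SIZE
      return table[phase.astype(int)]

  # placeholder depth frame standing in for a real Kinect capture
  depth = (np.random.rand(480, 640) * 2047).astype(np.uint16)
  table = scanline_to_wavetable(depth, row=240)
  audio = play_wavetable(table)
  wavfile.write("stone_scanline.wav", SR, (audio * 32767).astype(np.int16))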
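For step 3, a rough granular-synthesis sketch in NumPy, where normalised spatial data points decide where each grain is read from in a source buffer. The grain length, grain count and the stand-in source/positions are assumptions.

  import numpy as np

  SR = 44100

  def granulate(source, positions, grain_ms=60, n_grains=200, out_dur=4.0):
      """Overlap-add short Hann-windowed grains taken from `source`.

      `positions` are values in [0, 1] (e.g. normalised depth or x/y/z data)
      that choose where in the source each grain starts.
      """
      grain_len = int(SR * grain_ms / 1000)
      window = np.hanning(grain_len)
      out = np.zeros(int(SR * out_dur) + grain_len)
      for i in range(n_grains):
          pos = positions[i % len(positions)]
          start = int(pos * (len(source) - grain_len))
          grain = source[start:start + grain_len] * window
          onset = np.random.randint(0, len(out) - grain_len)   # scatter grains in time
          out[onset:onset + grain_len] += grain
      peak = np.abs(out).max()
      return out / peak if peak > 0 else out

  # the source could be the scanline audio above; positions could be depth values
  source = np.sin(2 * np.pi * 220 * np.arange(SR) / SR)   # stand-in source buffer
  positions = np.random.rand(64)                          # stand-in spatial data
  texture = granulate(source, positions)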
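For step 4, a minimal sketch of the depth-threshold trigger sent over OSC, using the python-osc package and SuperCollider's default language port 57120. The /trigger address, the 0.5 threshold and the responding OSCdef on the SuperCollider side are assumptions, not an existing patch.

  from pythonosc.udp_client import SimpleUDPClient

  # SuperCollider's sclang listens on port 57120 by default
  client = SimpleUDPClient("127.0.0.1", 57120)

  THRESHOLD = 0.5   # assumed normalised depth threshold

  def on_depth_sample(x, y, depth):
      """Send a trigger when the depth value at (x, y) crosses the threshold."""
      if depth > THRESHOLD:
          # SuperCollider side (assumed): OSCdef(\trig, { |msg| ... }, '/trigger')
          client.send_message("/trigger", [float(x), float(y), float(depth)])

  # example: one incoming (x, y, depth) sample from the camera stream
  on_depth_sample(320, 240, 0.72)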

DIRECTION

  • Studying ‘Classifications of sound textures’ to design sounds that relate to tactile qualities
  • Granular Synthesis tutorial
  • Feedback from guest lecture
