Generative Artwork – Stone

CONCEPT 

What do materials sound like? While sound is subjectively perceived, there is a collective sentiment about the descriptive qualities of a sound or soundscape. We occasionally use tactile qualities (e.g. rough, soft) to describe sounds, assigning textures to what we hear. In music, texture is the way tempo, melodic and harmonic materials are combined in a composition, determining the overall quality of the sound in a piece. However, most sounds that we hear are more complex than simple harmonies; there is a more complicated process behind how we perceive and cognitively recognise textures in sound. My project explores how the visual textures of physical materials (stones) can be translated to the auditory, creating an interactive system that draws out the direct link between sound and texture.

CASE STUDY – Classification of Sound Textures

Saint-Arnaud, N. (1995). Classification of sound textures (Doctoral dissertation, Massachusetts Institute of Technology).

http://alumni.media.mit.edu/~nsa/SoundTextures/

“The sound of rain, or crowds, or even copy machines all have a distinct temporal pattern which is best described as a sound texture.” (Saint-Arnaud, 1995). Sounds with such constant long-term characteristics are classified as sound textures. The thesis investigates aspects of both the human perception and the machine processing of sound textures.

“Between 1993 and 1995, I studied sound textures at the MIT Media Lab. First I explored sound textures from a human perception point of view, performing similarity and grouping experiments, and looking at the features used by subjects to compare sound textures.

Second, from a machine point of view, I developed a restricted model of sound textures as two-level phenomena: simple sound elements called atoms form the low level, and the distribution and arrangement of atoms form the high level. After extracting the sound atoms from a texture, my system used a cluster-based probability model to characterize the high level of sound textures. The model is then used to classify and resynthesize textures. Finally, I draw parallels with the perceptual features of sound textures explored earlier, and with visual textures.”

His approach to sound textures from the perspectives of both subjective human perception and technical synthesis/ processing is especially relevant to my project. The difference is that the thesis explores the perceptual and machine constructs of existing sound textures, while I am trying to generate sounds that could be perceived as/ associated with the actual tactile qualities of a material. My process (associating texture to sound) works in the opposite direction from his (associating sound to texture).

Extracting a sound from the visual textures of stone involves two aspects:
1. Abstracting the three-dimensional forms of the material into visual data that can be converted into audio/ sounds
2. Designing a model that generates sounds that the audience would perceive as/ associate with the tactile qualities of the specific material.

PROCESS

While his project was conducted on a large scale with an abundance of time and resources (the MIT Media Lab), it also took place a while back (1993–1995). With the audio-generation software and sound visualisation/ analysis technologies now available, my project should be feasible in the short span of time that we have (six more weeks to Week 13).

Visual Textures to Sound – Wavetable Synthesis

I previously explored wavetable synthesis in SuperCollider (morphing from one wavetable to another), but I was not able to visualise a combined three-dimensional model of the periodic waveforms alongside the sound produced (only singular waveforms). To design a model for sound textures, I would need to be able to connect the sound I hear to a 3D wavetable. I will have to experiment more with sound synthesis to decide on the type and range of sounds I would like to generate. Based on the paper above, I would look at perceptual features of sound textures to work out how I could associate a tactile quality/ mood with what we hear.
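As a starting point, here is a minimal sketch of what this could look like in SuperCollider: a stack of wavetables swept with VOsc, so the morphing between tables becomes audible. The buffer sizes and harmonic weights below are placeholder assumptions, not final design decisions.

    // Allocate 8 consecutive buffers, one wavetable each.
    // Osc/VOsc need buffers in wavetable format, so a 1024-sample
    // Signal becomes 2048 samples after asWavetable.
    ~tables = Buffer.allocConsecutive(8, s, 2048);

    // Fill each table with progressively richer harmonic content.
    ~tables.do { |buf, i|
        var sig = Signal.sineFill(1024, Array.fill(i + 1, { |j| 1 / (j + 1) }));
        buf.loadCollection(sig.asWavetable);
    };

    // Sweeping the buffer-position input of VOsc morphs smoothly
    // between adjacent wavetables.
    { VOsc.ar(MouseX.kr(~tables.first.bufnum, ~tables.last.bufnum - 0.001), 110, 0, 0.2) ! 2 }.play;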

Ableton Live 10: Sound Design with Wavetable

Obtaining visual data/ 2D forms from 3D material – 3D scanning and processing 

To generate data in real-time based on the stone used and its movement, I would need software to process the data captured by the Kinect/ camera (RGB data/ 3D scanning).

Possible software: 3D Scanner Windows 10 App, OnScale, CocoaKinect

Kinect 3D scanning in OSX with CocoaKinect

http://mskaysartworld.weebly.com/modeling-and-animation-basics.html
https://www.extremetech.com/extreme/236311-researchers-discover-how-to-shape-sound-in-3d

Researchers discover how to shape sound in 3D

I would have to obtain 2D line vectors/ waves from a 3D scan and use them to compose a 3D wavetable to be played, ideally in real-time.
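Assuming the scan lines arrive as arrays of depth values (one per horizontal slice of the scan; ~scanLines below is a hypothetical placeholder for that data), loading them as a wavetable stack could look something like this:

    // Hypothetical: ~scanLines is an Array of Arrays of depth values,
    // one sub-array per horizontal slice of the scan.
    (
    ~scanBufs = Buffer.allocConsecutive(~scanLines.size, s, 2048);
    ~scanLines.do { |line, i|
        // Resample each line to 1024 points, normalise to -1..1,
        // and load it as one wavetable in the stack.
        var sig = Signal.newFrom(line.resamp1(1024).normalize(-1.0, 1.0));
        ~scanBufs[i].loadCollection(sig.asWavetable);
    };
    )
    // Reading through the stack over time turns the stone's surface
    // into an evolving timbre.
    { VOsc.ar(Line.kr(~scanBufs.first.bufnum, ~scanBufs.last.bufnum - 0.001, 20), 80, 0, 0.2) ! 2 }.play;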


Connecting Both – Open Sound Control (OSC)

I have previously worked with OSC to connect Processing to SuperCollider, and I will continue working with it. I would also like to find a way to design a more complex soundscape in SuperCollider, either by feeding in the designed wavetable synthesis or by individually manipulating how each sound is played (pitch, amp, freq, etc.).
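On the SuperCollider side, the receiving end could be as simple as an OSCdef mapping incoming values onto a running synth. The address '/stone/depth' and the single 0..1 value are assumptions about what Processing would send, not a fixed protocol:

    // A running voice whose wavetable position responds to OSC.
    (
    SynthDef(\stoneVoice, { |bufPos = 0, freq = 80, amp = 0.2|
        Out.ar(0, VOsc.ar(bufPos.lag(0.1), freq, 0, amp) ! 2);
    }).add;
    )
    ~voice = Synth(\stoneVoice, [\bufPos, ~scanBufs.first.bufnum]);

    // Hypothetical address: Processing sends one 0..1 value
    // describing where on the stone the camera is reading.
    OSCdef(\stoneDepth, { |msg|
        var val = msg[1].clip(0, 1);
        ~voice.set(\bufPos, ~scanBufs.first.bufnum + (val * (~scanBufs.size - 1.001)));
    }, '/stone/depth');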

If the wavetable does not work out:

I would then use data values instead of visualised lines. Referencing Aire by Interspecifics, I would assign individual data lines/ points in a 3D space to specific synthesizers programmed in SuperCollider. Depending on the 3D forms, there would be varying depth values along the data line, so different sounds would be produced and played in consonance/ dissonance, as in the sketch below.
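A minimal sketch of that fallback, with one simple voice per sampled depth point (the ~depths array and the frequency mapping are illustrative assumptions; Aire's actual synths are far richer):

    // One simple voice per sampled depth point; deeper points sound lower.
    (
    SynthDef(\depthPoint, { |freq = 200, amp = 0.05|
        Out.ar(0, SinOsc.ar(freq) ! 2 * amp);
    }).add;
    )
    // ~depths: hypothetical array of 0..1 depth values along one scan line.
    ~points = ~depths.collect { |d|
        Synth(\depthPoint, [\freq, d.linexp(0, 1, 800, 80), \amp, 0.03]);
    };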

I would still have to study the soundscape design of Aire on their GitHub to learn more about sound design and layering: https://github.com/interspecifics/Aire

SURPRISE

The concept of surprise is explored in my project through the dissonance between our perceptive modalities: we tend to treat them as mutually exclusive even though we know they are not. By connecting the links between what we see, hear and touch, I am trying to surface the unnoticed relationships between our senses and to play with how our minds cognitively make sense of the world.

With sound being an intangible material, we don’t usually connect what we physically see to what we hear. When an instrument is played (e.g. a key is pressed, a string is struck), the act serves as a trigger, but it is not the only component of the sound we hear. As humans, we can’t fully understand how a sound is visually and technically synthesised beyond what we hear. With technology, we are able to work from the other end of the spectrum of generating sound, starting from the visual/ the components of sound.

Working with textures, I want to associate tactile quality with sound to enhance the experience of a material and its materiality. Stones are intriguing in themselves: even though they seem fixed, their forms are intrinsically beautiful. Connecting their visual textures to sound seems like a way to explore their spirit.

TIMELINE

Week 9-10: Work on Sound/ wavetable synthesis + Obtaining visual data
Week 11-12: Design soundscape and interaction with stones
Week 13: Set-up

One Reply to “Generative Artwork – Stone”

  1. Excellent post, thank you!

    I briefly watched Eli Fieldsteel’s video on granular synthesis (GS) in SC in his great tutorial series, and it seems to my untrained (mainly intuitive) mind that you actually can use GS for a variety of complex sonic texture generations by reading even relatively simplified visual data from the Kinect.

    This would require some experimentation with GS tools in SC, and of course a consultation with Philippe would be helpful.

    Please outline your visual data input and overall method, ask him first what type/ technique of sound synthesis he would recommend, and then ask for his advice on the usefulness of GS in particular.
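    (For orientation, a minimal GrainBuf sketch of the kind of granular patch this suggests; ~src as the source buffer and the roughness-to-density mapping are assumptions, not a recommendation from the tutorial:)

        // Minimal granular sketch: assumes ~src is a Buffer holding any
        // source recording, and a 0..1 "roughness" value derived from
        // the Kinect data controls grain density.
        (
        SynthDef(\stoneGrains, { |buf, rough = 0.5, amp = 0.2|
            var trig = Dust.kr(rough.linlin(0, 1, 5, 60));
            var pos  = LFNoise1.kr(0.2).range(0, 1);
            Out.ar(0, GrainBuf.ar(2, trig, 0.1, buf, 1, pos, mul: amp));
        }).add;
        )
        ~grains = Synth(\stoneGrains, [\buf, ~src]);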
