SOUND OF STONES
Project: Creating an instrumental system that uses visual data of textures (stones) to generate sound in real time
For the project, there are two parts:
1. Converting three-dimensional forms into visual data
2. Connecting the data with audio for real-time generation
Over week 5, I experimented with the depth image and raw depth data from the Kinect in Processing to see what kind of visual data (colour/brightness, depth, etc.) can be obtained from a camera. The Kinect has three key components: an infrared projector, an infrared camera, and an RGB camera, from which the captured three-dimensional visual data can be sent to Processing.
Using ‘depth image’
With the ‘depth image’ sketch, the x (horizontal position) and y (vertical position) values of each pixel from the Kinect are mapped onto the recorded image. The sketch loops over each pixel (x, y), looks up its index in the depth image, and obtains its colour/brightness (b, a single value between 0 and 255). A rectangle of fixed size is then placed at a z value (depth in 3D space) derived from b, so that dark pixels appear close and bright pixels appear farther away.
The purpose of this sketch is to see what data values can be obtained from the Kinect and whether I can use them as input for audio generation. From this sketch, the depth data obtainable from a Kinect are the x, y, z, and b values, which I think can be used as input data to map the textures of three-dimensional forms.
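To make the mapping concrete, here is a small Python re-expression of the depth-image sketch's logic (the original is a Processing sketch; the function names, the 2×2 example image, and the near/far z range here are my own assumptions for illustration):

```python
# Illustrative Python version of the depth-image sketch logic.
# Assumes a grayscale depth image stored as a flat list of
# brightness values (0-255), indexed row by row.

def map_range(value, in_min, in_max, out_min, out_max):
    """Equivalent of Processing's map(): linearly rescale value."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def pixel_to_xyzb(x, y, pixels, width, near=50.0, far=-50.0):
    """Return the (x, y, z, b) tuple for one pixel.
    Darker pixels (low b) map to z values nearer the viewer."""
    b = pixels[x + y * width]               # brightness, 0..255
    z = map_range(b, 0, 255, near, far)     # dark -> close, bright -> far
    return (x, y, z, b)

# Example: a 2x2 "image"; the darkest pixel lands at the nearest z.
pixels = [0, 128, 255, 64]
print(pixel_to_xyzb(0, 0, pixels, width=2))  # -> (0, 0, 50.0, 0)
```

The same four values (x, y, z, b) per pixel are what would later be available as input data for the sound side.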
Using ‘Raw depth data’ to map forms on a point cloud
For scanning three-dimensional forms, raw depth data from the Kinect (Kinect v1: 0 to 2048; Kinect v2: 0 to 4500) might be more useful for generating information about textural surfaces in 3D space. A point cloud can be used to map all the points the Kinect is obtaining in 3D space (from the infrared projector).
By giving each pixel an offset value (= x + y * kinect.width), we get a raw depth value d (= depth[offset]). Each PVector point (x, y, d) in the point cloud can then be pushed into the three-dimensional plane to map the object the Kinect is seeing. For smaller objects (I have yet to try this out), minimum and maximum depth thresholds can be used to look at only a particular set of pixels, isolating an object that is close to the Kinect.
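The offset-and-threshold step above can be sketched in Python as follows (the threshold window of 300–800 and the 2×2 example frame are invented values, not ones from the tutorial):

```python
# Sketch of the raw-depth point-cloud step. Assumes `depth` is a flat
# list of raw depth values (Kinect v1 range: 0..2048), indexed by
# offset = x + y * width.

def extract_points(depth, width, height, min_d=300, max_d=800):
    """Collect (x, y, d) points whose raw depth falls inside the
    threshold window, isolating an object close to the camera."""
    points = []
    for y in range(height):
        for x in range(width):
            d = depth[x + y * width]   # raw depth value at this pixel
            if min_d < d < max_d:      # keep only the thresholded slice
                points.append((x, y, d))
    return points

# Example: a 2x2 depth "frame"; only the 500 value survives the window.
frame = [100, 500, 2000, 900]
print(extract_points(frame, 2, 2))  # -> [(1, 0, 500)]
```

In the actual Processing sketch each surviving (x, y, d) would become a PVector drawn into 3D space; here the list of tuples stands in for that.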
For sound generation
Initially, I looked into a virtual modular synthesizer program (VCV Rack) to generate the sounds and to see whether they could be coded in real time. However, the program exists only as a modular synthesizer, a good one for developing complex audio, but not one that can be driven by code in real time.
I am interested in sending real-time data from a Kinect/camera/sensor into audio-generating software. Referencing Aire CDMX (2016) by Interspecifics, I found that I could use Python (data access) and SuperCollider (virtual synthesizers) to connect the data flow to sounds that I design.
Aire CDMX (2016) by Interspecifics
Aire is a generative sound piece that uses data picked up in real time by environmental sensors in Mexico City. Using software written in Python to access the real-time pollutant data, the data flow animates an assembly of virtual synthesizers programmed in SuperCollider. “In the piece, each one of the pollutants has its own sound identity and the fluctuation of the information modulates all its characteristics.”
From this work, I can study their code as a reference for mapping data to designed sounds in SuperCollider. As it is my first time working with Python, I might need some help writing code that works specifically for my project.
Source Code: https://github.com/interspecifics/Aire
For Week 7 Generative Sketch
For the next two weeks, I will be working on connecting the raw data values from the Kinect to virtual synthesizers that I will develop in SuperCollider. My aim is to see what sounds can be generated when a three-dimensional object is captured with a Kinect.
Direction for Week 7:
1. Connecting Kinect data to Python
Some references: pylibfreenect2 on Linux (https://stackoverflow.com/questions/41241236/vectorizing-the-kinect-real-world-coordinate-processing-algorithm-for-speed)
2. Experiment with SuperCollider to create virtual synthesizers
3. Connect Python and SuperCollider to generate sound using data flow
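For step 3, SuperCollider listens for OSC messages over UDP, so one possible bridge is to pack depth values into OSC messages from Python. Here is a dependency-free sketch of that packing; the address "/stone/depth", the example value, and port 57120 (SuperCollider's default language port) are assumptions for illustration, not part of the Aire code:

```python
# Minimal OSC message construction using only the standard library,
# following the OSC 1.0 binary layout: padded address string, padded
# type-tag string (",f" = one float32), then the big-endian argument.
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Build an OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode()) +
            osc_pad(b",f") +
            struct.pack(">f", value))

# To actually send one depth value to SuperCollider:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(osc_message("/stone/depth", 512.0), ("127.0.0.1", 57120))
msg = osc_message("/stone/depth", 512.0)
print(len(msg))  # -> 24
```

On the SuperCollider side, an OSCdef responding to the same address could then map the incoming value onto a synth parameter.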
For Final Project Generative Study
For the final project, my goal is to map the visual textures of materials, in particular stones, to generate an auditory perception of the material. Rather than using raw depth data, I would like to obtain data more specific to three-dimensional forms.
Ideation – Using Topography data
Topography, in geography, is the study of the arrangement of the natural and artificial physical features of an area. Looking at topographic sketches, I wonder if the approach can be scaled down to map three-dimensional forms by the circumferences formed by intersecting horizontal planes. I would have to research 3D scanning software and the types of data that can be obtained. How I imagine it: each layer of visual shapes would be converted into a corresponding audio response (perhaps in terms of how the sound wave is shaped/developed).
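As a toy version of this idea, a small height field can be sliced into horizontal levels and the "area" of each contour layer measured; each layer's area could then drive one sound parameter. The height grid and the level spacing below are invented for illustration:

```python
# Slice a tiny height field into horizontal "contour" layers and count
# the cells at or above each level, a crude stand-in for the area
# enclosed by the contour line at that elevation.

def contour_layers(heights, levels):
    """Return, for each level, how many grid cells sit at or above it."""
    return [sum(1 for row in heights for h in row if h >= level)
            for level in levels]

# A hypothetical 3x3 "stone" height map.
stone = [[0, 1, 1],
         [1, 3, 2],
         [0, 2, 1]]
print(contour_layers(stone, levels=[1, 2, 3]))  # -> [7, 3, 1]
```

The shrinking areas as the levels rise mirror how topographic contour rings narrow toward a peak, which is the shape I would want the sound to trace.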
AR Sandbox by UC Davis
The AR Sandbox is an interactive tool combined with 3D visualisation applications to generate, in real time, an elevation colour map, topographic contour lines, and simulated water as the sand is reshaped. I think the 3D software used to track the changes in form could be applied to my project, where the contour lines generated by the stones serve as data or input for sound. I will research this further after completing the experimentation for the generative study.
I would consider using the sensors that were suggested (RGB + Clear and IR (facture)) for capturing data. I would first work with the Kinect, but if the data generated is insufficient or not specific enough, I would consider other options. I would also have to think about where to position the Kinect and use the threshold from the Kinect raw depth data tutorial to isolate the captured object.
To study texture:
Vija Celmins – photo-realistic nature
Concept development for interaction
Just a possibility:
If the code for capturing real-time visual data is developed enough, I would have participants collect stones from a walk/on their path to create generative pieces specific to a journey.
Or it could simply exist as an instrumental tool for playing with sound textures.
Connecting to Performance and Interaction:
I would like to use the developed system at a larger scale for a performance piece for the semester project in the Performance and Interaction class. It would involve capturing human-sized objects in a space, which would change the threshold of the captured space.
One Reply to “Generative Sketch and Study Updates”
Alina, I am glad to reiterate what I told you in the class:
Great work on your exploratory generative study, and thank you for an exemplar process documentation work! (…if only the links were active, ha ha!). Keep it up!
Your complete strategy seems fine, and I agree that you should try to explore Kinect scanning and exploit it to the max. Although its range and precision are not intended for small objects relatively close by, maybe you can calibrate the whole setup so that it works well even with stones, for example.
This would entail lighting conditions + position for the Kinect (probably fixed) + the “stage” for the object to be sonified + the distance + shifting/tuning the input data in code.
There is a chance that this setup works well for stones, in which case you can concentrate on refining the physical setup and the code for more robust sonification and overall generative experience instead of re-focusing the generative system around RGB+L and IR sensors.
I like the idea of inviting visitors to pick up the stones on their way from A to B, to be scannified in your work. In that regard, although it is not directly related to your work, look up Hiroyuki Masuyama’s 01.01.2001-31.12.2001 (2002): https://youtu.be/-8e7pSPYZVg.
For sonification, I don’t believe you must use Python. You can go from Kinect via Processing to SuperCollider using OSC.
Here are a few quick links:
You should expand this with your own research.
If you encounter some difficulties there, please let me know, and I will ask Philippe Kocher (who is our next guest speaker and an expert on SC) to try and help you out.
I hope this helps.