FYP 20/21 Week 12 Updates

LIGHT TEXTURES CLASSIFICATION 

Framework for perceptual experience and understanding of light

LIGHT
– Atmosphere
– Luminosity framework – Spatial structure, Appearance, Source

  • Further classify the descriptions into ‘tactile’ or ‘textural’ qualities
    > The descriptions for the ‘atmosphere’ and ‘appearance’ of light contain the most words that can be translated into tactile qualities
  • Can induce sensory feedback – haptic qualities

LIGHT SOURCE

  • I would have to determine what kind of light source I would like to work with – natural vs artificial

INTERACTIVE SPACE
– Transform the way light is experienced in a space -> induce/ trigger a sense of touch/ haptic feedback

Using vibration/ proprioceptive muscle response:

Some sketches


LIGHT IN SPACE – ARCHITECTURE

Explore the difference in visual textures due to the interaction of light and materials

Church of Light (1989), Tadao Ando
Thermal bath, Vals, Switzerland (1996), Peter Zumthor


ARTISTS 

http://www.phonomena.net/hf/

http://www.phonomena.net/ilinx/

Doug Wheeler

SPACE AND PLACE – Yi-Fu Tuan

“What sensory organs and experiences enable human beings to have their strong feeling for space and for spatial qualities? Kinaesthesia, sight and touch.”

  • Movements such as the simple ability to kick one’s legs and stretch one’s arms are basic to the awareness of space. Space is experienced directly as having room in which to move. Moreover, by shifting from one place to another, a person acquires a sense of direction. Forward, backward and sideways are experientially differentiated, that is, known subconsciously in the act of motion.
  • Space assumes a rough coordinate frame centred on the mobile and purposive self. Human eyes, which have bifocal overlap and stereoscopic capacity, provide people with a vivid space in three dimensions.
  • Experience, however, is necessary to perceive the world as made up of stable three-dimensional objects arranged in space rather than as shifting patterns and colours.
  • Touching and manipulating things with the hand yields a world of objects – objects that retain their constancy of shape and size. Reaching for things and playing with them disclose their separateness and relative spacing.
  • Purposive movement and perception, both visual and haptic, give human beings their familiar world of disparate objects in space. Space is given by the ability to move. Movements are often directed toward, or repulsed by, objects and places. Hence, space can be variously experienced as the relative location of objects or places, as the distances and expanses that separate or link places, and – more abstractly – as the area defined by a network of places.

CHAPTER 4 – Body, Personal Relations and Spatial Values

Fundamental principles of spatial organization:
1. The posture and structure of the human body
2. The relations (whether close or distant) between human beings

DIRECTION FOR FINAL FORM – Interactive Space
– Other interesting aspects to consider:
1. The physicality of space vs. the intangibility of light – Interactions

2. Movement of bodies and bodily actions – walking vs. reaching out

3. Tactile sensations/ vibration – real or artificial?

4. The dimension of time – Can spatial awareness be translated by the relative amount of time spent in a space?
> What can be an indication of time?

5. Temperature – Can it give a new sensory experience?

6. Time/ Site-specificity of light – sunlight

Over the next 2 weeks:

‘Light textures’ Classification framework
Sensory experiments/ study + results > Creating light atmospheres with different qualities
Prototype of finger tactile sensory substitution (TSS) device to sense ambient light (space)

Deliverables for final presentation:

  • Light Textures classification framework
  • 360 recordings of the sensory experiments/ immersive atmospheres (to be used on my web design site for FYP documentation) + touch haptic feedback
  • One good ‘live’ sensory experiment to be experienced in the ‘perceptual box’
  • Prototype of the finger TSS device for spatial/ light sensing

Reading Assignment 3 – “Framework for Understanding Generative Art”

Dorin, Alan, Jonathan McCabe, Jon McCormack, Gordon Monro, and Mitchell Whitelaw. “A framework for understanding generative art.” Digital Creativity 23, no. 3-4 (2012): 239-259.

Aim of the Paper 

In the process of analysing and categorising generative artworks, the critical structures of traditional art do not seem to be applicable to “process based works”. The authors of the paper devised a new framework to deconstruct and classify generative systems by their components and their characteristics. By breaking down the generative processes into defining components – ‘entities (initialisation, termination)’, ‘processes’, ‘environmental interaction’ and ‘sensory outcomes’ – we are able to critically characterise and compare generative artworks whose underlying generative processes, rather than their outcomes, hold points of similarity.

The paper first looked at previous attempts at and approaches to classifying generative art. By highlighting the ‘process taxonomies’ of different disciplines which adopt the “perspective of processes”, the authors used a ‘reductive approach’ to direct their own framework for the field of generative art in particular. Generative perspectives and paradigms began to emerge in various seemingly unrelated disciplines, such as biology, kinetic and time-based art, and computer science, which adopt algorithmic processes or parametric strategies to generate actions or outcomes. Previous studies explored specific criteria of emerging generative systems by “employing a hierarchy, … simultaneously facilitate high-level and low-level descriptions, thereby allowing for recognition of abstract similarities and differentiation between a variety of specific patterns” (p. 6, para. 4). In developing the critical framework for generative art, the authors took into consideration the “natural ontology” of the work, selecting a level of description appropriate to its nature. Adopting “natural language descriptions and definitions”, the framework aims to serve as a way to systematically organise and describe a range of creative works based on their generativity.

Characteristics of ‘Generative Art System’ 

Generative art systems can be broken down into four (seemingly) chronological components – Entities, Processes, Environmental Interaction and Sensory Outcomes. As generative art is not characterised by the mediums of its outcomes, the structures of comparison lie in the approach and construction of the system.

All generative systems contain independent actors, ‘Entities’, whose behaviour is mostly dependent on the mechanism of change, ‘Processes’, designed by the artist. The behaviours of the entities, in digital or physical forms, may be autonomous to an extent decided by the artist and determined by their own properties. For example, Sandbox (2009) by artist couple Erwin Driessens and Maria Verstappen is a diorama of a terrain of sand that is continuously manipulated by a software system controlling the wind. The paper highlights how each grain of sand can be considered a primary entity in this generative system, and how the behaviour of the system as a whole depends on the physical properties of the material itself. The choice of entity has an effect on the system, as in this particular work where the properties of sand (position, velocity, mass and friction) shape the behaviour of the system. I think the nature of the chosen entities of a system is an important factor, especially when it comes to generative artworks that use physical materials.

The entities and the algorithms of change acting upon them also exist within a “wider environment from which they may draw information or input upon which to act” (p. 10, para. 3). The information flow between the generative processes and their operating environment can be classified as ‘Environmental Interaction’, where incoming information from external factors (human interaction or artist manipulation) can set or change parameters during execution, leading to different sets of outcomes. These interactions can be characterised by “their frequency (how often they occur), range (the range of possible interactions or amount of information conveyed) and significance (the impact of the information acquired from the interaction of the generative system)”. The framework also classifies interactions as “higher-order” when they involve the artist or designer in the work, who can manipulate the results of the system through the intermediate generative process or by adjusting the parameters or the system itself in real time, “based on ongoing observation and evaluation of its outputs”. Higher-order interactions are made based on feedback from the generated results, a loop that holds similarities to machine learning techniques or self-informing systems. This process results in changes to the system’s entities, interactions and outcomes and can be characterised as “filtering”. Higher-order interaction is prevalent in live coding, supported by audio-generation software such as SuperCollider and Sonic Pi, where performance/ outcome tweaking is the main creative input.

The last component of generative art systems is the ‘Sensory outcomes’, which can be evaluated based on “their relationship to perception, process and entities.” The generated outcomes can be perceived sensorially or interpreted cognitively, as they are produced in different static or time-based forms (visual, sonic, musical, literary, sculptural, etc.). When the outcomes seem unclassifiable, they can be made sense of through a process of mapping, where the artist decides how the entities and processes of the system are transformed into “perceptible outcomes”. “A natural mapping is one where the structure of entities, process and outcome are closely aligned.”

Case studies of generative artworks

The Great Learning, Paragraph 7 – Cornelius Cardew (1971)

Paragraph 7 is a self-organising choral work performed using a written “score” of instructions. The “agent-based, distributed model of self-organisation” produces musically varying outcomes within the same recognisable system. Although it depends on human entities and leaves room for interpretation and error in the instructions, similarities with Reynolds’ flocking system can be observed.

Tree Drawings – Tim Knowles (2005)

Using natural phenomena and materials of ‘nature’, the work harnesses the movement of wind-blown branches to create drawings on canvas. Using the found process of natural wind as the generator of movement for the entities highlights how the physical properties of the chosen materials and of the environment shape the outcome. “The resilience of the timber, the weight and other physical properties of the branch have significant effect on the drawings produced. Different species of tree produce visually discernible drawings.”

The element of surprise is built into the work: the system is highly autonomous, and the artist’s involvement is limited to the choice of location, trees and duration. It brings to mind the concept of “agency” in art: is agency still relevant in producing outcomes in generative art systems? Or is there a shift in the role of the artist when it comes to generative art?

 

Tree Drawings, Tim Knowles

The Framework on my Generative Artwork ‘SOUNDS OF STONE’ 

Point-cloud visualisation system for “Stone”

Visual system:

Work Details
‘Sounds of Stone’ (2020) – Generative visual and audio system

Entities
Visual: Stones, Points
Audio: Stones, Data-points, Virtual synthesizers

Initialisation// Termination:
Initialisation and termination determined by human interaction (by placing and removing a stone within the boundary of the system)

Processes
Visual and sound states change through placement and movement of stones
Each ‘stone’ entity performs a sound, where each sound corresponds to its visual texture (Artist-defined process)
Combination of outcomes depending on the number of entities in the system
“Live”: the artist, performer or audience can manipulate the outcome after listening to/ observing the generated sound and visuals.

Environmental Interaction
Room acoustics
Human interaction, behaviours of the participants
Lighting

Sensory Outcomes
Real-time/ live generation of sound and visuals
Audience-defined mapping

As the work is still in progress, I cannot evaluate its sensory outcomes at this point. Based on the classic features of generative systems used to evaluate Paragraph 7 (a performative instructional piece), such as “emergent phenomena, self-organisation, attractor states and stochastic variation in their performances”, I predict that the sound compositions of ‘Sounds of Stone’ will go from being self-organised to chaotic as the participants spend more time within the system. Existing as a generative tool or instrumental system, I predict that there will be time-based familiarity with the audio generation through audience interaction. With ‘higher order’ interactions, the audience will intuitively be able to generate ‘musical’ outcomes, converting noise into perceptible rhythms and combinations of sounds.

FYP 20/21 Week 10 Updates and direction for Week 14

Week 10:

SENSORY EXPERIMENTS – Perceptual box

I built the perceptual box 🙂 I can now conduct my visual-tactile experiments.

Some Visual Experiments:

I experimented with two qualities of light that can be experienced visually in my perceptual box.

Some tests:

 

A. RHYTHM

Playing with the ‘blinking’ LED to create rhythm with light
> Varying the interval delay – Create different rhythms

B. MOTION
Using a servo motor to create a rotating wheel to vary the light source.

‘Sensory Modality Transition’ Experiments – Visual-Tactile  (Week 12)

Aim: Associate tactile qualities with Visual Qualities of light

CONSONANCE and DISSONANCE

  • Combine textures with the perceived/ disconnected tactile quality of light
    eg. wet texture to some form of light
  • Using the experimental framework: three perceptual mode conditions
    Visual expectation (V), Touch alone (T) and Touch following a visual expectation (VT)
  • To evaluate:
    Perception of disconfirmation – difference between VT and V
    Expectation effect – difference between VT and T
    Perceptual incongruence – difference between V and T

 

Light Textures Classification (Week 11-12)

Referencing ‘Sound Textures Classification’ http://alumni.media.mit.edu/~nsa/SoundTextures/

Human Perception of Sound Textures
– Perception experiments which show that people can compare sound textures, although they lack the vocabulary to express formally their perception

  • the subjects share some groupings of sound textures
  • 2 major ways used to discriminate and group textures: The characteristics of the sound (periodic, random, smooth) and the assumed source of the sound (voices, water, machine)

Texture or not?

Perceived Characteristics and Parameters:
Come up with possible characteristics and qualifiers of sound textures, and properties that determine if a sound can be called a texture.

CAN I DO THE SAME FOR HUMAN PERCEPTION FOR LIGHT TEXTURES?
Classify light textures
> Perceived Characteristics and Parameters – Quantifiers and Qualifiers

ISEA DATA gloves workshop – Hugo Escalpalo 

Input gloves
– Measures specific finger anatomic and gestural positions as input
– Haptic feedback: not as output. The design of the glove has an in-built mechanism to incorporate proprioceptive feedback using slider sensors and springs. When pulled forward, the gloves use resistance to simulate the feeling of grabbing an object.
– Intended to include tracking components to track position in VR

https://www.youtube.com/watch?v=MP9aJ5ThwK0

https://www.youtube.com/watch?v=Be3NTYqN0vA

Week 12-13:

  • Come up with a lo-fi prototype for gloves that does not require tech, using springs to trigger a sense of haptic touch based on gestural and action-based movements (articulation)
  • Weight-based simulation – using liquid and hand position to simulate a sense of weight
  • Output gloves – vibration (fingers and palms), pressure (pull-back action using servo motors), weight, proprioceptive position or feedback

Direction for Week 14:

  1. LIGHT TEXTURES CLASSIFICATION – create framework for perceptual experience and understanding
  2. GLOVE (low-tech/ haptic prototype) – design to impart a sense of touch of solid objects
  3. SENSORY DISSOCIATION/ ASSOCIATION EXPERIMENTS with light textures

FYP 20/21 Week 8 Updates

SENSORY EXPERIMENTS

Perceptual box

I am still in the midst of designing my “perceptual box” and fine tuning the details of my sensory experiments.

Referencing:

Yanagisawa, H., & Takatsuji, K. (2015). Effects of visual expectation on perceived tactile perception: An evaluation method of surface texture with expectation effect. International Journal of Design, 9(1).

Surface Texture

  • People perceive and/or predict a surface’s characteristics corresponding to each physical attribute through sensory information, a process that we call perceived features (e.g. surface roughness perceived through touch).
  • Using a combination of perceived characteristics of surface texture, people perceive a tactile quality, such as “nice to touch”.

Sensory modalities

  • During such sensory modality transitions, we expect or predict the perceptual experience that we might have through a subsequent sensory modality by first using a prior modality, such as expecting a particular tactile perception by first looking at a surface texture.
  • On the other hand, prior expectation also affects posterior perceptual experiences – a phenomenon known as the expectation effect.
  • 3 aspects of experiment:
  • Participants were asked to evaluate the tactile quality of a target object under three perceptual mode conditions:
    Visual expectation (V), Touch alone (T) and Touch following a visual expectation (VT)
  • To evaluate:
    • Perception of disconfirmation – difference between VT and V
    • Expectation effect – difference between VT and T
    • Perceptual incongruence – difference between V and T

Measurement:
Tactile feeling was evaluated using four opposite adjective pairs (“nice to touch-unpleasant to touch”, “smooth-rough”, “hard-soft” and “sticky-slippery”). Between each adjective pair was a scale comprising five ranges. Participants responded to each adjective scale by marking their rating on a questionnaire sheet, which employed a semantic differential (SD) scale.
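To make the three measures concrete, here is a minimal Processing/Java sketch of how they could be tabulated from mean SD-scale ratings for one adjective pair. The mean values are invented sample numbers for illustration, not data from the study.

```java
// Hypothetical mean ratings (1-5 SD scale) for one adjective pair, e.g. "smooth-rough".
// The numbers are placeholders, not results from Yanagisawa & Takatsuji (2015).
float meanV  = 4.2;  // Visual expectation (V)
float meanT  = 2.8;  // Touch alone (T)
float meanVT = 3.1;  // Touch following a visual expectation (VT)

void setup() {
  float disconfirmation   = meanVT - meanV;  // perception of disconfirmation
  float expectationEffect = meanVT - meanT;  // expectation effect
  float incongruence      = meanV - meanT;   // perceptual incongruence
  println("Disconfirmation:    " + disconfirmation);
  println("Expectation effect: " + expectationEffect);
  println("Incongruence:       " + incongruence);
}
```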

SENSORY EXPERIMENTS 

PART 1
Things I want to explore when I finish constructing the perceptual box. I went back to my hypothesis for the project, and I would like to centre my experiments on connecting the visual properties of light to the visual qualities of material, to see if a tactile quality can be associated.

  1. Playing with Rhythm
    Light quality: Rhythm?
    Visual quality: Pattern
    Trying to connect rhythmic/ flashing lights to a tactile effect

  2. Ryoji Ikeda – Test Pattern (2013)
    https://www.youtube.com/watch?v=XwjlYpJCBgk
  3. Playing with Colour
    Light quality: Frequency?
    Visual quality: Colour
    Connecting a texture (frequency) to colour

    Using touch sensors
  4. Playing with Intensity/ Diffusion
    Light quality: Intensity/ Diffusion
    Visual quality: Opacity/ Translucency
    Trying to connect force with sharpness of light

 

 

Slide potentiometer

PART 2

MATERIAL PERCEPTION – Can we perceive/ infer material based on visual results of reflected light?

  • Playing around with a material’s interaction with light, to see if light can be given visual forms/ textures
  • Recreate the textures/ find tactile similarities with physical materials

Build a hologram in my perceptual box:

https://mashable.com/2016/10/24/holovect-3d-projections-star-wars/

 

Interesting Light phenomenon/ visual textures I noticed:

Ryoji Ikeda – Spectra (2014)

 

Reminds me of:

Interpretive flare display of unthought thoughts (2020) neugerriemschneider, Berlin Photo: Jens Ziehe

https://olafureliasson.net/archive/artwork/WEK110941/interpretive-flare-display-of-unthought-thoughts

 

‘Touch’ in Art – What would trigger touch?

Last week, it was mentioned that I could look into works that trigger touch, but I missed out on the example that was given.

Vocab? – To find consensus between descriptions of how light is subjectively perceived

ISEA 2020 – Why Sentience?

DATA GLOVES Workshop 17-18 October

The “Data Gloves” were developed for interacting with the VR environment “Human After”, a piece by Anni Garza Lau. Given the high cost of a set of commercial gloves, we realised that we could manufacture a pair of gloves able to acquire very detailed information about the position of the fingers, for a fraction of the price.

https://www.radiancevr.co/artists/anni-garza-lau/

I am not sure how useful the gloves would be for simulating haptic qualities, rather than simply obtaining data, but we will see how it goes.

Creative Industry Report – House of Light by James Turrell

The House of Light, by James Turrell, is located in a hillside forest in Tokamachi, Niigata, Japan. House of Light exists as an experimental work of art that also serves as a guesthouse, where visitors can experience the light with their entire bodies during their stay. Using light as the main medium, which is key in his perceptual works, Turrell combined the intimate light of a traditional Japanese house with his works of light. Both time-specific and site-specific, “House of Light” serves as an experience of living in a Turrell work overnight.

“Outside In” by James Turrell
Image from: https://www.gadventures.com/blog/japan-house-of-light-james-turrell/

For Turrell, who has long been searching for and exploring the “perception of light”, House of Light is his attempt to contrast and merge day and night, East and West, tradition and modernity. One of the two works that can be experienced in the House of Light is “Outside In”, a tatami room whose ceiling can be exposed to the sky using a sliding panel on the roof. As with Turrell’s Skyspaces, which are constructed chambers with an aperture exposing the sky, “Outside In” is intended for viewers to be able to “live in the light”.

“Light Bath” by James Turrell

Another work that can be experienced in House of Light is “Light Bath”, which explores light in an indoor space and is experienced by those who stay overnight. Using optical fibres that run through the bathroom, the viewer, while immersed in the tub, can experience the interplay of light and shadow against the water.

“House of Light” by James Turrell

The House of Light exists in itself as a work of art and also as an accommodation facility where you can interact with the art spaces. Turrell established the terms of the experience: he wanted the guests to interact and discuss their thoughts overnight as they share the world of light and shadow. As a fully functioning guesthouse, it charges 4,000 yen per person, plus a facility charge of 20,000 yen for the building, which is divided evenly between guests.

I think it is intriguing that a work of art can function as a business establishment that sells the time-based experience of art. It alters the conventional way of interacting with an artwork, going beyond a short-term immersion in space. Relating to the concept of ‘experience’, the article “Welcome to the Experience economy” explores the emerging fourth economic offering of experience. As consumers unquestionably desire experiences, more and more businesses are responding by explicitly designing and promoting them. 

House of Light serves as an example of this direct relationship, where guests pay for the ephemeral experience. Considering a more philosophical aspect of the experiential perspective, which involves sensation, perception and cognition, how can I apply the novelty of sensory and perceptual experiences to the real world? The idea of using ‘place’ and giving it an experiential perspective seems to have a viable market, but it should be created for a targeted group that understands the work and would like to experience the art as it is intended. 

References

Pine, B. J., & Gilmore, J. H. (1998). Welcome to the experience economy. Harvard Business Review, 76, 97-105.

Tuan, Y. F. (1977). Space and place: The perspective of experience. U of Minnesota Press.

 

Generative Artwork – Stone

CONCEPT 

What do materials sound like? While sound is subjectively perceived, there is a collective or general sentiment on the descriptive qualities of a sound/ soundscape. We occasionally use tactile qualities (e.g. rough, soft) to describe sounds, assigning textures to what we hear. In music, texture is the way the tempo, melodic and harmonic materials are combined in a composition, thus determining the overall quality of the sound in a piece. However, most sounds that we hear are more complex than simple harmonies, and there is a more complicated process behind how we perceive and cognitively recognise textures in sound. My project explores how visual textures of physical materials (stones) can be translated into the auditory, creating an interactive system to draw out the direct link between sound and textures.

CASE STUDY – Classification of Sound Textures

Saint-Arnaud, N. (1995). Classification of sound textures (Doctoral dissertation, Massachusetts Institute of Technology).

http://alumni.media.mit.edu/~nsa/SoundTextures/

“The sound of rain, or crowds, or even copy machines all have a distinct temporal pattern which is best described as a sound texture.” (Saint-Arnaud, 1995). Sounds with such constant long term characteristics are classified as Sound Textures. The thesis investigates the aspects of the human perception and machine processing of sound textures.

“Between 1993 and 1995, I studied sound textures at the MIT Media Lab. First I explored sound textures from a human perception point of view, performing similarity and grouping experiments, and looking at the features used by subjects to compare sound textures.

Second, from a machine point of view, I developed a restricted model of sound textures as two-level phenomena: simple sound elements called atoms form the low level, and the distribution and arrangement of atoms form the high level. After extracting the sound atoms from a texture, my system used a cluster-based probability model to characterize the high level of sound textures. The model is then used to classify and resynthesize textures. Finally, I draw parallels with the perceptual features of sound textures explored earlier, and with visual textures.”

His approach to sound textures from both the perspectives of subjective human perception and technical synthesis and processing of sound textures is especially relevant to my project. The difference is that the thesis explores the perceptual and machine constructs of existing sound textures, while I am trying to generate sounds that could be perceived/ associated with the actual tactile qualities of a material. I could say that my process (associating texture to sound) works in the opposite direction from his (associating sound to texture).

Aiming to extract a sound from the visual textures of stone involves two aspects:
1. Converting three-dimensional forms of the material into visual data that can be translated into audio/ sounds
2. Designing a model that generates sounds that the audience would perceive as, or associate with, the tactile qualities of the specific material.

PROCESS

While his project was conducted on a large scale with an abundance of time and resources (the MIT Media Lab), it was also conducted a while back (1993-1995). With the introduction of many audio-generation and sound visualisation/ analysis tools since then, my project might be feasible in the short span of time that we have (6 more weeks to Week 13).

Visual textures to Sound – WaveTable Synthesis 

I previously explored wavetable synthesis in SuperCollider (morphing from one wavetable to another), but I was not able to visualise a combined three-dimensional model of the periodic waveforms alongside the sound produced (only singular waveforms). To design a model for sound textures, I would need to be able to connect the sound I hear to a 3D wavetable. I will have to experiment more with sound synthesis to decide on the type and range of sounds I would like to generate. Based on the above paper, I will look at perceptual features of sound textures to formulate how I could associate a tactile quality/ mood with what we hear.
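To get a feel for what a "3D wavetable" (a stack of single-cycle frames morphing from one shape to another) could look like, a rough Processing sketch can draw the interpolated frames as offset rows. This is only a visualisation aid under my own assumptions (a sine morphing into a square), not the SuperCollider synthesis itself.

```java
// Rough visualisation of a "3D wavetable": a stack of single-cycle waveforms
// morphing from a sine to a square, drawn as rows offset to fake depth.
int tableSize = 256;  // samples per single-cycle waveform
int frames = 32;      // number of interpolated frames in the stack

void setup() {
  size(600, 400);
  background(255);
  stroke(0, 80);
  noFill();
  for (int f = 0; f < frames; f++) {
    float t = f / float(frames - 1);   // 0 = pure sine, 1 = pure square
    float yOffset = 40 + f * 9;        // shift each frame down to suggest depth
    beginShape();
    for (int i = 0; i < tableSize; i++) {
      float phase = TWO_PI * i / tableSize;
      float sine = sin(phase);
      float square = (sin(phase) >= 0) ? 1 : -1;
      float sample = lerp(sine, square, t);  // interpolate between the two tables
      float x = map(i, 0, tableSize - 1, 40 + f * 3, width - 120 + f * 3);
      vertex(x, yOffset - sample * 30);
    }
    endShape();
  }
  noLoop();
}
```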

Ableton Live 10: Sound Design with Wavetable

Obtaining visual data/ 2D forms from 3D material – 3D scanning and processing 

To be able to generate data in real time based on the stone used and its movement, I would need software to process the data (RGB data/ 3D scanning) captured by the kinect/ camera.

Possible software: 3D Scanner Windows 10 App, OnScale, CocoaKinect

Kinect 3D scanning in OSX with CocoaKinect

http://mskaysartworld.weebly.com/modeling-and-animation-basics.html
https://www.extremetech.com/extreme/236311-researchers-discover-how-to-shape-sound-in-3d

Researchers discover how to shape sound in 3D

I would have to obtain the 2D line vectors/ waves from a 3D scan and use them to compose a 3D wavetable to be played ideally in real-time.

 

Connecting Both – Open Sound Control (OSC)

I have previously worked with OSC to connect Processing to SuperCollider, and I will continue working on it. I would also like to find a way to design a more complex soundscape in SuperCollider, either by inputting the designed wavetable synthesis or by individually manipulating the way each sound is played (pitch, amp, freq, etc.).

If the wavetable does not work out:

I would then use data values instead of visualised lines. Referencing Aire by Interspecifics, I would assign individual data lines/ points in a 3D space to specific designed synthesizers programmed in SuperCollider. Depending on the 3D form, there would be varying depth values along the data line, so different sounds would be produced and played in consonance/ dissonance.

I would still have to study the soundscape design of Aire on their github to learn more about sound design and layering: https://github.com/interspecifics/Aire

SURPRISE

The concept of surprise is explored in my project through the dissonance between our perceptive modalities, which we usually assume to be mutually exclusive even though we know they are not. Connecting the links between what we see, hear and touch, I am trying to touch upon the unnoticed relationships between our senses and play around with how our minds cognitively make sense of the world.

With sound as an intangible material, we don’t usually connect what we physically see to what we hear. When an instrument is played (eg. a key is pressed, a string is struck), the act serves as a trigger but it is not the only component to the sound we hear. Being human, we can’t fully understand how a sound is visually and technically synthesised beyond what we hear. With technology, we are able to work with the other end of the spectrum of generating sound (where we work from the visual/ components of sound).

Working with textures, I want to associate tactile qualities with sound to enhance the experience of a material and its materiality. Stones are intriguing in themselves; even though seemingly fixed, their forms are intrinsically beautiful. Connecting their visual textures to sound seems like a way to explore their spirit.

TIMELINE

Week 9-10: Work on Sound/ wavetable synthesis + Obtaining visual data
Week 11-12: Design soundscape and interaction with stones
Week 13: Set-up

MediaMob – Heliostat Field

Heliostat Field

Using the concept of a heliostat, my flashmob performance explores the idea of tracking the sun’s movement. A heliostat is a device with a mirror that moves to constantly reflect sunlight towards a predetermined target, compensating for the sun’s apparent motion in the sky. The target can be a physical object distant from the heliostat, or a direction in space.

A heliostat field is a solar thermal power plant in which computer-controlled heliostats use sun-tracking data to focus the sun’s rays onto a special receiver at the top of a tower. The concentrated solar energy is used to create steam, which produces electricity.

 

Heliostat SA

Initial Ideation

Building upon my InstructionArt project ‘Choreographic Light’, which uses artificial light to generate movement, I want to explore the movement and reflected position of natural light. My initial idea was to use the behaviours of reflected light (from the sun or a light source) to create ‘light’ drawings onto a surface, where the performers behave as ‘human’ heliostats by moving or flipping a mirrored surface to the scripted direction.

Initial idea

However, for the FlashMob project, I wanted to involve more of the body and bigger movements (ironically, as heliostats can’t move) to create a less static performance. Additionally, due to the restrictions on the number of people and materials, I was not sure if I could generate an obvious enough effect (of light).

Performance – Heliostat Field

Instead of generating light as an output, I decided on generating sound as a “predetermined target” according to movement and position of the human heliostats. This is also in line with my semester project of generating sound using visual textures.

Moving on to a more conceptual understanding of a heliostat, the performers would still be instructed to track the sun’s movement, but in a more abstract manner. Using a website that provides real-time data on the sun’s direction and altitude specific to Singapore (https://www.timeanddate.com/sun/singapore/singapore?month=12), I would create a score for the participants’ movement according to live changes in the sun’s position.

Parameters of Performative Space

 

Sun-path chart, equidistant projection, generated by Sun-path Chart Software Courtesy: University of Oregon
This solargraph exposed over the course of a year shows the Sun’s paths of diurnal motion, as viewed from Budapest in 2014. Courtesy: Elekes Andor

Referencing the sun-path chart, the course of the sun’s movement relative to a location takes the form of convex lines that vary outwards per time frame (a day). As heliostats, the performers travel along the paths of the “sun” relative to the position of a specific “location”, which is the kinect connected to the computer that generates sound.

Using string and tape, I mapped out four curves of 10 points around the kinect. The performers would use the points of the web as parameters to navigate the space.

It’s interesting how the shape of the sun paths holds some similarity to the heliostat field.

Increased the sun map to four rows

Score for movement 

Real time data of sun direction and altitude

The participants will be instructed to stand at different points and directions in front of a kinect that is connected to a sound-generating system that I have been building in SuperCollider and Processing. According to the changes in the real-time data of the sun, the participants have to move following the score I designed.

Focusing on the sun direction data, a value and direction are given. Relative to the participant’s original position (north), he/ she has to rotate to face the arrow of the sun direction. The numerical data of both the sun direction and sun altitude change. Whenever there is a change in values on the screen, the participant moves forward.

Given Instructions

  1. Stand at any point on the map, facing different directions
  2. Use your phone to access website (https://www.timeanddate.com/sun/singapore/singapore?month=10&year=2020)
  3. Rotate your body according to the arrow relative to where you are facing
  4. Move according to the score when the values change.

FlashMob Outcome

The kinect maps the depth positions of the ‘human heliostats’, and plays a sound whenever a certain depth is reached, generating a soundscape.
See more on my process of generating sound using real-life movement: (https://oss.adm.ntu.edu.sg/a170141/week-7-generative-sketch/)

The kinect, depending on its position, only tracks depth and movement within a limited range. Referencing how the sun can only be seen during the day, the kinect does not capture the entire performative area; the system only responds when the performers move within its range. Whenever the performers enter the captured range of the kinect, a sound is played according to the change in depth value.

To prevent the sound from cluttering (too many nodes), I increased the skip value in Processing from 20 to 60, so that the depth value is sampled every 60 pixels instead of every 20.

For sound generation: whenever the b value (brightness/ depth) obtained is more than a certain value (>140), meaning that forms are captured within the range of the kinect, a message is recorded in Processing. I connected Processing with SuperCollider so that I could design sounds using the SynthDef function. With Processing connected to SuperCollider through an external NetAddress, the sound is played whenever the message is sent.
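A condensed sketch of the Processing side described above, assuming Daniel Shiffman's Open Kinect for Processing library (Kinect v1, as in the earlier tutorials) and the oscP5/ netP5 libraries. The OSC address "/heliostat" and SuperCollider's default language port 57120 are my assumptions, not the exact values used in the performance.

```java
// Sample depth pixels from the kinect and send an OSC message to SuperCollider
// whenever the brightness value b exceeds the threshold (140) described above.
import org.openkinect.processing.*;
import oscP5.*;
import netP5.*;

Kinect kinect;
OscP5 osc;
NetAddress supercollider;
int skip = 60;  // sample the depth image every 60 pixels instead of every 20

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
  osc = new OscP5(this, 12000);
  supercollider = new NetAddress("127.0.0.1", 57120);  // assumed sclang address/port
}

void draw() {
  PImage depth = kinect.getDepthImage();
  image(depth, 0, 0);
  depth.loadPixels();
  for (int x = 0; x < depth.width; x += skip) {
    for (int y = 0; y < depth.height; y += skip) {
      float b = brightness(depth.pixels[x + y * depth.width]);
      if (b > 140) {
        // a performer is within the kinect's captured range:
        // tell SuperCollider to play the corresponding synth
        OscMessage msg = new OscMessage("/heliostat");  // hypothetical address
        msg.add(b);  // send the brightness/ depth value as an argument
        osc.send(msg, supercollider);
      }
    }
  }
}
```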

Further developments

In the future, I can consider designing more specific sounds and using more data points from the kinect to generate a greater range of sound. I tried using x and y values from the kinect and mapping them to two different sounds, but SuperCollider could not handle the overwhelming influx of information and the soundscape became very cluttered and laggy.

I could also vary the sound according to the depth, to map the sound to individual performers. I would also like to expand the performative space to a larger public site and increase the number of participants.

FINAL PERFORMANCE

‘Heliostat Field’ (2020)
Interactive Performance by Alina Ling

Performed by Jake Tan and Nasya Goh (Thank you! :-))

More Documentation:

 

Generative Sketch and Study Updates

SOUND OF STONES

Project: Creating an instrumental system that uses visual data of textures (stones) to generate sound in real time

For the project, there are two parts:
1. Converting three-dimensional forms into visual data
2. Connecting the data with audio for real-time generation

Generative Sketch

Over week 5, I experimented with the depth image and raw depth data sent from the kinect to Processing, to see what kind of visual data (colour/ brightness, depth, etc.) can be obtained from a camera. The kinect has three key components – an infrared projector, an infrared camera and an RGB camera – from which the captured three-dimensional visual data can be sent to Processing.

Using ‘depth image’ 

With the ‘depth image’ sketch, the data values x (horizontal position) and y (vertical position) of each pixel from the kinect are mapped onto the recorded image. The sketch loops over each pixel (x, y), looks up its index in the depth image and obtains the colour/ brightness (b, a single value between 0 and 255). A rectangle of fixed size is then placed at a z value (depth in 3D space) derived from the brightness value b, so that dark things appear close and bright things appear farther away.

The purpose of this sketch is to see what data values can be obtained from the kinect and whether I can use the data as input for audio generation. From this sketch, the depth data that can be obtained from a kinect are the x, y, z and b values, which I think can be used as input data to map the textures of three-dimensional forms.
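For reference, a condensed version of the 'depth image' sketch, assuming Shiffman's Open Kinect for Processing library (Kinect v1) from the tutorials linked further below; the sampling step and the z mapping range are my own placeholder values.

```java
// Map each sampled pixel's brightness from the kinect depth image to a z position,
// so dark (close) pixels are drawn towards the viewer and bright ones farther away.
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480, P3D);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  PImage img = kinect.getDepthImage();
  img.loadPixels();
  int skip = 20;  // sample every 20th pixel to keep the frame rate usable
  for (int x = 0; x < img.width; x += skip) {
    for (int y = 0; y < img.height; y += skip) {
      int index = x + y * img.width;
      float b = brightness(img.pixels[index]);  // single value between 0 and 255
      float z = map(b, 0, 255, 300, -300);      // dark = close, bright = far (assumed range)
      pushMatrix();
      translate(x, y, z);
      fill(255);
      noStroke();
      rectMode(CENTER);
      rect(0, 0, skip / 2, skip / 2);           // fixed-size rectangle at the mapped depth
      popMatrix();
    }
  }
}
```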

Using ‘Raw depth data’ to map forms on a point cloud 

For scanning three dimensional forms, raw depth data (kinect1: 0 to 2048 and kinect2: 0 to 4500) from the kinect might be more useful to generate information about textural surfaces in 3D space. The point cloud can be used to map all the points the kinect is obtaining in a 3D space (from the infrared projector).

By giving each pixel an offset value (= x + y * kinect.width), we get a raw depth value d (= depth[offset]). Each PVector point (x, y, d) on the point cloud can be pushed into the three-dimensional plane to map the object the kinect is seeing. For smaller objects (I have yet to try this out), a minimum and maximum depth threshold can be used to look at only a particular set of pixels, to isolate an object that is close to the kinect.
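And a sketch of the point-cloud version, again assuming Shiffman's Open Kinect for Processing (Kinect v1, raw depth 0 to 2048); the threshold values and the depth scaling are placeholders, not settings I have tested.

```java
// Build a point cloud from raw kinect depth data, keeping only points whose raw
// depth falls between two (assumed) thresholds to isolate a close object.
import org.openkinect.processing.*;

Kinect kinect;
int minThreshold = 300;  // assumed lower bound for isolating a close object
int maxThreshold = 800;  // assumed upper bound for isolating a close object

void setup() {
  size(640, 480, P3D);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  translate(width / 2, height / 2, -200);  // centre the cloud in the 3D view
  stroke(255);
  int[] depth = kinect.getRawDepth();
  int skip = 4;  // sample every 4th pixel to keep the frame rate usable
  for (int x = 0; x < kinect.width; x += skip) {
    for (int y = 0; y < kinect.height; y += skip) {
      int offset = x + y * kinect.width;   // the offset value described above
      int d = depth[offset];               // raw depth value (0-2048)
      if (d > minThreshold && d < maxThreshold) {
        PVector v = new PVector(x, y, d);  // the (x, y, d) point from the text
        point(v.x - kinect.width / 2, v.y - kinect.height / 2, -v.z * 0.2);
      }
    }
  }
}
```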

Tutorial sources:
https://www.youtube.com/watch?v=FBmxc4EyVjs
https://www.youtube.com/watch?v=E1eIg54clGo

 

For sound generation 

Initially, I looked into a virtual modular synthesizer program (VCV Rack) to generate the sounds and to see if they could be coded in real time. However, the programme exists only as a modular synthesizer, albeit a good one, for developing complex audio.

I am interested in sending real-time data from the kinect/ camera/ sensor into audio-generating software. Referencing Aire CDMX (2016) by Interspecifics, I found out that I could use Python (data access) and SuperCollider (virtual synthesizers) to connect data flow to sounds that I design.

Aire CDMX (2016) by Interspecifics
http://interspecifics.cc/wocon
https://muac.unam.mx/exposicion/aire?lang=en

Aire is a generative sound piece that uses data picked up in real time by environmental sensors in Mexico City. Using software written in Python to access the real-time pollutant data, the data flow is used to animate an assembly of virtual synthesizers programmed in SuperCollider. “In the piece, each one of the pollutants has its own sound identity and the fluctuation of the information modulates all its characteristics.”

From this work, I can study their code as a reference to find a way to map data to designed sounds on Supercollider. As it is my first time working with Python, I might need some help writing the code that specifically works for my project.

Source Code: https://github.com/interspecifics/Aire

For Week 7 Generative Sketch 

For the next two weeks, I will be working on connecting the raw data values from the kinect to virtual synthesizers that I will develop on SuperCollider. My aim is to see what sounds can be generated when a three-dimensional object is captured using a kinect.

Direction for Week 7:
1. Connecting kinect data to Python
Some references: on Linux with pylibfreenect2 https://stackoverflow.com/questions/41241236/vectorizing-the-kinect-real-world-coordinate-processing-algorithm-for-speed
2. Experiment with SuperCollider to create virtual synthesizers
3. Connect Python and SuperCollider to generate sound using data flow

For Final Project Generative Study

For the final project, my goal is to map the visual textures of materials, in particular stones, to generate an auditory perception of the material. Rather than using raw depth data, I would like to obtain more data specific to three-dimensional forms.

Ideation – Using Topography data
https://arsandbox.ucdavis.edu/#sidr

Topography, in geography, is the study of the arrangement of the natural and artificial physical features of an area. Looking at topographic sketches, I wonder if the idea can be scaled down to map three-dimensional forms by the circumferences formed by intersecting horizontal planes. I would have to research 3D scanning software and the type of data that can be obtained. How I imagine it: each layer of visual shapes would be converted into corresponding audio feedback (maybe in terms of how the sound wave is shaped/ developed). A rough sketch of this layering idea follows below.
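As a very rough, purely speculative sketch of the layering idea: a height field can be sliced into horizontal bands ("contour layers"), and the size of each layer could later be mapped to a sound parameter. The height function below is invented; real data would come from a 3D scan.

```java
// Speculative sketch: slice a height field into horizontal bands and count the
// cells in each band, as a stand-in for contour data that could drive audio.
int gridSize = 64;
int layers = 8;

void setup() {
  float[][] elevation = new float[gridSize][gridSize];
  // fill the grid with a fake "stone-like" bump (placeholder for scan data)
  for (int x = 0; x < gridSize; x++) {
    for (int y = 0; y < gridSize; y++) {
      float dx = x - gridSize / 2.0;
      float dy = y - gridSize / 2.0;
      elevation[x][y] = max(0, 1.0 - sqrt(dx * dx + dy * dy) / (gridSize / 2.0));
    }
  }
  // count how many cells fall into each horizontal layer
  int[] layerCounts = new int[layers];
  for (int x = 0; x < gridSize; x++) {
    for (int y = 0; y < gridSize; y++) {
      int layer = min(layers - 1, int(elevation[x][y] * layers));
      layerCounts[layer]++;
    }
  }
  for (int i = 0; i < layers; i++) {
    // each count could later be mapped to a synth parameter (e.g. amplitude)
    println("Layer " + i + ": " + layerCounts[i] + " cells");
  }
}
```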

https://theconversation.com/us/topics/topography-6950

 

Ideation sketch – Converting Visual texture into Audio Visually?

AR Sandbox by UC Davis

The AR Sandbox is an interactive tool combined with 3D visualisation applications to create a real-time elevation colour map, topographic contour lines and simulated water as the sand is reshaped. I think the 3D software used to track the changes in forms can be applied to my project, where the contour lines generated by the stones could serve as data or input for sound. I will research this further after I complete the experimentation for the generative study.

I would consider using the sensors that were suggested (RGB + Clear and IR (facture)) for capturing data. I would first work with the kinect, but if the data generated is insufficient or not specific enough, I would consider other options. I would have to think about where to position the kinect and also use the threshold from the kinect raw depth data tutorial to isolate the captured object.

Other References:
To study texture:
Vija Celmins – photo-realistic nature

Concept development for interaction

Just a possibility:
If the code for capturing real-time visual data is developed enough, I would have the participants collect stones from a walk/ on their path to create generative pieces specific to a journey.
Or it could just exist as an instrumental tool to play around with the sound textures.

Connecting to Performance and Interaction:
I would like to use the developed system on a bigger scale for a performance piece for the semester project for Performance and Interaction class. It would involve capturing human-sized objects in a space on a bigger scale, which would change the threshold of the captured space.

 

InstructionArt – Choreographic Light

Choreographic Light 

Choreographic Light serves as a choreographic system, consisting of a wearable to reflect light directed by laser and geometric ‘scores’, to generate movement.

Using the behaviours of reflected light, the performers are instructed to draw with the point of light reflected from a point on their bodies. Wearing the reflective objects (as costume), the body becomes a tool of interaction with light (directed by a laser). Given geometric ‘scores’ that correspond to the music, the performers move to create visualised forms with the wearable tool. The movement of the performer is derived from the interactions between light and body, as well as their own interpretations of the geometric forms.

Wearable as Choreographic Object

To me, performance has an element of orchestration or script, which comes in many forms, from basic instruction to physical and spatial settings to social context. A choreography is a form of instruction to generate movement, loosely defined as a sequence of staged steps and movements. A costume or wearable serves as an extension of the body, which affects or controls the way we move. I would like to explore how a wearable system that manipulates an intangible material, such as light, in space can be used to generate movement.

Initial sketches

Using the “costume” to define movement, I intend to explore the body as the tool for interaction. My idea was to re-interpret the space occupied by our body (physical or virtual in the form of light), and see how our body readapts to the restriction and what kind of movements are controlled or generated.

Bauhaus Costumes – Das Triadische Ballet

Costumes by Oskar Schlemmer (Bauhaus) for the Triadic Ballet, at Metropol Theater in Berlin. Photo: Ernst Schneider, 1926. (Apic/Getty Images)

Schlemmer, the choreographer of the Bauhaus ballet, intended for the dancers, adorned with geometric costumes, to explore the reinvented silhouettes of their bodies in the avant-garde performance. The costumes serve as metaphysical forms of expression, removing the fluid fabrics and movements archetypal of ballet and replacing them with structural forms and their interpreted gestures.

 

Sketch – Small prototypes of reflective costumes

For the InstructionArt performance, I wanted to test small prototypes of reflective objects, instead of a full costume, on different parts of the body to study how the tool would be used (through the restriction of motion). The wearable objects are designed to be easily worn and come in different sizes, with reflective surfaces that can be interchanged.

Reflective objects that can be worn on different parts of the body
Different shapes can be detached and attached

A laser will be directed to the reflective surface, resulting in a point of light on the wall

Using geometric scores as instruction for movement

As light is an intangible material, the fluid interaction of light and movement is affected by many factors, creating unpredictability in the outcomes. As part of the system, I would like to choreograph the interaction (drawing with the body) using simple lines and geometric forms, such as circles, squares and triangles, that can be easily understood while using the tool. I would like to keep the instruction as minimal as possible so that it will not be distracting. It serves as a directive for the manipulated light, while the interpretation (size, orientation and speed) is up to the dancer.

A geometric score of different lines and shapes is composed for the performance

William Forsythe – Improvisation Technologies 

Improvisation Technologies – Forsythe created video segments about his approach to improvisation for modern dance to train his company’s dancers. His categorisation of different classes of movement (lines, curves, shapes, etc.) can be “analysed as geometrically inscriptive – a formal drawing with the body in space”. I was inspired by his systematic approach of providing techniques/ instruction for improvisation (usually done with no choreography); the choreography then becomes a combination or sequence of set/ designed motions.


‘Improvisation’ Free-Moving state: Moving according to Sound

During the process of “drawing with light” with our bodies, I wanted to see how the body would respond when there is no geometric instruction. Through the experience of drawing with the body, I realised that the light drawings are restricted to lines or curves due to the small point of contact of the reflective objects. Under no instruction, or ‘free-style’, I had the performers move however they liked or with the music. The performers would then use the light to interact with each other, creating patterns and moving with one another to the music, which created a collaborative visual experience.

Future direction – Semester project?

Moving on, I would like to experiment with bigger forms of reflective objects, so that bigger movements can be generated. I would also fix different points of light, instead of having one per object/ performer, so that a spatial performance or spectacle could be generated.

Feedback:
– Using movement to generate music – Collaborative performance with different roles for each performer
– Different parts of the body – more dynamic movements

Reading Reflection 1

A system, composed of parts working together as a whole, is our way of understanding structures and phenomena in the world. One of the reasons we are so in awe of nature is its power of creation, which is not subject to the limitations of the human hand. Nature “evolved from the harmony of the myriad of chances and necessity” and is a constellation of systems that generate forms and phenomena. It is in human culture to break phenomena down into systems, for example by formulating scientific principles to explain how the material world works.

We can take inspiration from the organic “processes found in nature” to condition unpredictability into systems that can generate outcomes infinitely. Generative systems are mechanisms used to create structures that “could not be made by human hands”, using computational models that combine encoded “organic behaviour and spontaneous irregularities” with logic. These systems come in the form of computer languages and digital tools, which are malleable in nature. One small change in the code/ algorithm can lead to a variety of outcomes, or different combinations of written code/ conditional factors can be used to achieve the same or similar outcomes.

The process of using computational systems to generate aesthetic processes involves translating individual components into a series of decisions that can serve as building blocks for the desired result. These decisions have to be interpreted as computer code to satisfy each functional element of the system and the programming process involves a lot of reverse-engineering and ‘trial and error’ to work around the tendencies of the computer algorithms used. The art of coding is the clever manipulation of these generative systems – “choosing computational strategies and appropriate parameters in a combination of technical skill and aesthetic intuition.”

It is interesting to think of Generative Art as works categorised by the strategy of adopting a “certain methodology”, as compared to art movements, which are paradigms of works defined by their characteristics or ideology. The aesthetic application of rules and systems is key in generative artworks, where the artist designs a closed system by setting conditions and parameters within which the unpredictable outcomes exist. Not all generative art has to be code-based, but the art is in the system rather than the results. This allows the work to be timeless: as long as the conditions of the system are available, the outcomes can be generated infinitely.

The next step for Generative Art:

Watz states that while computer code algorithms are essentially immaterial, they display material properties. It would be interesting to see those material properties translated into sensory ones, where computer code algorithms can be used to generate sensory experiences. Sensory perception is specific to the person, and what one perceives would be a subjective experience of the generated sensory activation, which can be seen as a form of unpredictability and generativity.