Semester Project – ‘STONE’ research and update

Semester Project Proposal

https://docs.google.com/presentation/d/1tElhHiD0AllnlLwou-esg3JFUDa_oYsuHohpyuGDFb4/edit?usp=sharing

Update:

Sonic Development (Generative Art):

Generative Artwork – Stone

Materiality and Space (Performance and Interaction):

‘STONE’ AND ‘SPACE’
1. Arrangement of the human body

How it relates to an experience – e.g. staying in a certain body position for a duration of time and what that feels like

Ergonomics 

Human condition 

Physical, Spiritual, Psychological and the Political

“whole entity of a human being: the physical, the spiritual, the psychological and the political” Erwin Wurm in The Artist Who Swallowed the World

Erwin Wurm ‘Disruption’ 2015

2. Arrangement of things/ bodies in space

MAIN EXPLORATION: Arrangement of OBJECTS vs HUMANS

Zen Garden 

How Zen gardens are made, and their philosophy
The arrangement of elements and space – and their effect on experience

Human Geography/ Humanistic geography 

Yi-Fu Tuan

Human geography is the branch of geography that deals with humans and their communities, cultures, economies and interactions with the environment by studying their relations with and across locations. It analyses patterns of human social interaction, interactions with the environment and spatial interdependencies through qualitative and quantitative research methods.

‘Space and Place’ – The Perspective of Experience

Project: Exploring the Arrangement of Humans as Objects (stone)

FYP 20/21 Week 8 Updates

SENSORY EXPERIMENTS

Perceptual box

I am still in the midst of designing my “perceptual box” and fine tuning the details of my sensory experiments.

Referencing:

Yanagisawa, H., & Takatsuji, K. (2015). Effects of visual expectation on perceived tactile perception: An evaluation method of surface texture with expectation effect. International Journal of Design, 9(1).

Surface Texture

  • People perceive and/or predict a surface’s characteristics corresponding to each physical attribute through sensory information, a process we call perceived features (e.g. surface roughness perceived through touch).
  • Using a combination of perceived characteristics of surface texture, people perceive a tactile quality, such as “nice to touch”.

Sensory modalities

  • During such sensory modality transitions, we expect or predict the perceptual experience that we might have through a subsequent sensory modality by first using a prior modality, such as expecting a particular tactile perception by first looking at a surface texture.
  • On the other hand, prior expectations also affect posterior perceptual experiences – a phenomenon known as the expectation effect.
  • Participants were asked to evaluate the tactile quality of a target object under three perceptual mode conditions:
    Visual expectation (V), Touch alone (T) and Touch following a visual expectation (VT)
  • Three aspects were evaluated:
    • Perception of disconfirmation – difference between VT and V
    • Expectation effect – difference between VT and T
    • Perceptual incongruence – difference between V and T

Measurement:
Tactile feeling was evaluated using four opposing adjective pairs (“nice to touch–unpleasant to touch”, “smooth–rough”, “hard–soft” and “sticky–slippery”). Between each adjective pair was a five-point scale; participants responded to each adjective scale by marking their rating on a questionnaire sheet, which employed a semantic differential (SD) scale.

SENSORY EXPERIMENTS 

PART 1
Things I want to explore when I finish constructing the perceptual box. I went back to the hypothesis for my project, and I would like to centre my experiments on connecting the visual properties of light to the visual qualities of materials, to see if a tactile quality can be associated with them.

  1. Playing with Rhythm
    Light quality: Rhythm?
    Visual quality: Pattern
    Trying to connect rhythmic/ flashing lights to a tactile effect

  2. Ryoji Ikeda – Test Pattern (2013)
    https://www.youtube.com/watch?v=XwjlYpJCBgk
  3. Playing with Colour
    Light quality: Frequency?
    Visual quality: Colour
    Connecting a texture (frequency) to colour

    Using touch sensors
  4. Playing with Intensity
    Light quality: Intensity/ Diffusion
    Visual quality: Opacity/ Translucency
    Trying to connect force with sharpness of light

 

 

Slide potentiometer

PART 2

MATERIAL PERCEPTION – Can we perceive/ infer material based on visual results of reflected light?

  • Playing around with a material’s interaction with light, to see if light can be given visual forms/ textures
  • Recreate the textures/ find tactile similarities with physical materials

Build a hologram in my perceptual box:

https://mashable.com/2016/10/24/holovect-3d-projections-star-wars/

 

Interesting Light phenomenon/ visual textures I noticed:

Ryoji Ikeda – Spectra (2014)

 

Reminds me of:

Interpretive flare display of unthought thoughts (2020) neugerriemschneider, Berlin Photo: Jens Ziehe

https://olafureliasson.net/archive/artwork/WEK110941/interpretive-flare-display-of-unthought-thoughts

 

‘Touch’ in Art – What would trigger touch?

Last week, it was mentioned that I could look into works that trigger touch, but I missed out on the example that was given.

Vocab? – To find consensus between descriptions of how light is subjectively perceived

ISEA 2020 – Why Sentience?

DATA GLOVES Workshop 17-18 October

The “Data Gloves” were developed for interacting with the VR environment “Human After”, a piece by Anni Garza Lau. Given the high cost of a set of commercial gloves, we realised we could manufacture a pair of gloves that acquires very detailed information about the position of the fingers for a fraction of the price.

https://www.radiancevr.co/artists/anni-garza-lau/

I am not sure how useful the gloves would be for simulating haptic qualities, as opposed to simply obtaining data, but we will see how it goes.

Creative Industry Report – House of Light by James Turrell

The House of Light, by James Turrell, is located in a hillside forest in Tokamachi, Niigata, Japan. House of Light exists as an experimental work of art that also serves as a guesthouse, where visitors can experience the light with their entire bodies during their stay. Using light as the main medium, which is key in his perceptual works, Turrell combined the intimate light of a traditional Japanese house with his works of light. Both time-specific and site-specific, “House of Light” serves as an experience of living in a Turrell work overnight.

“Outside In” by James Turrell
Image from: https://www.gadventures.com/blog/japan-house-of-light-james-turrell/

For Turrell, who has been searching for and exploring the “perception of light”, House of Light is his attempt to contrast and merge day and night, East and West, tradition and modernity. One of the two works that can be experienced in the House of Light is “Outside In”, a tatami room whose ceiling can be opened to the sky using a sliding panel on the roof. Like Turrell’s Skyspaces, which are constructed chambers with an aperture exposing the sky, “Outside In” is intended to let viewers “live in the light”.

“Light Bath” by James Turrell

Another work that can be experienced in House of Light is “Light Bath”, which explores light in an indoor space and is experienced by those who stay overnight. Using optical fibres that run through the bathroom, the viewer, while immersed in the tub, can experience the interplay of light and shadow against the water.

“House of Light” by James Turrell

The House of Light exists in itself as a work of art and also as an accommodation facility where you can interact with the art spaces. Turrell established the terms of the experience: he wanted guests to interact and discuss their thoughts overnight as they share the world of light and shadow. As a fully functioning guesthouse, it charges 4,000 yen per person, plus a facility charge of 20,000 yen for the building, which is divided evenly between guests.

I think it is intriguing that a work of art can function as a business establishment that sells the time-based experience of art. It alters the conventional way of interacting with an artwork, going beyond a short-term immersion in space. Relating to the concept of ‘experience’, the article “Welcome to the Experience economy” explores the emerging fourth economic offering of experience. As consumers unquestionably desire experiences, more and more businesses are responding by explicitly designing and promoting them. 

House of Light serves as an example of this direct relationship, where guests pay for the ephemeral experience. Considering a more philosophical aspect of the experiential perspective, which involves sensation, perception and cognition, how can I apply the novelty of sensory and perceptual experiences to the real world? The idea of using ‘place’ and giving it an experiential perspective seems to have a viable market, but it should be created for a targeted group that understands the work and would like to experience the art as it is intended. 

References

Pine, B. J., & Gilmore, J. H. (1998). Welcome to the experience economy. Harvard Business Review, 76, 97–105.

Tuan, Y. F. (1977). Space and place: The perspective of experience. University of Minnesota Press.

 

Generative Artwork – Stone

CONCEPT 

What do materials sound like? While sound is subjectively perceived, there is a collective or general sentiment about the descriptive qualities of a sound/ soundscape. We occasionally use tactile qualities (e.g. rough, soft) to describe sounds, assigning textures to what we hear. In music, texture is the way the tempo, melodic and harmonic materials are combined in a composition, determining the overall quality of the sound in a piece. However, most sounds that we hear are more complex than simple harmonies, and there is a more complicated process behind how we perceive and cognitively recognise textures in sound. My project explores how the visual textures of physical materials (stones) can be translated into the auditory, creating an interactive system to draw out the direct link between sound and texture.

CASE STUDY – Classification of Sound Textures

Saint-Arnaud, N. (1995). Classification of sound textures (Doctoral dissertation, Massachusetts Institute of Technology).

http://alumni.media.mit.edu/~nsa/SoundTextures/

“The sound of rain, or crowds, or even copy machines all have a distinct temporal pattern which is best described as a sound texture.” (Saint-Arnaud, 1995). Sounds with such constant long term characteristics are classified as Sound Textures. The thesis investigates the aspects of the human perception and machine processing of sound textures.

“Between 1993 and 1995, I studied sound textures at the MIT Media Lab. First I explored sound textures from a human perception point of view, performing similarity and grouping experiments, and looking at the features used by subjects to compare sound textures.

Second, from a machine point of view, I developed a restricted model of sound textures as two-level phenomena: simple sound elements called atoms form the low level, and the distribution and arrangement of atoms form the high level. After extracting the sound atoms from a texture, my system used a cluster-based probability model to characterize the high level of sound textures. The model is then used to classify and resynthesize textures. Finally, I draw parallels with the perceptual features of sound textures explored earlier, and with visual textures.”

His approach to sound textures from both perspectives, the subjective human perception of textures and their technical synthesis and processing, is especially relevant to my project. The difference is that the thesis explores the perceptual and machine constructs of existing sound textures, while I am trying to generate sounds that could be perceived as, or associated with, the actual tactile qualities of a material. I could say that my process (associating texture to sound) works in the opposite direction from his (associating sound to texture).

Aiming to extract a sound from the visual textures of stone involves two aspects:
1. Converting the three-dimensional form of the material into visual data that can be turned into audio/ sounds
2. Designing a model that generates sounds that the audience would perceive as, or associate with, the tactile qualities of the specific material.

PROCESS

While his project was conducted on a large scale with an abundance of time and resources (the MIT Media Lab), it was also conducted a while back (1993–1995). With the many audio-generation and sound visualisation/ analysis tools introduced since then, my project might be feasible in the short span of time that we have (6 more weeks to Week 13).

Visual textures to Sound – WaveTable Synthesis 

I previously explored wavetable synthesis in SuperCollider (morphing from one wavetable to another), but I was not able to visualise a combined three-dimensional model of the periodic waveforms together with the sound produced (only singular waveforms). To design a model for sound textures, I need to be able to connect the sound I hear to a 3D wavetable. I will have to experiment more with sound synthesis to decide on the type and range of sounds I would like to generate. Based on the above paper, I will look at perceptual features of sound textures to formulate how I could associate a tactile quality/ mood with what we hear.
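As a starting point, here is a minimal SuperCollider sketch of the “stacked” (3D) wavetable idea, using VOsc to morph across a bank of wavetables. The buffer contents are placeholder sine-based spectra; in the actual system each table would be filled from scan data instead, so this is only an assumed illustration of the technique.

```supercollider
// Minimal sketch of a stacked ("3D") wavetable, assuming the server is booted (s.boot).
// The tables here are placeholder spectra; each one would later come from scan data.
(
~numTables = 8;
// VOsc needs consecutive buffer numbers, all in wavetable format and of equal size
~tables = Buffer.allocConsecutive(~numTables, s, 2048);
~tables.do { |buf, i|
    buf.sine1(Array.fill(i + 1, { |j| 1 / (j + 1) }));  // progressively richer waveforms
};

SynthDef(\wt3d, { |out = 0, bufBase = 0, pos = 0, freq = 110, amp = 0.2|
    // pos scans through the stack of wavetables, i.e. the "third dimension"
    var sig = VOsc.ar(bufBase + pos.clip(0, ~numTables - 1.01), freq) * amp;
    Out.ar(out, sig ! 2);
}).add;
)

x = Synth(\wt3d, [\bufBase, ~tables.first.bufnum]);
x.set(\pos, 5.5);   // morph halfway between table 5 and table 6
```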

Ableton Live 10: Sound Design with Wavetable

Obtaining visual data/ 2D forms from 3D material – 3D scanning and processing 

To generate data in real time based on the stone used and its movement, I need software to process the data (RGB data/ 3D scans) captured by the kinect/ camera.

Possible software: 3D Scanner Windows 10 App, OnScale, CocoaKinect

Kinect 3D scanning in OSX with CocoaKinect

http://mskaysartworld.weebly.com/modeling-and-animation-basics.html
https://www.extremetech.com/extreme/236311-researchers-discover-how-to-shape-sound-in-3d

Researchers discover how to shape sound in 3D

I would have to obtain the 2D line vectors/ waves from a 3D scan and use them to compose a 3D wavetable, ideally played in real time.

 

Connecting Both – Open Sound Control (OSC)

I have previously worked with OSC to connect Processing to SuperCollider, and I will continue working on it. I would also like to find a way to design a more complex soundscape in SuperCollider, either by feeding in the designed wavetable synthesis or by individually manipulating how each sound is played (pitch, amplitude, frequency, etc.).
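As a rough sketch of this connection, the SuperCollider side could receive values from Processing with an OSCdef and map them onto a running synth’s parameters. The address name (“/depth”), the synth and the mapping ranges below are assumptions, not the final design.

```supercollider
// Sketch: receiving values from Processing and mapping them onto a running synth.
// Assumes Processing sends a single float to sclang's default port (57120)
// under the address "/depth"; the address and ranges are placeholders.
(
SynthDef(\stoneDrone, { |out = 0, freq = 150, amp = 0.1|
    var sig = LFTri.ar(freq) * amp;
    Out.ar(out, sig ! 2);
}).add;
)

~drone = Synth(\stoneDrone);

(
OSCdef(\depthIn, { |msg|
    var depth = msg[1];                                  // first argument after the address
    ~drone.set(\freq, depth.linexp(0, 255, 80, 1200));   // nearer/brighter -> higher pitch
}, '/depth');
)
```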

If the wavetable does not work out:

I would then use data values instead of visualised lines. Referencing Aire by Interspecifics, I would assign individual data lines/ points in 3D space to specific synthesizers programmed in SuperCollider. Depending on the 3D form, there would be varying depth values along the data line, so different sounds would be produced and played in consonance/ dissonance.

I would still have to study the soundscape design of Aire on their github to learn more about sound design and layering: https://github.com/interspecifics/Aire

SURPRISE

The concept of surprise is explored in my project through the dissonance between our perceptual modalities, which we usually assume to be mutually exclusive even when we know they are not. By connecting what we see, hear and touch, I am trying to touch upon the unnoticed relationships between our senses and play around with how our minds cognitively make sense of the world.

With sound as an intangible material, we don’t usually connect what we physically see to what we hear. When an instrument is played (e.g. a key is pressed, a string is struck), the act serves as a trigger, but it is not the only component of the sound we hear. As humans, we can’t fully understand how a sound is visually and technically synthesised beyond what we hear. With technology, we are able to work from the other end of the spectrum of generating sound, starting from the visual and the components of sound.

Working with textures, I want to associate a tactile quality with sound to enhance the experience of a material and its materiality. Stones are intriguing in themselves; even though seemingly fixed, their forms are intrinsically beautiful. Connecting their visual textures to sound seems like a way to explore their spirit.

TIMELINE

Week 9-10: Work on Sound/ wavetable synthesis + Obtaining visual data
Week 11-12: Design soundscape and interaction with stones
Week 13: Set-up

MediaMob – Heliostat Field

Heliostat Field

Using the concept of a heliostat, my flashmob performance explores the idea of tracking the sun’s movement. A heliostat is a device with a mirror that moves to constantly reflect sunlight towards a predetermined target, compensating for the sun’s apparent motion in the sky. The target can be a physical object distant from the heliostat, or a direction in space.

A heliostat field is a solar thermal power plant in which computer-controlled heliostats use data tracking the sun’s position to focus its rays onto a special receiver at the top of a tower. The concentrated solar energy is used to create steam that produces electricity.

 

Heliostat SA

Initial Ideation

Building upon my InstructionArt project ‘Choreographic Light’, which uses artificial light to generate movement, I want to explore the movement and reflected position of natural light. My initial idea was to use the behaviours of reflected light (from the sun or a light source) to create ‘light’ drawings onto a surface, where the performers behave as ‘human’ heliostats by moving or flipping a mirrored surface to the scripted direction.

Initial idea

However, for the FlashMob project, I wanted to involve more of the body and bigger movements (ironically, as heliostats can’t move) to create a less static performance. Additionally, due to the restrictions on the number of people and materials, I was not sure if I could generate an obvious enough effect (of light).

Performance – Heliostat Field

Instead of generating light as an output, I decided on generating sound as the “predetermined target” according to the movement and position of the human heliostats. This is also in line with my semester project of generating sound using visual textures.

Moving on to a more conceptual understanding of a heliostat, the performers would still be instructed to track the sun’s movement but in a more abstract manner. Using a website that provides real-time data on the sun’s direction and altitude specific to Singapore (https://www.timeanddate.com/sun/singapore/singapore?month=12), I would create a score for the participant’s movement according to the live changes in the sun’s movement.

Parameters of Performative Space

 

Sun-path chart, equidistant projection, generated by Sun-path Chart Software Courtesy: University of Oregon
This solargraph exposed over the course of a year shows the Sun’s paths of diurnal motion, as viewed from Budapest in 2014. Courtesy: Elekes Andor

Referencing the sun-path chart, the course of the sun’s movement relative to a location takes the form of convex lines that vary outwards per time frame (a day). As heliostats, the performers would travel along the paths of the “sun” relative to the position of a specific “location”, which is the kinect connected to the computer generating the sound.

Using string and tape, I mapped out four curves of 10 points around the kinect. The performers would use the points of the web as parameters to navigate the space.

It’s interesting how the shape of the sun paths holds some similarity to the heliostat field.

Increased the sun map to four rows

Score for movement 

Real time data of sun direction and altitude

The participants will be instructed to stand at different points, facing different directions, in front of a kinect connected to a sound-generating system that I have been working on in SuperCollider and Processing. Following the changes in the real-time sun data, the participants have to move according to a score of my design.

Focusing on the sun direction data, a value and a direction are given. Relative to the participant’s original position (north), he/ she has to rotate to face the arrow of the sun direction. The numerical data of both the sun direction and the sun altitude change; whenever a value on the screen changes, the participant moves forward.

Given Instructions

  1. Stand at any point on the map, facing different directions
  2. Use your phone to access website (https://www.timeanddate.com/sun/singapore/singapore?month=10&year=2020)
  3. Rotate your body according to the arrow, relative to where you are facing
  4. Move according to the score when the values change.

FlashMob Outcome

The kinect maps the depth positions of the ‘human heliostats’ and plays a sound whenever a certain depth is reached, generating a soundscape.
See more on my process of generating sound using real-life movement: (https://oss.adm.ntu.edu.sg/a170141/week-7-generative-sketch/)

The kinect, depending on its position, only tracks depth and movement within a limited range. Referencing how the sun can only be seen in the day, the kinect does not capture the entire performative area; the system only responds when the performers move within its range. Whenever a performer enters the captured range of the kinect, a sound is played according to the change in depth value.

To prevent the sound from cluttering (too many nodes), I increased the skip value in Processing from 20 to 60, so that a depth value is extracted every 60 pixels instead of every 20.

For sound generation: whenever the b value (brightness/ depth) obtained exceeds a threshold (>140), meaning that forms are captured within the range of the kinect, a message is sent from Processing. I connected Processing to SuperCollider, where I could design sounds using the SynthDef function; with Processing sending to SuperCollider’s NetAddress, the sound is played whenever the message is received.
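Another way to tame the clutter, besides raising the skip value in Processing, would be to thin the incoming triggers on the SuperCollider side. A hedged sketch: the ‘/brightness’ address follows the set-up described above, but the synth name and the 150 ms gap are assumptions.

```supercollider
// Sketch: only honour an incoming trigger if enough time has passed since the last one.
(
~lastTrig = 0;
OSCdef(\heliostatTrig, { |msg|
    var now = Main.elapsedTime;
    if (now - ~lastTrig > 0.15) {        // at most ~6 notes per second
        ~lastTrig = now;
        Synth(\stone1);                  // assumed name of the SynthDef being triggered
    };
}, '/brightness');
)
```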

Further developments

In the future, I can consider designing more specific sounds and using more data points from the kinect to generate a greater range of sound. I tried using the x and y values from the kinect and mapping them to two different sounds, but SuperCollider could not handle the overwhelming influx of information and the soundscape became cluttered and laggy.

I could also vary the sound according to the depth, to map the sound to individual performers. I would also like to expand the scale of the performative space into a public space and increase the number of participants.

FINAL PERFORMANCE

‘Heliostat Field’ (2020)
Interactive Performance by Alina Ling

Performed by Jake Tan and Nasya Goh (Thank you! :-))

More Documentation:

 

Week 7: Generative Study

SOUND OF STONES

Generative Study:
Real-time sound generation using depth data from kinect

Over Week 7, I experimented with SuperCollider, a platform for real-time audio synthesis and algorithmic composition that supports live coding, visualisation, external connections to software/ hardware, etc. On the real-time audio server, I wanted to experiment with unit generators (UGens) for sound analysis, synthesis and processing, to study the components of sound both through the programming language and visually (wavetables). The plotting and metering tools (plot, frequency analyser, stethoscope, s.meter, etc.) would allow me to explore the visual components of sound.
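For reference, a few one-liners for looking at sound in SuperCollider (standard tools, run after booting the server with s.boot):

```supercollider
// Quick ways to "see" sound in SuperCollider (server booted with s.boot)
{ SinOsc.ar(220) * 0.1 }.plot(0.01);   // plot 10 ms of the waveform
{ SinOsc.ar(220) * 0.1 }.play;         // and listen to it
s.scope;                               // oscilloscope (stethoscope)
FreqScope.new;                         // real-time spectrum analyser
s.meter;                               // input/output level meters
```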

Goals 

  1. Connecting Kinect to processing to obtain visual data (x, y, z/b)
  2. Create and experiment with virtual synthesizers (SynthDef function) in SuperCollider + visualisation
  3. Connect Processing and SuperCollider, send data from Kinect to generate sound (using Open Sound Control OSC)

Initially, I was going to work with Python to feed the data into SuperCollider, but Processing is more suitable for smaller sets of data.

SuperCollider

SynthDef() is an object that allows us to define a function (the design of a sound); the sound is then generated by running Synth.new() or .play, e.g. x = Synth.new(\pulseTest);. This allows us to define different types of sound in SynthDef(\name) functions, and the server allows us to play and combine the different sounds live (live coding).
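The actual \pulseTest definition is not shown here, but a minimal example of the SynthDef–Synth workflow looks like this (the pulse oscillator and its parameters are an assumed example):

```supercollider
// Minimal SynthDef/Synth workflow; the contents of \pulseTest are an assumed example.
(
SynthDef(\pulseTest, { |out = 0, freq = 80, amp = 0.2|
    var sig = Pulse.ar(freq, 0.5) * amp;   // simple pulse oscillator
    Out.ar(out, sig ! 2);                  // stereo output
}).add;
)
x = Synth.new(\pulseTest);   // start the sound
x.set(\freq, 120);           // change an argument while it plays (live coding)
x.free;                      // stop it
```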

Interesting functions:
Within SuperCollider, there are interesting UGens and functions that can be used to shape sound beyond simply calling .play. This is relevant to my project, where I would like to generate the sounds, or the design of the sound, using external data from the kinect. The ones I worked with include MouseX/Y (where the sound varies based on the position of the mouse), wavetable synthesis and the wave shaper (where input signals are shaped using wave functions), and Open Sound Control (OSCFunc, which allows SuperCollider to receive data from an external NetAddr). A small combined sketch follows below.

MouseX

Multi-Wave Shaper (Wavetable Synthesis)
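Here is a small sketch combining two of these, MouseX and the wave shaper (Shaper); the transfer function and the parameter ranges are assumptions for illustration, not my final sound design.

```supercollider
// Sketch combining MouseX control with waveshaping (Shaper); ranges are assumptions.
(
~transfer = Buffer.alloc(s, 1024);
~transfer.cheby([1, 0, 0.5, 0.25]);        // transfer function from Chebyshev weights

SynthDef(\shaped, { |out = 0, freq = 200, amp = 0.2|
    var drive = MouseX.kr(0.1, 1.0);       // mouse position sets how hard the wave is driven
    var sig   = Shaper.ar(~transfer, SinOsc.ar(freq) * drive) * amp;
    Out.ar(out, sig ! 2);
}).add;
)
x = Synth(\shaped);
```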

SuperCollider Tutorials (really amazing ones) by Eli Fieldsteel
Link: https://youtu.be/yRzsOOiJ_p4

https://funprogramming.org/138-Processing-talks-to-SuperCollider-via-OSC.html

Processing + SuperCollider 

My previous experimentation involved obtaining depth data (x, y, z/b) from the kinect and processing it into three-dimensional greyscale visuals in Processing. The depth value z is used to generate a brightness value, b, from white to black (0 to 255), which is reflected in a square drawn every 20 pixels (the skip value). To experiment with using real-time data to generate sound, I thought the brightness value, b, generated in the Processing sketch would make a good data point.

Connecting Processing to SuperCollider 

Using the oscP5 library in Processing, we send data to the NetAddress that SuperCollider is listening on.

Using the brightness value: when the b value is more than 100, a ‘/brightness’ message is sent over OSC to SuperCollider.

When the OSCdef() function is running in SuperCollider and you see ‘-> OSCdef(brightness, /brightness, nil, nil, nil)’ in the Post window, it means SuperCollider is ready to receive data from Processing. After running the Processing sketch, whenever the ‘/brightness’ message is received, the Synth(‘stone1’) is played.
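As a sketch, the receiving side described above looks roughly like this. Only the OSCdef/Synth mechanism follows the set-up described; the contents of the \stone1 SynthDef are an assumed placeholder.

```supercollider
// Rough sketch of the receiving side: play \stone1 whenever "/brightness" arrives.
// The body of the \stone1 SynthDef is an assumed placeholder.
(
SynthDef(\stone1, { |out = 0, freq = 180, amp = 0.15|
    var env = EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2);
    Out.ar(out, SinOsc.ar(freq) * env * amp ! 2);
}).add;

OSCdef(\brightness, { |msg|
    Synth(\stone1);        // one short sound per incoming message
}, '/brightness');
)
```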

Generative Sketch – Connecting depth data with sound

For the purpose of the generative sketch, I am using the data as a trigger for a sound that has been pre-determined in the SynthDef function. Different SynthDef functions can be coded for different sounds. So far, the interaction between the kinect and the sound generation is time-based, where movement generates the beats of a single sound. For a larger range of sounds specific to the captured visual data, and thus textures, I would have to use the received data values within the design of each sound synthesizer.

Improvements

I see the generative sketch as a means to an end, so by no means does it serve as a final iteration. It was a good experiment for me to explore the SuperCollider platform, which is new to me, and I was able to understand the workings of audio a little better. I would have to work more on the specifics of the sound design, playing with its components and making it more specific to the material.

Further direction and Application 

Further experiments would use more data values (x, y, z/b) not only to trigger sound (Synth()) but within the design of the sound itself (SynthDef()). A possible development is to use the wave shaper function to generate sounds from wavetables that are manipulated or transformed by real-time data from the kinect.
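A hedged sketch of that direction: rewriting a wavetable buffer from incoming kinect values, so the data shapes the waveform itself rather than just triggering it. It assumes Processing sends 256 values in one OSC message to ‘/scanline’; the address, the count and the scaling are placeholders.

```supercollider
// Sketch: rebuild the oscillator's wavetable from incoming scan data.
// Assumes one OSC message "/scanline" carrying 256 floats (a power of two).
(
~wt = Buffer.alloc(s, 512);                        // wavetable format needs 2 * 256 samples
SynthDef(\scanOsc, { |out = 0, freq = 80, amp = 0.15|
    Out.ar(out, Osc.ar(~wt, freq, 0, amp) ! 2);
}).add;

OSCdef(\scanIn, { |msg|
    var vals = msg[1..].normalize(-1.0, 1.0);      // drop the address, rescale to audio range
    ~wt.loadCollection(Signal.newFrom(vals).asWavetable);
}, '/scanline');
)
x = Synth(\scanOsc);
```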

Topography – 3D scanning of forms (stones)
Developing a soundscape using WaveTable synthesis

 

Generative Sound and Soundscape Design

I would like to use the pure depth data of three-dimensional forms to shape the individual soundscape (synthesizer) of each sound, so that the sound generated is specific to the material. This relates to my concept of translating materiality into sound, where the textures of the stone correspond to a certain sound. If the stone is unmoved under the camera, an unchanging loop of a specific sound is generated; when different stones are placed under the camera, the sounds are layered to create a composition.

In terms of instrumentation and interaction, I can also use time-based data (motion, distance, etc) to change different aspects of sound (frequency, amplitude, rhythm, etc). The soundscape would then change when the stones are moved.
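A sketch of how that layering could be managed on the SuperCollider side: one looping synth per detected stone, with a depth value modulating it. The message format (‘/stone’ id depth), the SynthDef and the mapping are all assumptions at this stage.

```supercollider
// Sketch: one looping layer per stone, keyed by an id sent from Processing.
// Assumed message format: "/stone" <id> <depth>.
(
~voices = IdentityDictionary.new;

SynthDef(\stoneLoop, { |out = 0, freq = 150, amp = 0.08|
    Out.ar(out, LFTri.ar(freq) * amp ! 2);
}).add;

OSCdef(\stoneLayers, { |msg|
    var id = msg[1], depth = msg[2];
    var freq = depth.linexp(0, 255, 80, 800);             // assumed depth-to-pitch mapping
    if (~voices[id].isNil) {
        ~voices[id] = Synth(\stoneLoop, [\freq, freq]);   // new stone: add a layer
    } {
        ~voices[id].set(\freq, freq);                     // existing stone: modulate it
    };
}, '/stone');
)
```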

Steps for Generative Study:

I have yet to establish a threshold on the kinect to isolate smaller objects and get more data specific to the visual textures of materials. I might have to explore more 3D scanning programs that would allow me to extract information specific to three-dimensionality.

My next step would be to connect more data points from Processing to SuperCollider and to create more specific arguments in SynthDef(). After that, I would connect my point cloud sketch to SuperCollider, where I might be able to create more detailed sound generation specific to 3D space.


Link to Performance and Interaction:

Proposal for Performance and Interaction class:
https://drive.google.com/file/d/1U5J0XajPlCrGfuhPQEI6J1zQDqRu2tJL/view?usp=sharing

Lee Ufan, Relatum- L’ombre des étoiles, 2014

 

Audio Set up for Mac (SuperCollider):

Audio MIDI Setup

 

Semester Project Proposal – Stone

Proposal for Performance & Interaction Semester Project
https://drive.google.com/file/d/1U5J0XajPlCrGfuhPQEI6J1zQDqRu2tJL/view?usp=sharing

My definition of ‘ING A THING’
‘Materialising an experience and Experiencing a material’

Feedback:
Look deeper into ‘What does it mean to be a stone?’ and extract the essence of the material
Go beyond the representational, see what abstract connections can be revealed between the material and experience

Generative Art Project:
Proposal:
https://oss.adm.ntu.edu.sg/a170141/generative-study-generating-sound-using-materiality/
Experimentation/ Execution: https://oss.adm.ntu.edu.sg/a170141/generative-sketch-and-study-updates/

Generative Art Reading 2

Amplifying The Uncanny

Analysing the methodology and applications of machine learning and the Generative Adversarial Network (GAN) framework.

Computational tools and techniques such as machine learning and GANs define how this technology can be applied for generative purposes. The paper explores how these deep generative models are exploited to produce artificial images of human faces (deepfakes), and in turn inverts their “objective function”, turning the process of creating human likeness into one of human unlikeness. The author highlights the concept of “The Uncanny Valley”, introduced by roboticist and researcher Masahiro Mori, which theorises a dip in feelings of familiarity or comfort when the increasing human likeness of artificial forms reaches a certain point. Using the idea of “the uncanny”, Being Foiled maximises human unlikeness by directing the optimisation towards producing images based on what the machine predicts is fake.

Methodology 

Machine learning uses optimisation (finding the best outcome) to satisfy a pre-defined objective function. The algorithms used to process the data produce parameters that determine what can be generated (through the choice of function). In producing deepfakes through the GAN framework, the generator produces random samples and the discriminator is optimised to classify real data as real and generated data as fake, while the generator is trained to fool the discriminator.
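For reference, the standard GAN objective described above is usually written as the following minimax game (a textbook formulation, not quoted from the paper):

```latex
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```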

Being Foiled uses the parameters of the discriminator, which predicts signs that an image is fake, to alter the highly realistic samples produced by the generator. It reverses the process of generating likeness to pinpoint the point at which we cognitively recognise a human face as unreal, which relates to a visceral feeling of dissonance (the uncanny valley). When the system generates pure abstraction, where images can no longer be cognitively recognised, I would imagine that the feeling of discomfort dissipates. In a way, Being Foiled studies the “unexplainable” phenomena of human understanding and feeling.

Applications

As a study, I feel that the generative piece serves its purpose as an introspective visual representation of uncanniness. However, the work could exist as more than an “aesthetic outcome”: the learning can be applied to various fields, such as AI and humanoid robotics, that develop and explore human likeness in machines.

The artificial intelligence field is quite advanced in developing intelligent technology and computers that mimic human behaviour and thinking, treading the fine line between what is living and what is machine. Considering analogue forms of art, Hyperrealism saw artists and sculptors such as Duane Hanson and Ron Mueck recreate human forms in such detail that it is hard to tell visually what is real and what is not. When it comes to robotics and artificial intelligence, what defines something as “human” is the responses produced by the human mind and body. By studying data collected on “normal” human behaviour, AI systems generate responses trained to be human-like. “The Uncanny Valley”, which explores the threshold of human tolerance for non-human forms, the point where imitation no longer feels like imitation, is often referenced in the field. With Being Foiled, the point where uncanniness starts to develop visually can be tracked, and that information can be used when developing these non-human forms.

Geminoid HI by Hiroshi Ishiguro Photo: Osaka University/ATR/Kokoro

Where “Being Foiled” can be applied

When I was at KTH in Stockholm, I was introduced to, and had the experience of using and interacting with, an artificial intelligence robot developed by the university. Furhat (https://furhatrobotics.com/) is a “social robot with human-like expressions and advanced conversational artificial intelligence (AI) capabilities.” He/ she is able to communicate with us humans as we do with each other: by speaking, listening, showing emotions and maintaining eye contact. The interface projects human-like faces onto a three-dimensional screen, and the faces can be swapped according to the robot’s identity and intended function. Furhat constantly monitors the faces (their position and expressions) of the people in front of it, making it responsive to the environment and the people it is talking to.

Article on Furhat:
https://newatlas.com/furhat-robotics-social-communication-robot/57118/
“The system seems to avoid slipping into uncanny valley territory by not trying to explicitly resemble the physical texture of a human face. Instead, it can offer an interesting simulacrum of a face that interacts in real-time with humans. This offers an interesting middle-ground between alien robot faces and clunky attempts to resemble human heads using latex and mechanical servos.”
When interacting with the Furhat humanoid, I personally did not experience any feelings of discomfort, and it seemed to have escaped the phenomenon of “the uncanny valley”. It even seemed friendly and appeared to have a personality.

It is interesting to think that a machine could have a “personality”, and the concept of ‘the uncanny valley’ was brought up when I was learning about the system. What came to my mind was: at which point of likeness to human intelligence would the system reach the uncanny valley (discomfort), beyond just our response to the visuals of human likeness? Could the machine learning technique that predicts whether an image (a facial expression) is fake or real be applied to actual human behaviours (which are connected to facial behaviour in the Furhat system)? This is how I would apply the algorithm/ technique explored in the paper.

An interesting idea:
Projecting the “distorted” faces on the humanoid to explore the feelings of dissonance when interacting with the AI system

The many faces of Furhat. Image from: Furhat Robotics

Conclusion

While generative art cleverly makes use of machine learning techniques to generate outcomes that serve objective functions, the produced outcomes are very introspective in nature. The outcomes should go beyond the aesthetic; the concept can be applied in very interesting ways to artificial intelligence and to what it means to be human.

Generative Sketch and Study Updates

SOUND OF STONES

Project: Creating an instrumental system that uses visual data of textures (stones) to generate sound in real time

For the project, there are two parts:
1. Converting three-dimensional forms into visual data
2. Connecting the data with audio for real-time generation

Generative Sketch

Over Week 5, I experimented with the depth image and the raw depth data sent from the kinect to Processing, to see what kind of visual data (colour/ brightness, depth, etc.) can be obtained from a camera. The kinect has three key components (an infrared projector, an infrared camera and an RGB camera), from which the captured three-dimensional visual data can be sent to Processing.

Using ‘depth image’ 

With the ‘depth image’ sketch, the data values x (horizontal position) and y (vertical position) of each pixel from the kinect are mapped onto the recorded image. The sketch loops over each pixel (x, y), looks up its index in the depth image and obtains its colour/ brightness (b, a single value between 0 and 255). A rectangle of fixed size is then placed at a z value (depth in 3D space) according to its brightness b, so that dark things appear close and bright things appear farther away.

The purpose of this sketch is to see what data values can be obtained from the kinect and whether I can use them as input for audio generation. From this sketch, the depth data that can be obtained from a kinect are the x, y, z and b values, which I think can be used as input data to map the textures of three-dimensional forms.

Using ‘Raw depth data’ to map forms on a point cloud 

For scanning three-dimensional forms, the raw depth data from the kinect (kinect1: 0 to 2048, kinect2: 0 to 4500) might be more useful for generating information about textural surfaces in 3D space. The point cloud can be used to map all the points the kinect is obtaining in 3D space (from the infrared projector).

By giving each pixel an offset value (= x + y * kinect.width), we get a raw depth value d (= depth[offset]). Each PVector point (x, y, d) on the point cloud can then be pushed into three-dimensional space to map the object the kinect is seeing. For smaller objects (I have yet to try this out), a minimum and maximum depth threshold can be used to look at only a particular set of pixels, isolating an object that is close to the kinect.

Tutorial sources:
https://www.youtube.com/watch?v=FBmxc4EyVjs
https://www.youtube.com/watch?v=E1eIg54clGo

 

For sound generation 

Initially, I looked into a virtual modular synthesizer program (VCV Rack) to generate the sounds and to see if they could be coded in real time. However, the program exists only as a modular synthesizer (a good one) for developing complex audio.

I am interested in sending real-time data from the kinect/ camera/ sensor into an audio-generating software. Referencing Aire CDMX (2016) by Interspecifics, I found out that I could use Python (data access) and SuperCollider (virtual synthesizers) to connect data flow to sounds that I design.

Aire CDMX (2016) by Interspecifics
http://interspecifics.cc/wocon
https://muac.unam.mx/exposicion/aire?lang=en

Aire is a generative sound piece that uses data that environmental sensors pick up in real-time in Mexico City. Using a software written in Python to access the real-time data of pollutants, the data flow is used to animate an assembly set of virtual synthesizers programmed in Supercollider. “In the piece, each one of the pollutants has its own sound identity and the fluctuation of the information modulates all its characteristics.”

From this work, I can study their code as a reference for mapping data to designed sounds in SuperCollider. As it is my first time working with Python, I might need some help writing the code that works specifically for my project.

Source Code: https://github.com/interspecifics/Aire

For Week 7 Generative Sketch 

For the next two weeks, I will be working on connecting the raw data values from the kinect to virtual synthesizers that I will develop on SuperCollider. My aim is to see what sounds can be generated when a three-dimensional object is captured using a kinect.

Direction for Week 7:
1. Connecting kinect data to Python
Some references: on Linux with pylibfreenect2 https://stackoverflow.com/questions/41241236/vectorizing-the-kinect-real-world-coordinate-processing-algorithm-for-speed
2. Experiment with SuperCollider to create virtual synthesizers
3. Connect Python and SuperCollider to generate sound using data flow

For Final Project Generative Study

For the final project, my goal is to map the visual textures of materials, in particular stones, to generate an auditory perception of the material. Rather than using raw depth data, I would like to obtain more data specific to three-dimensional forms.

Ideation – Using Topography data
https://arsandbox.ucdavis.edu/#sidr

Topography, in geography, is the study of the arrangement of the natural and artificial physical features of an area. Looking at topographic sketches, I wonder if the approach can be scaled down to map three-dimensional forms through the contours formed by intersecting horizontal planes. I would have to research 3D scanning software and the type of data that can be obtained. As I imagine it, each layer of visual shapes would be converted into corresponding audio feedback (maybe in terms of how the sound wave is shaped/ developed).

https://theconversation.com/us/topics/topography-6950

 

Ideation sketch – Converting Visual texture into Audio Visually?

AR Sandbox by UC Davis

The AR Sandbox is an interactive tool combined with 3D visualisation applications to generate, in real time, an elevation colour map, topographic contour lines and simulated water as the sand is reshaped. I think the 3D software used to track the changes in forms can be applied to my project, where the contour lines generated by the stones can serve as data or input for sound. I would have to research this further after I complete the experimentation for the generative study.

I would consider using the sensors that were suggested (RGB + Clear and IR (facture)) for capturing data. I would first work with the kinect, but if the data generated is insufficient or not specific enough, I would consider other options. I would have to think about where to position the kinect and also use the threshold from the kinect raw depth data tutorial to isolate the captured object.

Other References:
To study texture:
Vija Celmins – photo-realistic nature

Concept development for interaction

Just a possibility:
If the code for capturing real-time visual data is developed enough, I would have participants collect stones from a walk/ along their path to create generative pieces specific to a journey.
Or it could simply exist as an instrumental tool to play around with sound textures.

Connecting to Performance and Interaction:
I would like to use the developed system on a bigger scale for a performance piece for the semester project for Performance and Interaction class. It would involve capturing human-sized objects in a space on a bigger scale, which would change the threshold of the captured space.

 

FYP 20/21 Week 5 Project Updates

For Weeks 5 and 6, I intend to start on experimentation for my project. My first step in crafting the sensory experiments is to determine which visual qualities of light I would like to work with and what kinds of light can be used as tools for exploration.

Readings:
Light
The Lighting Art: The Aesthetics of Stage Lighting Design by Richard H. Palmer
– Chapter 2: Psychophysical Considerations: Light, the Eye, the Brain and Brightness
– Chapter 3: Psychophysical Considerations: Color
– Chapter 4: Psychophysical Considerations: Space and Form Perception

Handbook of Experimental Phenomenology: Visual Perception of Shape, Space and Appearance by Liliana Albertazzi
– Chapter 6: Surface Shape, the Science and the Looks
– Chapter 8: Spatial and Form-Giving qualities of light

Material Perception (Visual and Haptic)

Neural Mechanisms of Material Perception: Quest on Shitsukan / Hidehiko Komatsu  and Naokazu Goda

Visual perception of materials and their properties/ Roland W. Fleming

Visual and Haptic Representations of Material Properties  / Elisabeth Baumgartner, Christiane B. Wiebel and Karl R. Gegenfurtner

Multisensory Texture Perception / Roberta L. Klatzky

Effects of Visual Expectation on Perceived Tactile Perception: An Evaluation Method of Surface Texture with Expectation Effect / Hideyoshi Yanagisawa, Kenji Takatsuji

DEVELOPMENT 

The main goal of my sensory experiments is to see if a perceived tactile quality can be induced when visually perceiving or interacting with light. My experiments will centre on connecting the visual qualities of light to those of materials.

I will have to study the visual and tactile qualities of materials further through the readings, and try to formulate more concrete experiments with light based on those that were conducted with materials. I plan to conduct at least two simple experiments in the following week, before the presentation on Friday (Week 6).

Some concepts/ ideas I am working with:
– Television static and the sensation of uncomfortable tingling or prickling, ‘pins and needles’
– Diffused (soft) vs. Sharp (hard) light, associating a tactile quality to light quality
> If a ray of hard light (laser, projection, etc) comes towards you, would you avoid it with your body?
– Studying motion/ rhythm of light (black and white) and sensory and neural conditions (epilepsy, hypnosis, vertigo, dizziness)
> Replacing objects with light?
– Optical illusions with light? Translating 2D to 3D with projected/ physical light?
> Akiyoshi Kitaoka http://www.ritsumei.ac.jp/~akitaoka/index-e.html
– Squinting at light > diffraction effect

UPDATES (Sensory experiments)
– I will update here the types and specifications of the small sensory experiments that I intend to do over the week