Reading Assignment 3 – “Framework for Understanding Generative Art”

Dorin, Alan, Jonathan McCabe, Jon McCormack, Gordon Monro, and Mitchell Whitelaw. “A framework for understanding generative art.” Digital Creativity 23, no. 3-4 (2012): 239-259.

Aim of the Paper 

In analysing and categorising generative artworks, the critical structures of traditional art do not seem to apply to “process-based works”. The authors of the paper therefore devised a new framework to deconstruct and classify generative systems by their components and characteristics. By breaking the generative processes down into defining components – ‘entities (initialisation, termination)’, ‘processes’, ‘environmental interaction’ and ‘sensory outcomes’ – we are able to critically characterise and compare generative artworks whose underlying generative processes, rather than their outcomes, hold points of similarity.

The paper first looked at previous attempts at and approaches to classifying generative art. By highlighting the ‘process taxonomies’ of different disciplines that adopt the “perspective of processes”, the authors used a ‘reductive approach’ to direct their own framework towards the field of generative art in particular. Generative perspectives and paradigms emerged in various seemingly unrelated disciplines – biology, kinetic and time-based art, computer science – which adopted algorithmic processes or parametric strategies to generate actions or outcomes. Previous studies explored specific criteria of emerging generative systems by “employing a hierarchy, … simultaneously facilitate high-level and low-level descriptions, thereby allowing for recognition of abstract similarities and differentiation between a variety of specific patterns” (p. 6, para. 4). In developing the critical framework for generative art, the authors took into consideration the “natural ontology” of the work, selecting a level of description appropriate to its nature. Adopting “natural language descriptions and definitions”, the framework aims to serve as a way to systematically organise and describe a range of creative works based on their generativity.

Characteristics of a ‘Generative Art System’ 

Generative art systems can be broken down into four (seemingly) chronological components – Entities, Processes, Environmental Interaction and Sensory Outcomes. As generative art is not characterised by the medium of its outcomes, the structures of comparison lie in the approach to and construction of the system.

All generative systems contain independent actors, ‘Entities’, whose behaviour is largely dependent on the mechanism of change, ‘Processes’, designed by the artist. The behaviours of the entities, in digital or physical form, may be autonomous to an extent decided by the artist and determined by their own properties. For example, Sandbox (2009) by the artist couple Erwin Driessens and Maria Verstappen is a diorama in which a terrain of sand is continuously manipulated by a software system that controls the wind. The paper highlights how each grain of sand can be considered a primary entity in this generative system, and how the behaviour of the system as a whole is dependent on the physical properties of the material itself. The choice of entity has an effect on the system; in this particular work the properties of sand (position, velocity, mass and friction) shape the behaviour of the system. I think the nature of the chosen entities is an important factor, especially when it comes to generative artworks that use physical materials.
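To make this concrete for myself, here is a purely illustrative Python sketch of entities with their own physical properties and an artist-defined process acting on them. The names (Grain, wind_process) and values are my own, not code from the paper or from Sandbox:

```python
import random

# Illustrative toy only: entities with physical properties and an
# artist-defined "process" (mechanism of change) acting on all of them.

class Grain:
    """An entity whose behaviour depends on its own physical properties."""
    def __init__(self, position, mass, friction):
        self.position = position
        self.velocity = 0.0
        self.mass = mass
        self.friction = friction

def wind_process(grains, wind_force):
    """Artist-defined mechanism of change applied to every entity."""
    for g in grains:
        acceleration = (wind_force - g.friction * g.velocity) / g.mass
        g.velocity += acceleration
        g.position += g.velocity

# Initialisation: a small population of entities with varied properties.
grains = [Grain(position=i, mass=random.uniform(0.5, 2.0),
                friction=random.uniform(0.1, 0.5)) for i in range(10)]
wind_process(grains, wind_force=1.0)
print([round(g.position, 2) for g in grains])
```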

The entities and the algorithms of change acting upon them also exist within a “wider environment from which they may draw information or input upon which to act” (p. 10, para. 3). The information flow between the generative processes and their operating environment is classified as ‘Environmental Interaction’, where incoming information from external factors (human interaction or artist manipulation) can set or change parameters during execution, leading to different sets of outcomes. These interactions can be characterised by “their frequency (how often they occur), range (the range of possible interactions or amount of information conveyed) and significance (the impact of the information acquired from the interaction of the generative system)”. The framework also classifies interactions as “higher-order” when they involve the artist or designer in the work, who can manipulate the results of the system by intervening in the intermediate generative process or by adjusting the parameters or the system itself in real time, “based on ongoing observation and evaluation of its outputs”. Because higher-order interactions are made based on feedback from the generated results, they hold similarities to machine-learning techniques or self-informing systems. This process results in changes to the system’s entities, interactions and outcomes and can be characterised as “filtering”. Higher-order interaction is prevalent in live coding, for example in audio-generation software that supports it such as SuperCollider or Sonic Pi, where tweaking the performance and its outcome is the main creative input.
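As a hypothetical sketch (the function names and thresholds are my own, not the paper’s), environmental interaction can be thought of as information that re-parameterises the process while it runs, with a higher-order, feedback-driven tweak layered on top:

```python
import random

# Hypothetical sketch: external information re-parameterises the process
# during execution, and a "higher-order" adjustment reacts to the output.

def read_environment():
    """Stand-in for incoming external information (audience, light, wind...)."""
    return random.random()

def run(steps, wind_force=1.0):
    outputs = []
    for step in range(steps):
        # Environmental interaction: frequency = every step,
        # significance = it rescales the force driving the process.
        wind_force *= 0.9 + 0.2 * read_environment()

        # Higher-order interaction (cf. live coding): the artist observes the
        # output so far and damps runaway behaviour in real time ("filtering").
        if outputs and outputs[-1] > 10.0:
            wind_force *= 0.5

        outputs.append(wind_force * step)
    return outputs

print(run(steps=20))
```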

The last component of generative art systems is the ‘Sensory outcomes’, which can be evaluated based on “their relationship to perception, process and entities.” The generated outcomes may be perceived sensorially or interpreted cognitively, as they are produced in different static or time-based forms (visual, sonic, musical, literary, sculptural, etc.). When the outcomes seem unclassifiable, they can be made sense of through a process of mapping, where the artist decides how the entities and processes of the system are transformed into “perceptible outcomes”. “A natural mapping is one where the structure of entities, process and outcome are closely aligned.”
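A toy illustration of such a mapping (the position-to-pitch and velocity-to-brightness choices are invented, not from the paper) might look like this:

```python
# Invented example of an artist-defined mapping from internal system state
# to perceptible outcomes; a "natural" mapping keeps entity, process and
# outcome closely aligned.

def map_to_outcome(position, velocity, pitch_range=(220.0, 880.0), max_velocity=5.0):
    low, high = pitch_range
    pitch_hz = low + (position % 1.0) * (high - low)      # sonic outcome
    brightness = min(abs(velocity) / max_velocity, 1.0)   # visual outcome (0..1)
    return pitch_hz, brightness

print(map_to_outcome(position=0.25, velocity=2.0))        # (385.0, 0.4)
```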

Case studies of generative artworks

The Great Learning, Paragraph 7 – Cornelius Cardew (1971)

Paragraph 7 is a self-organising choral work performed from a written “score” of instructions. The “agent-based, distributed model of self-organisation” produces musically varying outcomes within the same recognisable system. While it depends on human entities, and there is room for interpretation or error in following the instructions, similarities with Reynolds’ flocking system can be observed.
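To illustrate the comparison for myself, here is a toy Python sketch of agent-based self-organisation. This is not the actual Paragraph 7 score, only an illustration of how a simple local rule (“adopt a pitch you can hear from a neighbour”) produces convergence at the group level:

```python
import random

# Toy self-organisation: each "singer" adopts a pitch audible from a neighbour.

def step(pitches):
    new_pitches = []
    for i in range(len(pitches)):
        audible = pitches[max(0, i - 1): i + 2]   # pitches this singer can "hear"
        new_pitches.append(random.choice(audible))
    return new_pitches

pitches = [random.randint(48, 72) for _ in range(16)]     # MIDI note numbers
for _ in range(30):
    pitches = step(pitches)
print(sorted(set(pitches)))   # the pool of distinct pitches can only shrink
```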

Tree Drawings – Tim Knowles (2005)

Using natural phenomena and materials of ‘nature’, the work harnesses the movement of wind-blown branches to create drawings on canvas. Using the found process of natural wind as the generator of the entities’ movement highlights the effect of the physical properties of the chosen materials and of the environment: “The resilience of the timber, the weight and other physical properties of the branch have significant effect on the drawings produced. Different species of tree produce visually discernible drawings.”

The element of surprise is built into the work: the system is highly autonomous, with the artist’s involvement limited to the choice of location, trees and duration. It brings to mind the concept of “agency” in art: is agency still relevant in producing outcomes in generative art systems? Or is there a shift in the role of the artist when it comes to generative art?

 

Tree Drawings, Tim Knowles

The Framework Applied to my Generative Artwork ‘SOUNDS OF STONE’ 

Point-cloud visualisation system for “Stone”

Visual system:

Work Details
‘Sounds of Stone’ (2020) – Generative visual and audio system

Entities
Visual: Stones, Points
Audio: Stones, Data-points, Virtual synthesizers

Initialisation / Termination
Initialisation and termination determined by human interaction (by placing and removing a stone within the boundary of the system)

Processes
Visual and sound states change through placement and movement of stones
Each ‘stone’ entity performs a sound, where each sound corresponds to its visual texture (Artist-defined process)
Combination of outcomes depends on the number of entities in the system
“Live”: the artist, performer or audience can manipulate the outcome after listening to/observing the generated sound and visuals

Environmental Interaction
Room acoustics
Human interaction, behaviours of the participants
Lighting

Sensory Outcomes
Real-time/ live generation of sound and visuals
Audience-defined mapping

As the work is still in progress, I cannot evaluate its sensory outcomes at this point. Judging by the classic features of generative systems used to evaluate Paragraph 7 (a performative instructional piece), such as “emergent phenomena, self-organisation, attractor states and stochastic variation in their performances”, I predict that the sound compositions of ‘Sounds of Stone’ will move from self-organised towards chaotic as the participants spend more time within the system. Since the work exists as a generative tool or instrument-like system, I also predict a time-based familiarity with the audio generation as the audience interacts with it. Through ‘higher-order’ interactions, the audience will intuitively be able to generate ‘musical’ outcomes, converting noise into perceptible rhythms and combinations of sounds.
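To make the breakdown above concrete for myself, here is a minimal Python sketch of how it could be wired together: stones as entities, placement/removal as initialisation/termination, and an artist-defined texture-to-sound mapping. All names, textures and presets are invented placeholders, not the actual implementation of the work:

```python
# Invented placeholder sketch of the 'Sounds of Stone' breakdown above.

class Stone:
    def __init__(self, stone_id, texture):
        self.stone_id = stone_id
        self.texture = texture          # e.g. "rough", "smooth", "porous"

# Hypothetical artist-defined mapping from visual texture to a synth preset.
TEXTURE_TO_SOUND = {
    "rough": "granular_noise",
    "smooth": "sine_pad",
    "porous": "filtered_clicks",
}

active_stones = {}                      # the system's current entities

def place_stone(stone):                 # initialisation by human interaction
    active_stones[stone.stone_id] = stone

def remove_stone(stone_id):             # termination by human interaction
    active_stones.pop(stone_id, None)

def current_outcome():
    """The combination of sounds depends on which entities are in the system."""
    return [TEXTURE_TO_SOUND[s.texture] for s in active_stones.values()]

place_stone(Stone(1, "rough"))
place_stone(Stone(2, "smooth"))
print(current_outcome())                # ['granular_noise', 'sine_pad']
remove_stone(1)
print(current_outcome())                # ['sine_pad']
```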

Generative Art Reading 2

Amplifying The Uncanny

Analysing the methodology and applications of machine learning and the Generative Adversarial Network (GAN) framework.

Computational tools and techniques such as machine learning and GANs are definitive of how this technology is applied for generative purposes. The paper explores how these deep generative models are exploited in the production of artificial images of human faces (deepfakes), and in turn inverts their “objective function”, turning the process of creating human likeness into one of human unlikeness. The author highlights the concept of “The Uncanny Valley”, introduced by the roboticist and researcher Masahiro Mori, which theorises a dip in feelings of familiarity or comfort once the increasing human likeness of artificial forms reaches a certain point. Using the idea of “the uncanny”, Being Foiled maximises human unlikeness by directing the optimisation towards producing images based on what the machine predicts to be fake.

Methodology 

Machine learning uses the process of optimisation (finding the best outcome) with respect to a pre-defined objective function. The algorithms used to process data produce parameters that determine what can be generated (through the choice of function). In producing deepfakes through the GAN framework, the generator produces random samples while the discriminator is optimised to classify real data as real and generated data as fake; the generator is trained to fool the discriminator.
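A minimal sketch of this objective, assuming toy fully-connected PyTorch networks rather than the convolutional face models actually used for deepfakes, could look like this:

```python
import torch
import torch.nn as nn

# Toy GAN training step: D learns real-vs-fake, G learns to fool D.
latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(32, data_dim)                 # stand-in for real data
z = torch.randn(32, latent_dim)

# Discriminator step: classify real as real (1) and generated as fake (0).
fake = G(z).detach()
d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator into predicting "real" for samples.
g_loss = loss_fn(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```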

Being Foiled uses the discriminator, which predicts the signs that an image is fake, to alter the highly realistic samples produced by the generator. It reverses the process of generating likeness in order to pinpoint the point at which we cognitively recognise a human face as unreal, which relates to a visceral feeling of dissonance (the uncanny valley). When the system generates abstraction, where the images can no longer be cognitively recognised, I would imagine the feeling of discomfort dissipates. In a way, Being Foiled studies the “unexplainable” phenomena of human understanding and feeling.
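Sketching the inversion with the same kind of toy setup (untrained networks stand in for the pre-trained generator and discriminator; this only illustrates flipping the target label, and the exact optimisation in Being Foiled may differ):

```python
import torch
import torch.nn as nn

# Inverted objective sketch: fine-tune G so that D rates its samples as
# maximally fake (target label 0) instead of maximally real.
latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)

for _ in range(100):
    z = torch.randn(32, latent_dim)
    foil_loss = loss_fn(D(G(z)), torch.zeros(32, 1))   # push towards "fake"
    opt_g.zero_grad(); foil_loss.backward(); opt_g.step()
```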

Applications

As a study, I feel that the generative piece serves its purpose of producing introspective visual representations of uncanniness. However, the work should exist as more than an “aesthetic outcome”, and its learnings can be applied to fields such as AI and humanoid robotics that develop and explore human likeness in machines.

The artificial intelligence field is quite advanced in the development of intelligent technology and computers that mimic human behaviour and thinking, treading the fine line between what is living and what is machine. In analogue forms of art, Hyperrealism saw artists and sculptors such as Duane Hanson and Ron Mueck recreate human forms in such detail that it is hard to visually differentiate the real from the unreal. When it comes to robotics and artificial intelligence, what defines something as “human” is the responses produced by the human mind and body. By studying data collected on “normal” human behaviour, AI systems generate responses trained to be human-like. “The Uncanny Valley”, which explores the threshold of human tolerance for non-human forms – the point where imitation no longer feels like imitation – is often referenced in the field. With Being Foiled, the point at which uncanniness starts to develop visually can be tracked, and that information can be used when developing these non-human forms.

Geminoid HI by Hiroshi Ishiguro Photo: Osaka University/ATR/Kokoro

Where “Being Foiled” can be applied

When I was at KTH in Stockholm, I was introduced to and had the experience of using and interacting with an artificial intelligence robot developed at the university. Furhat (https://furhatrobotics.com/) is a “social robot with human-like expressions and advanced conversational artificial intelligence (AI) capabilities.” He/she is able to communicate with us humans as we do with each other – by speaking, listening, showing emotions and maintaining eye contact. The interface combines a three-dimensional screen onto which human-like faces are projected; these faces can be swapped according to the robot’s identity and intended function. Furhat constantly monitors the faces (their position and expressions) of the people in front of it, making it responsive to its environment and to the people it is talking to.

Article on Furhat:
https://newatlas.com/furhat-robotics-social-communication-robot/57118/
“The system seems to avoid slipping into uncanny valley territory by not trying to explicitly resemble the physical texture of a human face. Instead, it can offer an interesting simulacrum of a face that interacts in real-time with humans. This offers an interesting middle-ground between alien robot faces and clunky attempts to resemble human heads using latex and mechanical servos.”
When interacting with the Furhat humanoid, I personally did not experience any feelings of discomfort, and it seemed to escape the phenomenon of “the uncanny valley”. It even seemed friendly and appeared to have a personality.

It is interesting to think that a machine could have a “personality”, and the concept of ‘the uncanny valley’ came up when I was learning about the system. What came to my mind was: at which point of likeness to human intelligence would the system reach the uncanny valley (discomfort), beyond just our response to the visuals of human likeness? Can we use the machine-learning technique that predicts what is fake or real in images (facial expressions) on actual human behaviours (which are connected to facial behaviour in the Furhat system)? This is how I would apply the algorithm/technique explored in the paper.

An interesting idea:
Projecting the “distorted” faces onto the humanoid to explore the feelings of dissonance when interacting with the AI system

The many faces of Furhat. Image from: Furhat Robotics

Conclusion

While generative art cleverly makes use of machine-learning techniques to generate outcomes that serve objective functions, the produced outcomes are very introspective in nature. They should go beyond the aesthetic: the underlying concept can be applied in very interesting ways to artificial intelligence and to the question of what it means to be human.

INTERSPECIFICS – Ontological Machines

INTERSPECIFICS

INTERSPECIFICS is an artist collective from Mexico City (founded 2013) experimenting at the intersection of art and science (bio-art/biotechnology). Their creative practice revolves around a collection of experimental research and methodological tools they call “Ontological Machines”, which explore communication pathways between non-human actors and developed systems such as machines, algorithms and bio-organisms. Their body of work focuses on the use of sound and AI to deconstruct the bioelectrical data and chemical signals of various living organisms as generative instruments for inter-species communication, pushing our understanding of the boundaries of human nature and its counterpart, the non-human.

ONTOLOGICAL MACHINES 

An exhibition presented by INTERSPECIFICS of two installation-based sets of hardware that they define as ‘ontological machines’. This methodological classification serves as a framework for exploring the complex expressions of reality, where the systems/mechanisms exist as communication tools that break down the patterns of bio-mechanisms using electromagnetic signals and artificial intelligence.

Image taken at DAAD Gallery, Berlin

Micro-Rhythms, 2016

“Micro-rhythms is a bio-driven installation where small variations in voltage inside microbial cells generate combining arrays of light patterns. A pattern recognition algorithm detects matching sequences and turns them in to sound. The algorithm written in Python uses three Raspberry Pi cameras with Open Computer Vision to track light changes creating a real-time graphic score for an octophonic audio system to be played with SuperCollider. The cells are fuelled using soil samples from every place where the piece is presented, growing harmless bacteria that clean their environment and produce the micro signal that detonates all the processes in the piece. Understood as an interspecies system, the installation amplifies the microvoltage produced by these microscopic organisms and transduces their oscillations into pure electronic signals with which they create an audiovisual system that evokes the origins of coded languages.”

MICRO-RHYTHMS [2016]
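The quoted pipeline (cameras tracking light changes with a Python/OpenCV algorithm, sound played in SuperCollider) could be sketched roughly as follows. The OSC address and camera index are my assumptions, not the collective’s actual code:

```python
import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

# Rough sketch: measure frame-to-frame light changes with OpenCV and forward
# them as OSC messages that a SuperCollider patch could turn into sound.

client = SimpleUDPClient("127.0.0.1", 57120)        # sclang's default port
cap = cv2.VideoCapture(0)                           # assumed camera index
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        change = float(np.mean(cv2.absdiff(gray, prev_gray))) / 255.0
        client.send_message("/light_change", change)  # hypothetical OSC address
    prev_gray = gray
```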

Speculative Communications, 2017

“A machine that can observe and learn from a microorganism and uses the data arising from its behavioural patterns as a source of composition for an audiovisual score. This project is focused on the creation of an artificial intelligence that has the ability to identify repeated coordinated actions inside biological cultures. The AI stores and transforms these actions in to events to which it assigns different musical and visual gestures creating an auto-generative composition according to the decision making logic it produces through time. To accomplish this we will development an analog signal collector and transmission device able to perform its own biological maintenance and an audiovisual platform allowing the expression of these biological signals. The resulting composition will be transmitted live via a server channel where the coevolution process can be monitored in real time. Inspired by research centres such as SETI (Search for extraterrestrial intelligence), Speculative Communications is part a research space for non anthropocentric communication and part a non-human intelligence auto-generative system.”

SPECULATIVE COMMUNICATIONS [2017]

 

The artistic approach of generating communication through audio-visual means between non-human organisms is novel to me. Gathering and processing data with machine-learning algorithms and artificial intelligence, and generating light and sound from those signals in real time, serves as a new way for us to understand these forms. The choice of output may be biased towards us as humans, but do these non-human forms see a need to communicate? Nevertheless, I think INTERSPECIFICS’ paradigm of work is experimental and innovative: the methodological approach is focused on their point of interest in biotechnology (bacteria, plants, slime molds) and on creating a communicative link between humans and non-humans through ‘machines’ – a culmination of scientific knowledge and computer systems.