Week 9 (Oct 15) – Guest talk: Philippe Kocher – Updated!


Consultation notes and material

Daniel Shiffman’s online book The Nature of Code: simulating physical phenomena in Processing

New Media Art Festivals
In their online archives you will find rich repositories of conference papers, artwork examples, tutorials, etc., about new media art and design, including generative art.

ISEA, various locations worldwide (base site)
Ars Electronica, Linz
SIGGRAPH, various locations in USA and Asia/Pacific
SIGGRAPH base site
Transmediale, Berlin
xCoAx, various locations in Europe
Generative Art International Conferences, various locations in Italy

Guest Talk

Philippe Kocher


Partikel vernetzt
Dodecahedron setup (time lapse)
Trails II
Etüden für Klavierautomat

Institute for Computer Music and Sound Technology


Generative video feedback loops (analog) by Philip Galanter

Shepard tone
A good example in film music is the score of Christopher Nolan’s Dunkirk (2017)
In visual arts: Penrose stairs, and Ascending and Descending by M.C. Escher 
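The illusion behind the Shepard tone can be sketched in a few lines of code: octave-spaced sine partials all glide upward in parallel, while a fixed amplitude bell curve over log-frequency fades components out at the top and in at the bottom, so the pitch seems to ascend without end. The following numpy sketch (function name and parameter values are my own, not from the talk) generates such a continuously "rising" tone:

```python
import numpy as np

def shepard_tone(duration=10.0, sr=22050, n_octaves=8, f_base=20.0):
    """Rising Shepard tone: octave-spaced sine partials whose amplitudes
    follow a fixed bell curve over log-frequency. As every partial glides
    up one octave and wraps around, high components fade out while new
    low ones fade in, so the overall spectrum stays static."""
    t = np.arange(int(duration * sr)) / sr
    glide = t / duration                          # 0 -> 1: one-octave sweep
    centre = np.log2(f_base) + n_octaves / 2.0    # bell-curve centre (log2 Hz)
    out = np.zeros_like(t)
    for k in range(n_octaves):
        # frequency of partial k, sweeping upward and wrapping around
        log_f = np.log2(f_base) + (k + glide) % n_octaves
        f = 2.0 ** log_f
        # loudness envelope: quiet at the spectral edges, loud in the middle
        amp = np.exp(-0.5 * ((log_f - centre) / (n_octaves / 6.0)) ** 2)
        phase = 2.0 * np.pi * np.cumsum(f) / sr   # integrate frequency -> phase
        out += amp * np.sin(phase)
    return out / np.max(np.abs(out))
```

Writing the result to a WAV file and looping it gives the endlessly ascending effect; the wrap-around happens where the partials are quietest, so it is barely audible.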


Q: How does the whole system get initialized – by playing one or more initial sounds, or is this random?
A: The whole piece consists of several parts. For each part, I used a different set of delay times to obtain different sound qualities. I excited the feedback system every time with a short noise signal (duration: approx. 0.5 seconds).
Q: I suppose there is virtually no repeatability of the overall sound in this installation?
A: First, a noise generator is a random generator: even though it sounds the same to our ears, the actual succession of random numbers can be completely different each time.
Second, a complex recurrent network can amplify even the smallest details of its input and make them audible. Combining these two facts, the output of the system can be different each time it is excited by noise.
That is why I cheated: I recorded a noise generator once and used that sample. So in fact, this installation always sounds exactly the same.
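As a rough illustration of such a self-regulating feedback system (a toy sketch, not the actual patch behind Partikel vernetzt; the delay lengths and gain are arbitrary choices of mine), the following Python code cross-couples three delay lines through a tanh saturator and excites them with a half-second noise burst. The loop gain is greater than one, so instead of decaying, the network keeps ringing on its own after the burst ends, with the saturation regulating the level:

```python
import numpy as np

def feedback_network(excitation, delays=(311, 401, 503), gain=1.2,
                     sr=22050, seconds=2.0):
    """Toy recurrent feedback network: three cross-coupled delay lines
    with tanh saturation. A linear loop with gain > 1 would explode;
    the soft clipping bounds the level instead, so the network keeps
    sounding after the excitation has ended."""
    n = int(seconds * sr)
    lines = [np.zeros(d) for d in delays]     # circular delay buffers
    idx = [0] * len(delays)
    out = np.zeros(n)
    for i in range(n):
        x = excitation[i] if i < len(excitation) else 0.0
        taps = [lines[j][idx[j]] for j in range(len(delays))]
        s = sum(taps)
        out[i] = s
        for j in range(len(delays)):
            # each line is fed the input plus the sum of the *other* taps
            lines[j][idx[j]] = np.tanh(gain * (x + s - taps[j]))
            idx[j] = (idx[j] + 1) % delays[j]
    return out

# excite with a ~0.5-second noise burst, as described in the answer above
rng = np.random.default_rng(0)
burst = 0.1 * rng.standard_normal(int(0.5 * 22050))
signal = feedback_network(burst)
```

Changing the random seed changes the burst, and the network's sensitivity to small input details means the resulting sound can differ each time, which is exactly the repeatability problem the answer describes.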

Q: Are you (or actually Daniel) considering constructing a smoother/less jagged visualization in openFrameworks or in TouchDesigner?
A: You really should ask Daniel. But I remember a few things he told me during our collaboration. There was always a compromise: Daniel tried to achieve the best visual quality possible in real time on a MacBook back in 2013. And he liked this ‘rough’ look, because he disliked visualisations that look too smooth or streamlined.

Q: What is the performative and perhaps sonic difference compared to automatic pianos of the 18th and 19th century?
A: You are right; music automata have a long history. For a long time, they were the only devices that could reproduce music, since sound recording technologies such as the phonograph were only invented in the late 19th century. Early music automata were organs operated by a pinned cylinder, or chimes in clockworks.
The ‘player piano’ was popular in the late 19th and early 20th century. Many famous pianists recorded their playing on its perforated rolls (these are actually the earliest music recordings that we have!). It was an instrument for music reproduction; only in the 1920s did avant-garde composers begin to discover the intrinsic qualities of this instrument.

Talk Outline

I use generative processes for most of my works, which range from electroacoustic to instrumental music and from sound installations to concert pieces. I will talk about three of my projects. The first one – Partikel vernetzt – exhibits a sound synthesis technique based on self-regulating audio feedback. Interestingly, this synthesis is in itself generative. The second, collaborative project – Trails II – uses a multiagent system to generate its visual and acoustic content. The last project deals with more traditional algorithmic composition: a series of short studies for an acoustic piano played by an automaton.

Philippe Kocher is a musician, composer and researcher. He studied piano, electroacoustic music, music theory, composition and musicology in Zurich, Basel, London and Bern. His artistic and scientific work encompasses instrumental and electroacoustic music, sound installation, algorithmic composition and computer-generated music. He is a research associate at the Institute for Computer Music and Sound Technology in Zurich, and a professor at the Zurich University of the Arts.