Using the concept of a heliostat, my flashmob performance explores the idea of tracking the sun’s movement. A heliostat is a device with a mirror that moves to constantly reflect sunlight towards a predetermined target, compensating for the sun’s apparent motion across the sky. The target can be a physical object distant from the heliostat, or a direction in space.
A heliostat field is a solar thermal power plant in which computer-controlled heliostats use sun-tracking data to focus the sun’s rays onto a special receiver at the top of a tower. The concentrated solar energy is used to create steam, which in turn produces electricity.
Building upon my InstructionArt project ‘Choreographic Light’, which uses artificial light to generate movement, I want to explore the movement and reflected position of natural light. My initial idea was to use the behaviours of reflected light (from the sun or a light source) to create ‘light’ drawings onto a surface, where the performers behave as ‘human’ heliostats by moving or flipping a mirrored surface to the scripted direction.
However, for the FlashMob project, I wanted to involve more of the body and bigger movements (ironically, as heliostats can’t move) to create a less static performance. Additionally, due to restrictions on the number of people and materials, I was not sure if I could generate a light effect that was obvious enough.
Performance – Heliostat Field
Instead of generating light as an output, I decided to generate sound as the “predetermined target”, according to the movement and positions of the human heliostats. This is also in line with my semester project of generating sound using visual textures.
Moving on to a more conceptual understanding of a heliostat, the performers would still be instructed to track the sun’s movement, but in a more abstract manner. Using a website that provides real-time data on the sun’s direction and altitude specific to Singapore (https://www.timeanddate.com/sun/singapore/singapore?month=12), I would create a score for the participants’ movement based on live changes in the sun’s position.
Parameters of Performative Space
Referencing the sun-path chart, the course of the sun’s movement relative to a location takes the form of convex curves that shift outwards over time (one curve per day). As heliostats, the performers travel along the paths of the “sun” relative to the position of a specific “location”: the Kinect connected to the computer that generates sound.
Using string and tape, I mapped out four curves of 10 points each around the Kinect. The performers use the points of this web as parameters to navigate the space.
It’s interesting how the shape of the sun paths holds some similarity to the heliostat field.
Score for movement
The participants are instructed to stand at different points, facing different directions, in front of a Kinect that is connected to a sound-generating system I have been building in SuperCollider and Processing. As the real-time sun data changes, the participants move according to the score I designed.
Focusing on the sun-direction data, a value and a direction are given. Relative to their original position (facing north), each participant rotates to face the arrow of the sun direction. The numerical data for both the sun direction and the sun altitude change over time; whenever a value on the screen changes, the participant moves forward.
- Stand at any point on the map, facing different directions
- Use your phone to access website (https://www.timeanddate.com/sun/singapore/singapore?month=10&year=2020)
- Rotate your body according to the arrow, relative to where you are facing
- Move according to the score when the values change.
The Kinect maps the depth positions of the ‘human heliostats’ and plays a sound whenever a certain depth is reached, generating a soundscape.
See more on my process of generating sound using real-life movement: (https://oss.adm.ntu.edu.sg/a170141/week-7-generative-sketch/)
The Kinect, depending on its position, only tracks depth and movement within a limited range. Echoing how the sun can only be seen during the day, the Kinect does not capture the entire performative area; the system only responds when performers move within its range. Whenever a performer enters the Kinect’s captured range, a sound is played according to the change in depth value.
To prevent the sound from cluttering (too many nodes), I increased the skip value in Processing from 20 to 60, so that a depth value is extracted every 60 pixels instead of every 20.
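As a rough sketch (not the project’s actual Processing code), the effect of the skip value on the number of sampled nodes can be illustrated like this. The 640×480 resolution is the standard Kinect v1 depth image, and `countSamples` is a hypothetical helper mirroring the usual double-loop scan:

```java
// Illustrative only: counts how many depth samples a Processing-style
// nested loop extracts for a given skip value.
public class DepthSampling {
    // Mirrors the typical "for (y += skip) { for (x += skip) ... }"
    // scan over the Kinect depth image.
    static int countSamples(int width, int height, int skip) {
        int count = 0;
        for (int y = 0; y < height; y += skip) {
            for (int x = 0; x < width; x += skip) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Kinect v1 depth image is 640x480.
        System.out.println(countSamples(640, 480, 20)); // 768 nodes
        System.out.println(countSamples(640, 480, 60)); // 88 nodes
    }
}
```

Going from skip 20 to skip 60 cuts the number of nodes from 768 to 88, which is why the soundscape becomes much less cluttered.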
For sound generation: whenever the b value (brightness/depth) obtained exceeds a threshold (>140), meaning that forms are captured within the Kinect’s range, a message is recorded in Processing. I connected Processing to SuperCollider so that I could design sounds using the SynthDef function. With Processing sending to SuperCollider over an external NetAddress, a sound is played whenever the message arrives.
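A minimal sketch of this gate, under my reading of the writeup: the 140 threshold comes from the text, but the class and method names here are hypothetical, and the actual system sends an OSC message to SuperCollider at the trigger point rather than returning a boolean:

```java
// Illustrative sketch of the brightness/depth gate: when a sampled
// value b exceeds the threshold, a form is within the Kinect's range,
// and this is where a trigger message would go out to SuperCollider.
public class DepthGate {
    static final int THRESHOLD = 140; // from the writeup: b > 140

    // True when the sampled brightness indicates a body in range.
    static boolean shouldTrigger(int b) {
        return b > THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldTrigger(180)); // true  -> send message
        System.out.println(shouldTrigger(90));  // false -> silence
    }
}
```

In the real sketch the `true` branch would fire an OSC message (e.g. via a NetAddress) for SuperCollider to play the SynthDef-designed sound.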
In the future, I can consider designing more specific sounds and using more data points from the Kinect to generate a greater range of sound. I tried mapping the x and y values from the Kinect to two different sounds, but SuperCollider could not handle the overwhelming influx of information, and the soundscape became very cluttered and laggy.
I could also vary the sound according to the depth, to map the sound to individual performers. I would also like to expand the performative space into a public setting and increase the number of participants.
‘Heliostat Field’ (2020)
Interactive Performance by Alina Ling
Performed by Jake Tan and Nasya Goh (Thank you! :-))