Cascade no. 2 is a continuation of the analog version of Cascade, but utilising a different type of string: rubber bands. Intended for audiences to pull and interact with, the elastic bands generate sound feedback, which becomes more heavily processed the further one stretches the bands.
However, I altered the structure of the initial concept, as pulling sideways is a more practised movement than pulling downwards. Pulling the rubber bands from underneath would also cause them to snap back and jump around, potentially tangling my strings. Hence, the sideways arrangement of the final project is better suited both to restraining the bands’ elastic recoil and to accommodating familiar human gestures.
Prof’s feedback was that this project might be an instance of too many details: it could be further simplified and yet bring across a more ‘purified’ message. I wanted each band to sound its own unique soundtrack, which instead made the patch more complicated. I also intended for the feedback sound to be more in tune with the vibrations (and hence more responsive), instead of just relying on changing the playback speed. However, I was not able to achieve this in this piece. These are simple details, yet they were exceedingly crucial for this project to be successful. I should have tested it earlier and ruminated more on the different options for this project – e.g. recording the physical twang sound and manipulating it instead of using a pre-recorded sound. Perhaps this would have strengthened the linkage of sound to object and increased the responsiveness of the project.
Overview: Walking on Air (woa) is an interactive installation that allows visitors to trace their own paths in a smoke-filled environment and experience being above the clouds.
Concept: Creating an otherworldly experience, an experimental space where the body becomes the instrument. Individual bodies are disregarded, and instead become part of the bigger picture.
Logistics: Smoke generating machine, strong light projector, video camera, all situated within a contained room (tentatively truss room)
Functionality: In woa, a smoke generator will create a foggy atmosphere in the room, and the projector will continually project a wavy line into the smoke at a height slightly above the calf. When light is projected into smoke, a smoky, surreal form independent of the initial wavy line is created. This creates the illusion of clouds forming, and of walking above or on clouds. When there is movement, a motion camera tracks it, and a light trail formed by the movement (e.g. walking) is immediately projected at the location where the movement is detected. Where movement overlaps or lingers, the light trail becomes brighter. Over time, the light trail dissipates and fades away, restoring the original scene.
There will also be an accompanying soundtrack to the installation. When a new light trail is created, a musical note is played, extending for the entire duration that the light trail exists. When the light trail fades, the volume of the accompanying note also fades away, proportionate to the brightness level of the trail. Each light trail generates a musical note at a randomised pitch; it is hoped that people’s different walking styles would generate a pleasant harmony.
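The trail-and-note behaviour described above can be sketched as a simple update loop. This is a hypothetical Python sketch of the logic, not the actual Max/MSP patch; the decay rate, per-hit brightness gain and MIDI pitch range are all assumptions for illustration:

```python
import random

DECAY = 0.95          # per-frame multiplicative fade (assumed rate)
GAIN_PER_HIT = 0.3    # brightness added wherever motion is detected (assumed)

class LightTrail:
    """One trail: brightness accumulates with motion, then dissipates."""

    def __init__(self):
        self.brightness = 0.0
        # each trail sounds its own randomised pitch (assumed MIDI range)
        self.pitch = random.randint(60, 84)

    def on_motion(self):
        # overlapping or lingering movement makes the trail brighter
        self.brightness = min(1.0, self.brightness + GAIN_PER_HIT)

    def step(self):
        # fade over time; the note's volume tracks brightness directly,
        # so the sound dies away in proportion to the light
        self.brightness *= DECAY
        return self.brightness  # use as the note's volume (0..1)

trail = LightTrail()
trail.on_motion()          # a visitor walks through
trail.on_motion()          # lingers: trail brightens further
volume = trail.step()      # one frame later, slightly dimmer and quieter
```

Tying note volume to the same brightness variable that the projection reads would keep light and sound fading in lockstep, as the concept intends.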
Colour for the light beam would be restricted to white only, to reduce visual distraction and to preserve an ethereal feel. Musical notes would similarly adopt a restrained, minimal character.
Technicalities: Max/MSP would be used for motion tracking, and also as a sound synthesiser.
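As a rough illustration of the motion-tracking step, frame differencing is one common approach: compare consecutive camera frames and flag the cells that changed. This is a hypothetical Python sketch, not the Max/MSP/Jitter implementation; the threshold value is an assumption:

```python
# Frame differencing: flag grid cells whose brightness changed between frames.
THRESHOLD = 30  # assumed sensitivity, on a 0-255 grayscale

def detect_motion(prev_frame, curr_frame, threshold=THRESHOLD):
    """Compare two grayscale frames (2D lists of equal size).

    Returns a list of (row, col) cells where motion was detected;
    each such cell is where a light trail would be projected.
    """
    moved = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(p - q) > threshold:
                moved.append((r, c))
    return moved

prev = [[0, 0], [0, 0]]
curr = [[0, 200], [0, 0]]   # something moved through the top-right cell
hits = detect_motion(prev, curr)   # → [(0, 1)]
```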
Aim: For woa, I wish to achieve an ethereal feel bordering on minimalism, going back to basic, rudimentary elements. The visuals and surroundings are kept sparse, and the available elements kept to a strict minimum. However, I want to imbue a slight element of play, to make the interaction engaging and open to all age groups. It is hoped that visitors leave the installation entranced, as though they had just visited an alternate world. At the same time, a slight sense of bodily displacement would be created: the visitor has an extremely wide range of movement but no set path within this changing, unfamiliar environment.
Inspired by Anthony McCall’s You and I, Horizontal, my installation would ideally also bring across a simple sensation using the most basic instruments.
Interaction: Walking, running in the path of the light, backtracking, circulating
1.You and I, Horizontal II (Anthony McCall)
The usage of the smoke and light beams originated from this work. However, I would like to push it further and make it more interactive with accompanying sounds.
2.On space time foam by Tomàs Saraceno
I would like woa to adopt a similar premise – immersive, playful, yet simple. Perhaps I could analyse the movements and behaviour of visitors who took part in the installation and predict how they would behave in mine.
3.El Claustro by Penique Productions
Penique Productions changes the space by inflating a balloon and wrapping all the items in the surroundings in rubber. Here, they have changed the space visually.
Despite their works being rooted mainly in digital technologies, it was fascinating that teamLab continues to integrate traditional Japanese/East Asian aesthetics into its works – modernising how we view traditional art. Takasu-san highlighted that the Future World exhibit was how teamLab envisioned the future – a digitised playground grounded in traditional play structures. Inevitably, he hints that the future is in technology, and that its ubiquity will permeate our everyday living – starting from the next generation. He also sparked this question in me: is teamLab trying to change the art scene? Traditionalists might argue that their works seem too avant-garde; however, by drawing out and integrating the human quality of play, teamLab keeps its works accessible – to audiences of both traditional and digital art.
With the privilege of having Takasu-san explain the artworks, it was good to finally learn how they were carried out – using Max/MSP to produce the sounds, and laser-sensing technology to accurately trace position and motion. Technology-wise, my skills pale greatly in comparison to teamLab’s, but it was an eye-opener to see how far technology, in the hands of a team with the best expertise, could further art. Similarly, for our upcoming FYP, we might want to embark on a larger-scale project but lack the expertise. Though on a smaller scale, we could take on what teamLab has epitomised – drawing on the expertise of many and creating a collaborative project.
Another point Takasu-san brought up was that people in Silicon Valley do not purchase art, as they ‘look forward’ and not behind (hence hinting that art belongs to the past). However, he later stated that art in a digitised medium is not lagging behind but, in contrast, is at the front of the battle, with teamLab’s Light Sculpture of Flames later being purchased for a permanent collection. Perhaps teamLab is trying to pry open the lid of the present, jogging towards the future of art.
levelHead uses a hand-held solid-plastic cube as its only interface. On-screen, each face of the cube appears to contain a little room, each of which is logically connected to the others by doors.
The visual output is captured via a camera and overlaid onto the printed, checkered markers. After that, the entire image (background and computerised overlaid graphics) is projected onto a different, larger screen.
In one of these rooms is a character. By tilting the cube the player directs this character from room to room in an effort to find the exit.
Some doors lead nowhere and will send the character back to the room they started in, a trick designed to challenge the player’s spatial memory. Which doors belong to which rooms?
There are three cubes (levels) in total, each connected to the next by a single door. Players have the goal of moving the character from room to room, and cube to cube, in an attempt to find the final exit door of all three cubes. If this door is found, the character will appear to leave the cube, walk across the table surface and vanish. The game then begins again. Source
It is a very interesting idea that a simple object can be transformed into a device without the object itself having any technical aspect. Rather, the object – or in this case, the device – acts more as a medium onto which a screen-based projection is overlaid. The blend between the physical object and the projection here is seamless, and feels intuitive to the user. Personally, I find it a very clever and simple idea.
From the documentation video, I noticed that the projections were a little too small to view comfortably, but in subsequent installations a larger screen was used to circumvent this limitation. In addition, I found the concept of this work to be very relevant to its medium: the physical space of imaginary and unseen architecture (through a digital world), realised through muscle memory and brain memory, reflects how modern-day memory formation happens – created through artificial, computerised means, yet still reliant on ‘traditional’ techniques.
It consists of:
– 5 x 5 x 5 cm cubes, with a unique image (marker) on each face
– Computer with LinuxOS
– Sony EyeToy Camera
– Clean White Surface
Phonotonic is a smart object and an app that turns motion into music, blending the physical and musical worlds together. By shaking the Phonotonic, corresponding musical beats, melodies or sound effects are blasted through external speakers. Different musical instruments can also be selected using the accompanying Phonotonic application.
The Phonotonic sensor can also be removed and placed onto other surfaces, e.g. parts of the body. Dance moves, or other motion, can thus trigger more distinctive music. One can also opt to combine two or more Phonotonics for a richer orchestra.
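A device like this presumably maps motion intensity to sound parameters. A minimal illustrative sketch of that idea in Python – not Phonotonic’s actual algorithm; the shake threshold and velocity mapping are assumptions:

```python
import math

SHAKE_THRESHOLD = 12.0  # m/s^2, assumed trigger level (gravity alone is ~9.8)

def shake_intensity(ax, ay, az):
    """Magnitude of a raw accelerometer reading."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def motion_to_beat(ax, ay, az):
    """Map one accelerometer reading to (triggered?, velocity 0-127).

    Harder shakes play louder notes; readings near rest play nothing.
    """
    mag = shake_intensity(ax, ay, az)
    if mag < SHAKE_THRESHOLD:
        return False, 0
    # assumed linear mapping from excess intensity to MIDI-style velocity
    velocity = min(127, int((mag - SHAKE_THRESHOLD) * 10))
    return True, velocity

resting = motion_to_beat(0.0, 9.8, 0.0)   # device at rest: no beat
shaken = motion_to_beat(10.0, 9.8, 8.0)   # vigorous shake: beat triggered
```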
It could be really useful for teaching music to children, or for therapy sessions. Its compact size, along with its simple design, makes it easy for anyone to use. However, the free movement required to play music with it has a downside – the music played is hard to standardise should the same tune need to be replayed.
See it in action:
(Dancers with Phonotonics attached to their body parts) – Skip past 1 min
Does it not remind you of a tun-tun (pig-stick used by the Iban people in Borneo/Malaysia to lure pigs into traps)?
There is much physical resemblance between the sketch draft and the actual object; yet inspiration was not drawn from the tun-tun. Their sole commonality remains their names.
My project, aptly named “Tuntun”, features a sphere-shaped, human-like head, with controls placed around the head (e.g. mouth, top of head, ears) to mimic a human making sound with his own facial features.
Below is a sketch of the areas with sensors:
At present, the ‘head’ is not placed in the correct order and position. Further improvements to the patch are also needed.
The patch is currently incomplete, but here is a quick insight into some parts of it:
I used gizmo~, buffer~ and groove~ in place of playlist. Certain tweaks are still required – for instance, the song stops abruptly when the trigger switches the toggle off. I am trying to include a timer, or a delay, to allow the entire soundtrack to play before it switches off.
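The timer/delay idea amounts to deferring the stop until the current soundtrack has finished. Here is a hypothetical Python sketch of that logic, just to pin down the behaviour the patch needs (it is not Max code; the track length and tick timing are placeholders):

```python
class DeferredStopPlayer:
    """Keep playing until the current soundtrack finishes, even if the
    trigger switches the toggle off mid-track (sketch of intended logic)."""

    def __init__(self, track_length_s):
        self.track_length = track_length_s
        self.elapsed = 0.0
        self.playing = False
        self.stop_requested = False

    def toggle_on(self):
        self.playing = True
        self.elapsed = 0.0
        self.stop_requested = False

    def toggle_off(self):
        # don't stop immediately; just remember that a stop was requested
        self.stop_requested = True

    def tick(self, dt):
        """Advance playback time; honour the stop only once the track ends."""
        if not self.playing:
            return
        self.elapsed += dt
        if self.stop_requested and self.elapsed >= self.track_length:
            self.playing = False

p = DeferredStopPlayer(track_length_s=3.0)
p.toggle_on()
p.toggle_off()   # trigger turns the toggle off mid-track
p.tick(1.0)      # still playing: the track has not finished
mid = p.playing
p.tick(2.0)      # track complete: the deferred stop now takes effect
done = p.playing
```

In the patch itself, the analogous move would be to gate the toggle-off message behind a delay matched to the soundtrack’s remaining duration, rather than letting it cut the signal chain directly.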