STRÄNG Final Installation by Brendan, Bryan and Yue Ling


End-of-Sem Project Proposal:



(Time-Space Warp Simulation)


STRÄNG is a wordplay on the name of Doctor Strange (who bends space and time): ‘sträng’ means ‘string’ in Swedish, and is also coincidentally related to string theory and how our reality is shaped.


A superpower simulation that mimics the bending of space and time. A clock will be mimicked by a running strip of RGB LEDs, with a one-way mirror layer in front of the LEDs to create an infinity-mirror illusion. The mirror will have a circular hole cut in the middle so that people can reach their hands inside and touch a sheet of felt material. When they do so, the RGB lights will change from blue to red across the rainbow spectrum (we will add sound if time permits) and the running lights will start running at a slower speed.

Aside from that, a servo motor behind a larger sheet of the same felt material, away from the mirror, will respond to the action by moving the sheet, creating the illusion that the participant’s hands are moving the sheet without actually touching it. Whenever the motor moves, the LED lights around the frame of the sheet will light up as well. In this sense, participants can feel like they are bending both time and space.

Things needed:

  1. LED strip
  2. Wood material for the box (probably will be spray-painted)
  3. (Elastic) Felt material
  4. Frame to hold felt material
  5. Servo-motor (have to fix something on it)
  6. Diffuser frame (borrow from film store)
  7. Ultrasonic sensors/touch sensors
  8. Speakers (If time permits)


Stuff to code for:

  1. Code for LED running strip (colours and speed)
  2. Servo motor.
  3. Response between touch sensor in the box and the servo motor on the other side



In the end, we had two main programs and used two Arduino boards: one linked the LEDs and the ultrasonic sensor, while the other linked the flex sensor and the servo motors. We shared a lot of the coding workload, so it’s hard to be definitive, but if we really had to divide it, it would be something like:

Bryan – LEDs

Brendan – Ultrasonic sensors

Me – Servo motors

Challenges faced in the form of advice for future programming students:


#1     Do not go shopping at Sim Lim Tower on a Sunday.

#2     Go to Continental Electronics Pte Ltd #B1-23/24/25 to get WS2811 LED strips, which are programmable with Arduino. Those lights are pricey, at $18 per metre, but do it for the project!

#3     If you do not take IM 1 and are deprived of your free Arduino kit, do get wires. Lots of them. Get male-to-female connectors too.

#4     Nope, you can’t laser-cut glass. Only acrylic mirrors from Artfriend (one costs $26)!

#5     Example codes and libraries are your best friends! We spent around three hours one evening trying to code the running-light pattern ourselves and still couldn’t figure it out, but then I found an example code and it solved about 80% of our problems. Just trust open-source culture.
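For context, the running-light example codes generally revolve around a colour-wheel helper. Below is a simplified plain-C++ sketch in the spirit of the NeoPixel strandtest examples, not our exact code; Arduino sketches are C++, so the same logic drops into a sketch directly.

```cpp
#include <cstdint>

// A pixel colour as an (r, g, b) triple of 0-255 channel values.
struct RGB { uint8_t r, g, b; };

// Simplified colour-wheel helper: maps a position 0-255 around a hue
// circle to an RGB triple, fading red -> green -> blue -> back to red.
// Calling this with an offset that increments each frame produces the
// classic "running rainbow" pattern along the strip.
RGB wheel(uint8_t pos) {
    if (pos < 85) {                 // red -> green segment
        return { (uint8_t)(255 - pos * 3), (uint8_t)(pos * 3), 0 };
    } else if (pos < 170) {         // green -> blue segment
        pos -= 85;
        return { 0, (uint8_t)(255 - pos * 3), (uint8_t)(pos * 3) };
    } else {                        // blue -> red segment
        pos -= 170;
        return { (uint8_t)(pos * 3), 0, (uint8_t)(255 - pos * 3) };
    }
}
```

In a real sketch you would loop over the strip and set pixel i to `wheel((i + frameOffset) & 255)`, then increment `frameOffset` each frame to make the gradient run.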

#6     Yes, the film store has flex sensors that can be borrowed. If you get one that is soldered down, you will need male-to-female connectors.

#7     Loose wires were a huge problem for us. We hadn’t tried hot glue yet, but it’s worth a try! Getting longer wires would probably have helped too!

#8     The fabric of the black flag borrowed from the film store is way too tough, so we just used Bryan’s old shirt. (We really wanted to use a stretchy material like spandex initially, but that was too expensive.)


Future Developments:

This installation may be small (because we’re on a low budget), but imagine being in a room with a massive infinity mirror where you can change the lighting with gestures! When you reach your hand in, motors move the other side of the room! T r i p p y

On the day of the presentation we didn’t include music, but when we set the installation up the day after in its proper orientation and played music, it really gave it the atmosphere it needed!! A really important element for immersion.

Also, do give your viewers a proper preface to your installation! A well-conveyed context isn’t just fluff!!! It gives the installation more meaning and shapes the way participants view the artwork!

It is probably worth trying to place the ultrasonic sensor behind the thin piece of fabric, propped up by some foam, instead of placing it directly at the end of the tube, since there it was easily accessible to participants and they could simply have pulled it out.

P.S.: I just named a picture of the LEDs “strang” + “led” but I realised it literally spells “strangled” oh god




By default, the LED lights show a combination of warm hues, and the light gradient rotates at a quick, steady pace.
As the participant’s hand reaches deeper into the tube, the distance read by the ultrasonic sensor decreases, which shifts the lights to cooler hues and slows the rotation.
When the participant’s hand reaches all the way to the end of the tube, the lights change completely to a bluish hue with a rotating purple light.
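That distance-to-colour behaviour boils down to a linear mapping from the sensed distance to a hue and a frame delay. Here is a rough plain-C++ sketch of the idea (Arduino sketches are C++, and `rescale` mirrors Arduino’s built-in `map()`); the tube distances, hue values and delays below are placeholder assumptions, not our actual numbers.

```cpp
#include <cstdint>
#include <algorithm>

// Placeholder tube dimensions -- tune to the physical build.
const long TUBE_NEAR_CM = 5;   // hand at the very end of the tube
const long TUBE_FAR_CM  = 40;  // no hand inside the tube

// Linearly rescale x from [inLo, inHi] to [outLo, outHi],
// the same arithmetic as Arduino's map().
long rescale(long x, long inLo, long inHi, long outLo, long outHi) {
    return (x - inLo) * (outHi - outLo) / (inHi - inLo) + outLo;
}

// Far hand -> warm hue (0 = red end of a 0-255 hue circle) and a fast
// rotation (short frame delay); near hand -> cool hue (160 = blue) and
// a slow rotation. Distance is clamped to the tube's range first.
void paceAndHue(long distanceCm, uint8_t &hue, long &frameDelayMs) {
    long d = std::min(std::max(distanceCm, TUBE_NEAR_CM), TUBE_FAR_CM);
    hue = (uint8_t)rescale(d, TUBE_FAR_CM, TUBE_NEAR_CM, 0, 160);
    frameDelayMs = rescale(d, TUBE_FAR_CM, TUBE_NEAR_CM, 20, 120);
}
```

Each loop iteration would read the ultrasonic distance, call `paceAndHue`, recolour the strip around the resulting hue, and wait `frameDelayMs` before advancing the running pattern.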


Here is a demonstration by our participant Jacob in the video below!

As the participant’s hand reaches the end of the tube and touches the cloth material there, they can push even further to bend the flex sensor. This changes the resistance read by the sensor and triggers the three servo motors coded for in the same program.
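The flex-sensor side can be sketched the same way: read the analog value, and once it crosses a bend threshold, map the excess to a servo angle. The threshold and angle range below are placeholder assumptions, not our calibrated values.

```cpp
#include <algorithm>

// Placeholder calibration -- analogRead() on an Arduino returns 0-1023,
// and bending the flex sensor changes that reading.
const int FLEX_THRESHOLD = 600;  // reading at which the cloth is "pushed"
const int MAX_READING    = 1023; // full-scale analogRead() value
const int MAX_ANGLE      = 90;   // assumed servo sweep in degrees

// Map a flex-sensor reading to a servo angle: below the threshold the
// servos rest at 0 degrees; past it, the angle grows linearly up to
// MAX_ANGLE. The same angle would be written to all three servos.
int servoAngleFor(int flexReading) {
    if (flexReading < FLEX_THRESHOLD) return 0;
    int over = std::min(flexReading, MAX_READING) - FLEX_THRESHOLD;
    return over * MAX_ANGLE / (MAX_READING - FLEX_THRESHOLD);
}
```

In the actual sketch, the loop would read the sensor, compute the angle, and call the Servo library’s `write()` on each of the three attached servos.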

[Final Hyperessay] teamLab – Graffiti Nature: Lost, Immersed and Reborn (2018)


Image taken from: (edited)
teamLab: Graffiti Nature – Lost, Immersed and Reborn (2018)

One of teamLab’s most recent art installations, Graffiti Nature: Lost, Immersed and Reborn (2018), is situated in Amos Rex, an art museum in Helsinki, Finland. It is just one of the many exhibitions that teamLab has globally, in countries such as France, Japan, and even Singapore. teamLab, based in Japan and headed by Toshiyuki Inoko, is an “art collective” of “ultra-technologists” that includes engineers, programmers, CG animators, graphic designers, editors and many more. The interdisciplinary nature of the team is well reflected in their installations, which often use light as paint and the world as their canvas (Mun-Delsalle, 2018). teamLab uses interactivity and the advanced technology developed for hypermedia to blur the boundary between the physical and virtual worlds and heighten the immersion of Lost, Immersed and Reborn.


Interactivity is a forte of this installation and further enhances its immersive quality. In Cybernetics in History, Norbert Wiener discusses the role of the artist as a ‘steersman’: a designer of a ‘catalyst’ that enables a stable reciprocal exchange between human and machine (Wiener, 1954), and we can project this concept onto the context of Lost, Immersed and Reborn.


Cybernetics in the context of Lost, Immersed and Reborn.
(by Tan Yue Ling)


In this digital interactive installation, a virtual ecosystem made of projected light fills the entire room. Participants are invited to interact with the myriad virtual flora and fauna within by colouring in templates with outlines of animals and flowers and scanning their drawings. Once scanned, the drawings are immediately transformed into animated graphics that appear three-dimensional and join the rest of the virtual ecosystem, where participants can then elicit responses by ‘touching’ them. The flora and fauna react differently when ‘touched’: the animals within the ecosystem can ‘eat’ each other; if participants stand still, more flowers grow; and if participants step on the animals, they explode into a splat of colours. teamLab uses light as paint, essentially incorporating real-life characteristics of nature into this virtual ecosystem.


How it works:


The idea of entropy within this piece is evident in how teamLab partially gives up ownership of the artwork to participants, who have the freedom to create and interact with whichever virtual element they choose to evoke whatever response they wish. teamLab’s use of sensors reminds me of John Cage’s Variations series, in which kinaesthetic sensors were used to record movement and evoke different artistic outcomes. In Variations V, the dancers were the participants, creating different sounds with their movements; in Lost, Immersed and Reborn, the public are the participants, creating different visual outcomes within the space with movements that are similarly detected by sensors.

Since participants’ actions were unpredictable, the visual dynamic of the room was constantly changing in an unprogrammed and indeterminate manner: from day to day, the change in the room’s appearance would differ from the day before. With reference to Roy Ascott’s quote on interactive art:

“Interactive Art must free itself from the modernist ideal of the “Perfect Object.” (Ascott, 1966)

teamLab has successfully facilitated an organic outcome in Lost, Immersed and Reborn that results from the unpredictability of participants’ actions, something that would not be achievable without the participation of both man and machine. Giving participants the responsibility of creating the artwork heightens its immersive quality, since participants feel that they exist in, and are able to affect, the virtual world.


Undoubtedly, technology is the backbone of teamLab’s artworks, including Lost, Immersed and Reborn. The state-of-the-art technological devices that teamLab employs bank on a long history of technological development. Earlier works such as Sensorama were limited by the level of advancement in technology.

Sensorama by Morton Heilig

Although Sensorama (launched in 1960) incorporated technological features such as chemical smell simulation and binocular vision, interactive features like a knob or joystick, which would translate physical force into a response in the virtual world, were largely absent. This left the experience rather passive and consequently less immersive.

A later example, the Aspen Movie Map (launched in 1978), had a touchscreen that enabled participants to make associative and non-linear choices along the drive route. However, limitations remained, such as only allowing the participant to view the route in intervals of 10 feet and to move in a fixed number of directions, which made it hard for participants to be fully immersed in the virtual driving experience.

Aspen Movie Map

In contrast to these rudimentary works, the advancement of technology has achieved immense success in enabling elements of reality to be recreated in the virtual world. Current new media can expand the physical world by transcending its boundaries. teamLab uses software such as Unity to generate three-dimensional graphics from the scanned images in Lost, Immersed and Reborn. In this way, art is transferred from a physical medium to a digital medium that acts as a representation of participants’ telepresence in this virtual ecosystem. The virtual ecosystem also acts as an ‘informational surrogate’ (Fisher, 1989) that stores a large volume of digital data to help mimic nature in a digital medium; for example, the movements of a lizard are replicated in the virtual environment. The flattened three-dimensional graphics also showcase teamLab’s “Fold, Divide or Join” principles of viewer centricity, inspired by the concept of Ukiyo-e as Japanese ultra-subjective space, essentially creating a stereoscopic and kinaesthetic visual within a physical room to better simulate a first-person immersive experience.

“Multiple points of view places an object in context thereby animating meaning.” – Scott Fisher in Virtual Environments (Fisher, 1989)

The hardware used in Lost, Immersed and Reborn includes stereoscopic sound devices, light projection and sensors, which allow participants to be immersed seamlessly in the organic virtual ecosystem, choosing where they want to go and what they want to touch to evoke a response. The pace at which the animals move or respond is controlled by the participants, rather than moving passively in a programmed manner at fixed time intervals. With the help of technology, the potential for an installation to grow as an ‘informational surrogate’ becomes immense, and the number of possible ways to duplicate reality increases as well.

This can be best represented by the Reality-Virtuality Continuum (below) which presents the entire possible spectrum of immersive works as a category:

A representational figure of the reality-virtuality continuum, as proposed by Milgram and Kishino in A Taxonomy of Mixed Reality Visual Displays (1994).

Previous VR works show that, as time passes, developments in technology allow for more complex systems in installations, pushing the boundaries of the computer-human interface towards invisibility and, essentially, pushing more VR works towards reality (e.g. augmented-reality games like Pokemon Go, or camera filters). As an installation that incorporates virtual reality (VR), Lost, Immersed and Reborn can be placed on the reality-virtuality continuum (Milgram & Kishino, 1994) (Fig. 1) as augmented virtuality, since it incorporates real-time information into a largely virtual world.

In Lost, Immersed and Reborn, there are various modes of interaction, including scanning, touch sensors and sound, by which physical force translates into digital response. However, it still lacks many elements that could potentially make it “The Ultimate Display” (Sutherland, 1965), defined as “a room within which the computer can control the existence of matter”. The perfect sandbox would give complete liberty in decision making, engage all five senses and resemble reality so closely that disbelief is suspended without thought. teamLab’s design philosophy of bringing people together through co-creativity is reflected extremely well in Lost, Immersed and Reborn, even if it is within a virtual space. Perhaps in future artworks teamLab might explore incorporating cues that engage more senses simultaneously, such as smell and taste; the possibilities of immersion to explore are virtually endless.




Careers | teamLab / チームラボ. (n.d.). Retrieved from

Mun-Delsalle, Y. (2018, August 13). Japanese Digital Art Collective TeamLab Imagines A World Without Any Boundaries. Retrieved September 7, 2018, from

Wiener, N. (1954). Cybernetics in History. In Multimedia: From Wagner to Virtual Reality.

T. (2018, August 29). Graffiti Nature: Lost, Immersed and Reborn. Retrieved from

Ascott, R. (1966). Behavioral Art and the Cybernetic Vision. In Multimedia: From Wagner to Virtual Reality.

Ultrasubjective Space | teamLab / チームラボ. (n.d.). Retrieved from

Fisher, S. (1989). Virtual Environments. In Multimedia: From Wagner to Virtual Reality.

Milgram, Paul & Kishino, Fumio. (1994). A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Information Systems. vol. E77-D, no. 12. 1321-1329.

Sutherland, I. (1965). The Ultimate Display. Wired Magazine.

Artist Selection: teamLab

Universe of Water Particles on Au-delà des limites


For my Final Research Hyperessay, I am stoked to find out more about teamLab, an art collective based in Japan that currently has one of its exhibitions, Future World, in Singapore! Since I’m taking Viscomm and Programming, I find their works really relevant and hope to learn more about their design philosophy.