Costume and Video
Of all the items presented in the exhibit, the one that intrigued me the most in concept was the Avatar Machine by Marc Owens. Although I wasn’t able to try the rig myself, I was able to understand how it functions.
The Avatar Machine is basically a video camera mounted on a rig behind the user. The user’s own vision is blocked, so their only way of navigating the streets is through the camera footage. The idea of the exhibit is to give the individual a third-person point of view in real life and see how they would navigate.
As someone who has played lots of video games, including third-person combat games such as Grand Theft Auto, Arkham Asylum and The Witcher 2, I find it fascinating that such an exhibit exists, where you get to play these fantasies out for real. I can only imagine the experience to be disorienting, as you no longer feel like you’re in total control. In fact, one might even develop an out-of-body experience.
Though simple in concept, there are a few tricks to navigating with the Avatar Machine. First, the footage may be slightly delayed due to latency, leaving the user unsure as to whether they have accomplished a task. Second, they may be able to see themselves, but they have to keep in mind that the rig is still strapped to their back. I found footage of someone standing in the middle of a highway with the device; if he had turned his head a little, one of the passing vehicles could have clipped the rig and sent the user tumbling into oncoming traffic.
In the end, however, the device looks like fun, and I would consider doing something like this in the foreseeable future if I have the time.
The first idea would be to design a touch-based instrument: a set of Makey Makey sensors rigged to a t-shirt, turning the individual into the instrument. An alternative would be a set of controllers rigged to an MP3 player. This would be useful because it eliminates the need to fumble through one’s pocket to change tracks manually. An additional “lock” feature could be added to prevent accidental contact.
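The lock feature could be sketched as a simple bit of state on the controller side. This is only a hypothetical illustration (the class and pad names are made up), assuming one dedicated pad toggles the lock and all other touches are ignored while it is engaged:

```python
# Hypothetical sketch of the "lock" feature on the wearable MP3
# controller: while locked, touches are ignored, so brushing
# against the shirt doesn't skip tracks by accident.

class WearableController:
    def __init__(self):
        self.locked = False
        self.track = 0

    def toggle_lock(self):
        # A dedicated pad (e.g. on the collar) flips the lock state.
        self.locked = not self.locked

    def next_track(self):
        # Ignore the touch while locked; otherwise advance the track.
        if not self.locked:
            self.track += 1
        return self.track
```

In practice the same guard would wrap every pad handler, not just track skipping.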
The second idea would involve a simple animated character that reacts to human interaction. The device would be a touchpad or a set of sensors that triggers a response; a similar device would be the Tamagotchi. Technically, the main hurdle in creating such a device would be setting up the LED screen. The input, however, would be a simple button.
I will be adding a twist in which the character performs like a clicker game, rewarding the user with new reactions and emotes based on the number of clicks received and the rate of clicking. For example, the more aggressively the user clicks, the more movement and action the animated character makes. The device is a statement on clicker-style games on iOS/Android.
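The reward logic described above could look something like this minimal sketch, where the reaction depends on both the total click count and the click rate. The function name, emote names and all thresholds are placeholder assumptions, not anything from the exhibit:

```python
# Toy version of the clicker reward logic: pick a reaction based on
# how many clicks have arrived and how fast they are arriving.
# All thresholds and emote names are made-up placeholder values.

def emote_for(total_clicks, clicks_per_second):
    if clicks_per_second > 5:        # aggressive clicking -> big reaction
        return "frantic jumping"
    if total_clicks > 100:           # long-term milestone reward
        return "new costume unlocked"
    if clicks_per_second > 1:        # moderate engagement
        return "happy wave"
    return "idle blink"              # default resting animation
```

A real build would also decay the click rate over time so the character calms down when left alone.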
So apparently this is a thing that exists. Though the Makey Makey is relatively simple to construct, I found its applications and implications more interesting. The idea here is that inanimate objects have anthropomorphic properties imbued into them. For example, a carrot screams when it is cut, much as an actual person would if they were being chopped, and a water fountain compliments the people who drink from it, and so on. This interaction, if worked on further, could pull more sentences and interaction ideas from the web and “communicate” with the user. In this instance, the action being performed is specific (drinking water), so the sensing and affecting can be just as specific.
The Makey Makey consists of a few connectors, a USB cable and crocodile clips. The user can create a custom controller, or even a conductive surface such as a pencil sketch. Once the user touches the object, the circuit is completed and a low current passes through. To make the device a little more portable, a wireless system could be set up, with the crocodile clips hidden a little better.
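On the computer side, the Makey Makey shows up as a standard USB keyboard, so a completed circuit simply arrives as a key press. A minimal sketch of the mapping step, assuming hypothetical pad-to-key assignments and sound file names:

```python
# Sketch: each Makey Makey pad arrives as a keyboard key; map the
# key to the sound its object should trigger. The key assignments
# and file names below are assumptions for illustration.

PAD_SOUNDS = {
    "space": "scream.wav",      # pad wired to the carrot
    "up": "compliment.wav",     # pad wired to the water fountain
}

def on_key_press(key):
    # Return the sound for the touched object, or None if the
    # key isn't one of our pads.
    return PAD_SOUNDS.get(key)
```

An actual build would hand the returned file to an audio player; the interesting part is that no custom driver is needed at all.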
I can see this device being applied to public benches, automatic doors or lifts that react to human presence (e.g. reacting according to how many times they are used, or reacting when a user pushes a door marked pull), though some will argue that doing so would be either invasive or awkward. An example off the top of my head would be the robot character GERTY from the movie “Moon”. Though that robot had a monotonous voice, it was able to express its emotions through emojis. I can see this replacing greeters in stores, too.
The next installation I would like to talk about is the interactive art piece called Tangible Media. The device is a new take on the pinscreen, a popular desk toy that lets you create a rough 3D model of an object by pressing it into a bed of flattened pins.
In this build, each pin is connected to a motor controlled by a nearby laptop; the Kinect’s camera takes in infrared depth readings of the user and projects them onto the pinscreen.
This project had me thinking about how I could do something similar on a much smaller budget. I would aim a webcam downwards onto my arm and light the setup from above, so that the image registers contrast clearly and keeps the shadows at the edges of the arm rather than falling off to one side. I could then take the RGB/luma values and use cellchecker in Max to read each individual pixel before sending the values to each individual pin.
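The brightness-to-pin mapping could be prototyped outside Max as well. Here is a rough sketch of that step, assuming the webcam frame arrives as a plain 2D list of grayscale values (0–255); the grid sizes and maximum pin height are placeholder choices:

```python
# Sketch of the camera-to-pinscreen mapping: each pin's height is
# driven by the average brightness (luma) of the block of pixels
# it covers. Frame is a 2-D list of grayscale values, 0-255.

def frame_to_pins(frame, pins_x, pins_y, max_height=10):
    h, w = len(frame), len(frame[0])
    bh, bw = h // pins_y, w // pins_x   # pixels per pin, each axis
    heights = []
    for py in range(pins_y):
        row = []
        for px in range(pins_x):
            # Average the brightness of this pin's pixel block.
            block = [frame[y][x]
                     for y in range(py * bh, (py + 1) * bh)
                     for x in range(px * bw, (px + 1) * bw)]
            luma = sum(block) / len(block)
            # Scale 0-255 brightness to a pin height.
            row.append(round(luma / 255 * max_height))
        heights.append(row)
    return heights
```

The resulting grid of heights would then be streamed to the pin motors, one value per pin.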
There are a few limits to this setup, however. Although the pins slide up and down fairly well, they aren’t entirely three-dimensional and still require the user to view the installation from a single vantage point. I could see this installation being applied to data representation or physical topographical maps. On a much larger scale, I could foresee it being used to redefine topological/partitioned spaces in shopping malls, depending on the kinds of events or roadshows presented in the atrium.
One small device which may use this application would be a morphological tactile surface controller, similar to a keyboard that changes ergonomics, which brings me to the next device…
Another device that uses the same principle of a modifiable tactile surface is the customizable MIDI controller. What makes this controller different is its detection of velocity and dynamics, creating a more nuanced controller that can play notes of varying volume. The same principle applies to its stylus pad, which lets users paint more naturally than the rigid Wacom Intuos 3 stylus, which has to be custom configured and has more limited tilt and pressure sensing that must be calibrated. The selection of brushes there is also limited, whereas the Sensel skips the hassle of brush selection entirely.
The touchpad itself is intuitive, and is a simpler, more plug-and-play configuration. Other configurations include the trap set, the keyboard and the MPC controller.
The next device I want to talk about is the Steel Sky exhibition by Christoph De Boeck. The exhibition consists of steel plates suspended overhead, each rigged with a striking mechanism. The interface is connected to a wireless headset that gathers brain signals over eight channels from whoever is wearing it. Driven by the wearer’s brainwave patterns, the mechanisms strike the steel plates, simulating what the sky would sound like if it were made of something as dense as steel.
The technology is provided by the Holst Centre, an independent R&D center involved in wireless autonomous sensor technology. It developed a wireless electroencephalogram (EEG) headset that fits comfortably and can monitor moods in daily-life situations through a mobile app.
Our brain basically processes information through electrical signals generated by networks of electrically excitable neurons. When a neuron is stimulated, charged particles pass through the cell membrane and the signal crosses the synapse, the gap between neurons. This electrical activity is what the device picks up.
The headset tracks localized and synchronized activity in the parietal and occipital lobes, and can detect whether the user is blinking, smiling or raising their eyebrows, along with other, more complicated expressions. The user first thinks about a specific action, and the resulting brain activity is recorded and logged in a database. Whenever the user thinks about performing that same action again, the computer takes the nearest approximate value in the database and performs the action.
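The “nearest approximate value” lookup described above is essentially nearest-neighbour matching. A toy sketch, assuming each recorded thought is stored as a labelled eight-channel signal vector (real EEG classification is far more involved; this only illustrates the matching step):

```python
# Toy nearest-neighbour lookup: match a new 8-channel reading to
# the closest stored recording by Euclidean distance and return
# that recording's action label.

import math

def classify(reading, database):
    # database: list of (label, 8-value signal vector) pairs
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda entry: dist(reading, entry[1]))[0]
```

The more examples logged per action, the more tolerant the match becomes to noisy readings.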
On a technological level, I find this interesting because the headset could be marketed to the general public as a wireless, hands-off way of interacting with devices, and the most amazing aspect of the concept is that it is already being put into practice in augmentation and prosthetics.
I could imagine someone registering all kinds of brain signals based on different thoughts, and then being able to open doors and cabinets without uttering a single word or lifting a finger.