Google Glass /Device #5


Google Glass is an optical head-mounted display, designed in the shape of eyeglasses by Google X (now renamed X).

When worn, the Google Glass displays information and lets the user perform various simple functions, e.g. snapping a photo or sending messages or images, by issuing a voice command or toggling the capacitive touchpad along the right side of the glasses. This information, in the form of images or text, is overlaid onto a glass prism at the front of the glasses without obscuring the wearer’s actual vision.




Watch how it works:

Other functions the Google Glass performs:

  • Remind the wearer of appointments and calendar events.
  • Alert the wearer to social networking activity or text messages.
  • Give turn-by-turn directions.
  • Alert the wearer to travel options like public transportation.
  • Give updates on local weather and traffic.
  • Take and share photos and video.
  • Send messages or activate apps via voice command.
  • Perform Google searches.
  • Participate in video chats on Google Plus.

This information is then projected slightly above one’s line of sight.



Its metaphor, “looking into the future”, is very apt considering the design of the product. The Google Glass eliminates an external device, e.g. a phone or computer, by integrating it into a more convenient, consolidated device. However, I felt that its functions were only rudimentary and do not warrant the $1,500 price tag.


[Cons] The price tag in question deters the common user – ironically, the product was made for the common man, as a replacement for an everyday item such as the mobile phone. Here, there is a mismatch between the product’s function and its marketed cost. Three years after its debut, the Google Glass was deemed a failure for the mainstream market.

One particular aspect to note is that feedback from the Google Glass is limited to the user only – e.g. when the Glass is recording a live video, other people will not realise it; only the wearer can see it on the projected inner screen. Non-users may feel intruded upon, but perhaps this was what Google X wanted to achieve – a product that does not feel too much like a foreign object, hence they eliminated this feedback. It does not, however, bode well for non-users, who may feel disengaged from the wearer. Another comment on the Google Glass concerns its design – it looked too futuristic, and not commonplace enough an item for daily use, which ironically was its intended function.

Another design ‘fault’ I disliked was that the projected screen sits on a single lens only – I felt that if I were to use it, I would squint to focus on the screen – neither ideal nor intuitive.

[Pros] Despite this, the product seems very intuitive – in navigation, in wearing, and in its outcome. Simple swipes (up, down, front, back) toggle the interface, making it easy to learn and manipulate. It is worn over one’s eyes intuitively, like a pair of spectacles, and does not obstruct actual vision – a plus point. In addition, the functions it offers – displaying a map, replying to messages hands-free – create a more ‘human’ experience without the need for another extended, foreign device.

Nevertheless, the Google Glass exhibits great potential, and can be used in more specialised fields, for instance telemedicine, teaching, conference calls, or reporting. The potential of this Augmented Reality device can be tapped into, and further adapted to suit our current needs.

levelHead /Device #4

levelHead is a spatial memory game by Julian Oliver.


levelHead uses a hand-held solid-plastic cube as its only interface. On-screen, each face of the cube appears to contain a little room, and these rooms are logically connected by doors.

The visual output is captured via a camera, and the computerised graphics are overlaid onto the printed, checked markers on the cube. The entire image (background and overlaid graphics) is then projected onto a separate, larger screen.

In one of these rooms is a character. By tilting the cube the player directs this character from room to room in an effort to find the exit.

Some doors lead nowhere and will send the character back to the room they started in, a trick designed to challenge the player’s spatial memory. Which doors belong to which rooms?

There are three cubes (levels) in total, each connected by a single door. The player’s goal is to move the character from room to room, and cube to cube, in an attempt to find the final exit door of all three cubes. If this door is found, the character will appear to leave the cube, walk across the table surface, and vanish. The game then begins again.
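The room-and-door logic described above can be sketched as a simple graph. This is a hypothetical illustration, not Julian Oliver’s actual code; the room names, door directions, and layout are invented:

```python
# Rooms are nodes; doors are directed edges. A "trick" door that leads
# nowhere sends the character straight back to the room it started in.
ROOMS = {
    # room: {door_direction: destination_room}
    "A1": {"north": "A2", "east": "A1"},   # east door loops back (a trick door)
    "A2": {"south": "A1", "east": "A3"},
    "A3": {"west": "A2", "north": "EXIT"},
}

def step(room, direction):
    """Move the character through a door; unknown doors keep it in place."""
    return ROOMS.get(room, {}).get(direction, room)

def walk(start, directions):
    """Apply a sequence of tilts (door choices) and return the final room."""
    room = start
    for d in directions:
        room = step(room, d)
    return room
```

Tilting the cube would correspond to choosing a direction; the player’s challenge is to memorise which doors are trick doors.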


It is a very interesting idea that a simple object can be transformed into a device without the object itself having any technical component. Rather, the object – or in this case the device – acts more as a medium onto which a screen-based projection is overlaid. The blend between the physical object and the projection is seamless, and feels intuitive enough for the user. Personally, I find it a very clever and simple idea.

From the documentation video, I noticed that the projections were a little too small for the eye, but in subsequent installations a larger screen was used to circumvent this limitation. In addition, I found the concept of this artwork very relevant to its medium: the physical space of imaginary, unseen architecture (in a digital world), realised through physical muscle memory and mental memory, reflects how modern-day memory formation occurs – through artificial, computerised means, yet still reliant on ‘traditional’ techniques.

Creation Process

It consists of:
– 5 x 5 x 5 cubes, unique image (marker) on each face
– Computer with LinuxOS
– Sony EyeToy Camera
– Clean White Surface


Software used:



Final Project – #1 Proposal The Guiding Pen


This was my initial proposal for the final project. Inspired by the desire to help children learn how to write, I decided to create a pen that would help correct their grip and assist them in practising writing letters.

In brief, the pen would vibrate when the user writes out of line, or when an incorrect grip is sensed. The Guiding Pen could also serve an extended user base beyond children – stroke patients, or people with diminished motor ability in their hands, could also utilise it.
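The pen’s feedback rule could be sketched roughly as below. The sensor readings, threshold values, and units are all assumptions for illustration, not part of the actual proposal:

```python
# Hypothetical thresholds for the Guiding Pen's haptic feedback.
MAX_DEVIATION_MM = 2.0   # how far the nib may stray from the letter outline
MIN_GRIP_KPA = 5.0       # grip pressure below this reads as too loose a hold
MAX_GRIP_KPA = 40.0      # grip pressure above this reads as too tight a hold

def should_vibrate(deviation_mm, grip_kpa):
    """Vibrate when the stroke leaves the guide line or the grip is off."""
    out_of_line = deviation_mm > MAX_DEVIATION_MM
    bad_grip = not (MIN_GRIP_KPA <= grip_kpa <= MAX_GRIP_KPA)
    return out_of_line or bad_grip
```

For an extended user base such as stroke patients, the thresholds could be loosened per user rather than fixed.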

However, I decided to work on a different project for the final instead, and will shelve this idea for now.

Phonotonic /Device #3


Phonotonic is a smart object and an app that turns motion into music, blending the physical and musical worlds together. Shaking the Phonotonic blasts corresponding musical beats, melodies, or sound effects through external speakers. The musical instrument can also be changed using the accompanying Phonotonic application.


The Phonotonic sensor can also be removed and attached to other surfaces, e.g. parts of the body. Dance moves, or other motion, can thus trigger more unique music. One can also combine two or more Phonotonics for a richer orchestra.
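The motion-to-music idea can be sketched as mapping shake intensity to a sound choice. This is purely illustrative: the magnitude formula, thresholds, and sound names are invented, not Phonotonic’s actual mapping:

```python
import math

def shake_magnitude(ax, ay, az):
    """Acceleration magnitude minus gravity (all in g), as a rough shake measure."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)

def pick_sound(magnitude):
    """Stronger shakes trigger stronger sounds (thresholds are assumed)."""
    if magnitude < 0.2:
        return "silence"
    elif magnitude < 1.0:
        return "soft beat"
    return "heavy beat"
```

This free mapping also illustrates the downside noted below: the same gesture is hard to reproduce exactly, so the same tune is hard to replay.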

Personal thoughts:
It could be really useful for teaching music to children, or for therapy sessions. Its compact size, along with its simple design, makes it easy for anyone to use. However, the free movement required to play music with it has its downside – the music is hard to standardise should the same tune need to be replayed.

See it in action:
(Duo Mode)


(Dancers with Phonotonics attached to their body parts) – Skip past 1 min

Our Personal Cocoon /Our Interactive Device


By Martin, Yi Xian, and Tania

Inspired by the SensorBox, our team decided to harness its sensory qualities and implement them in a personal ‘cocoon’, or space, for each user.


How it functions:
– A smartphone sister application, together with external sensors placed around the house, records and transfers sensory data (e.g. air quality, light, humidity, length of conversations in the house, i.e. a noise measure) to the Pod.
– The Pod then tunes itself to create a correspondingly soothing environment for the user, based on the data collected. E.g. hot weather, with long conversations and long periods of laptop use, corresponds to cool temperatures, blue hues, and soothing music in the Pod.
– The user then steps into the Pod, which can stay upright in a sitting position, or recline so the user lies supine. The Pod can then close up and fully envelop the user.
– The Pod further adjusts according to the user’s ‘live’ data, fine-tuning the environment within. E.g. the music cuts off or changes according to the user’s preferences. These changes occur on auto-pilot, but can also be adjusted manually.
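The sensor-to-environment tuning described above could be sketched as a simple rule table. The sensor names, thresholds, and settings here are our assumptions for illustration, not a finished specification:

```python
# Hypothetical mapping from a day's collected sensor data to Pod settings.
def tune_pod(temp_c, talk_minutes, screen_hours):
    """Choose Pod settings from the day's collected sensor data."""
    settings = {"temperature_c": 24, "hue": "warm", "music": "ambient"}
    if temp_c > 30:                  # hot day -> cool the Pod down
        settings["temperature_c"] = 20
    if talk_minutes > 120 or screen_hours > 6:
        settings["hue"] = "blue"     # long conversations / screen time -> calming
        settings["music"] = "soothing"
    return settings
```

The ‘live’ fine-tuning stage would then override these defaults with the user’s in-Pod readings or manual adjustments.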

We discussed placing the Pod in public spaces, where visitors can get a ‘home away from home’, but thought that having a Pod for personal use at one’s own home could also be a good idea. The user can then wind down properly after a day’s work.

We also thought that the Pod could work as a monitoring device, placed in hospitals (or even jails). It could reduce the need for manpower and improve administration.


Featured Image credits –