Updated as of 29 October 2016
Over the past 2 weeks, we have completed the code for the individual parts of ume. However, we have some concerns:
The light output is not bright enough to be seen under normal daylight, so we are considering changing to WS2812 LEDs. However, we will first test our ultra Yellow LEDs before switching to the WS2812. (Just purchased the Yellow LEDs today.)
A short pause occurs when sensing continuous motion. We will be re-examining the code to resolve this.
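A pause like this often comes from blocking the loop (e.g. a fixed delay after each trigger) instead of timestamping the last motion event and re-checking it every pass. Here is a minimal sketch of that non-blocking logic in Python; the class name, the 2-second hold window, and the overall structure are illustrative assumptions, not our actual Arduino code.

```python
class MotionLight:
    """Keeps a light 'on' while motion continues, without blocking.

    Instead of sleeping after each trigger (which causes a visible pause),
    we record the time of the last motion and re-check it on every loop.
    HOLD_SECONDS is an assumed value, not taken from the ume project.
    """
    HOLD_SECONDS = 2.0

    def __init__(self):
        self.last_motion = None

    def update(self, motion_detected, now):
        if motion_detected:
            self.last_motion = now  # refresh the timer on every new trigger
        # Light stays on until HOLD_SECONDS pass with no new motion
        return (self.last_motion is not None
                and now - self.last_motion < self.HOLD_SECONDS)
```

Because `update` never sleeps, continuous motion keeps refreshing the timer and the light never gaps; the same pattern on an Arduino would use `millis()` in place of the `now` argument.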
The following week will see us:
1. integrating the different code segments into one program
2. continuing to build the ball
3. getting ume to work properly on its own
4. attempting to connect 2 umes together
We are in the midst of building the physical components of ume, and have decided on using a pre-made hamster ball.
We chose it to house our electronics because it has slots for wires to extend out of, and its sizing is optimal. Hamster balls come in a variety of sizes, ranging from 10–14 cm. We will be buying one second-hand off Carousell, getting an 11 cm diameter ball for a start.
Other than the hamster ball, we also considered empty, clear see-through Christmas baubles from Spotlight. However, we opted for the hamster ball as the Christmas baubles were a little too small to contain our breadboard, even though they looked aesthetically more pleasing than a hamster ball.
Final Project by Yi Xian and Tania
I made a little device that tells you what temperature your cuppa is from the colour of an LED light strip. Basically, warmer -> reddish undertone; colder (and nearer to room temperature) -> blueish undertone.
However, the temperature sensor took quite some time to register changes in temperature, so the device might be better suited to testing when your hot water has sufficiently cooled, rather than how hot it is. Waiting for hot water to cool down happens over a longer timeframe, which gives the temperature sensor enough time to settle on the actual temperature.
Up: Final Prototype
How it was made:
The temperature sensor pokes through a tiny hole in the cardboard. The cup is placed above this hole.
An example of the different colours the LEDs are capable of: blue (cold), red (warm).
An Arduino Uno was used, together with an Adafruit library for the code.
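The warm-to-red / cool-to-blue mapping described above can be sketched as a simple linear blend between two colour endpoints. This is a hedged illustration of the idea, not the actual Arduino sketch: the function name and the 25 °C / 60 °C calibration points are assumptions.

```python
def temperature_to_colour(temp_c, cool=25.0, hot=60.0):
    """Map a temperature reading to an (R, G, B) tuple for an LED strip.

    Cooler readings (near room temperature) fade towards blue, hotter
    readings towards red. The cool/hot endpoints are illustrative values,
    not the calibration used in the actual project.
    """
    # Clamp to the [cool, hot] range, then normalise to 0.0-1.0
    t = max(cool, min(hot, temp_c))
    ratio = (t - cool) / (hot - cool)
    red = int(255 * ratio)
    blue = int(255 * (1.0 - ratio))
    return (red, 0, blue)
```

On the real device, the returned tuple would be passed to the Adafruit strip's per-pixel colour call inside the Arduino loop.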
Google Glass is an optical head-mounted display designed in the shape of eyeglasses by Google X (now renamed X).
The Google Glass, when worn, displays information and allows the user to perform various simple functions, e.g. snapping a photo or sending messages or images, by activating a voice command or toggling the capacitive touchpad along the right side of the glasses. This information, in the form of images or text, is overlaid onto a glass prism at the front of the glasses, without obscuring the wearer's vision.
Watch how it works:
Other functions that the Google Glass offers:
This information is then projected slightly above one's line of sight.
Its metaphor, “looking into the future”, is very well suited to the design of the product. The Google Glass eliminates an external device, e.g. a phone or computer, and integrates it into a more convenient, consolidated device. However, I felt that its functions were only rudimentary and did not warrant the $1,500 price tag.
[Cons] The price tag in question deters the common user, and ironically the product was made for the common man, as a replacement for a commonplace, everyday item such as the handphone. Here, there is a mismatch between the product's function and its cost. 3 years after its debut, the Google Glass was deemed a failure for the mainstream market.
One particular issue to note is that feedback from the Google Glass is limited to the user – e.g. when the Google Glass is recording a live video, other people will not realise it; only the user can see it on the projected, inner screen. Non-users may feel intruded upon, but perhaps this was what Google X wanted to achieve – a product that does not feel too much like a foreign object – hence they eliminated this feedback. It does not, however, bode well for non-users, who may feel disengaged from the user himself. Another comment on the Google Glass concerns its design – it looked too futuristic, and not commonplace enough an item for daily use, which was ironically its intended function.
Another design ‘fault’ I disliked was that the projected screen sits on a single lens only – I felt that if I were to use it, I would squint to focus on the screen – neither ideal nor intuitive.
[Pros] Despite this, the product seems very intuitive – in navigation, wearing, and its outcome. Simple swipes (up, down, front, back) toggle the interface, making it easy to learn and manipulate. It is worn over one's eyes intuitively, like a pair of spectacles, and does not obstruct actual vision – a plus point. In addition, the functions it offers – displaying a map, replying to messages hands-free – create a more ‘human’ experience without the need for another extended, foreign device.
Nevertheless, the Google Glass does exhibit unlimited potential, and can be used in more specialised fields, for instance telemedicine, teaching, conference calls or reporting. The potential of this Augmented Reality device can be tapped into, and further adapted to suit our current needs.
levelHead is a spatial memory game by Julian Oliver.
levelHead uses a hand-held solid-plastic cube as its only interface. On-screen, it appears that each face of the cube contains a little room, each of which is logically connected to the others by doors.
The visual output is captured via a camera and overlaid onto the printed markers on the cube. The entire image (background plus the computerised overlaid graphics) is then projected onto a different, larger screen.
In one of these rooms is a character. By tilting the cube the player directs this character from room to room in an effort to find the exit.
Some doors lead nowhere and will send the character back to the room they started in, a trick designed to challenge the player’s spatial memory. Which doors belong to which rooms?
There are three cubes (levels) in total, each of which is connected to the next by a single door. Players have the goal of moving the character from room to room, cube to cube, in an attempt to find the final exit door of all three cubes. If this door is found, the character will appear to leave the cube, walk across the table surface and vanish. The game then begins again.
It is a very interesting idea that a simple object can be transformed into a device, without the object itself having any technical aspect. Rather, the object – or in this case the device – acts more as a medium onto which a screen-based projection is overlaid. The blend between the physical object and the projection here is seamless, and feels intuitive enough for the user. Personally, I find it a very clever and simple idea.
From the documentation video, I noticed that the projections were a little too small for the eye to look at, but in subsequent installations, a larger screen was used to circumvent this limitation. In addition, I found the concept of this art to be very relevant to the medium – the physical space of imaginary and unseen architecture (through a digital world), realised through physical muscle memory and brain memory, reflects how modern day memory formation is created, through artificial computerised means, yet still reliant on ‘traditional’ techniques.
It consists of:
– 5 × 5 × 5 cm cubes, with a unique image (marker) on each face
– Computer running Linux
– Sony EyeToy Camera
– Clean White Surface
This was my initial proposal for the final project. Inspired by wanting to help children learn how to write, I decided to create a pen which would help correct their grip, and assist them in practising writing letters.
In brief, the pen would vibrate when the user writes out of line, or when an incorrect grip is sensed. The Guiding Pen could also serve a wider user base than just children – stroke patients, or people with diminished motor ability in their hands, could also utilise it.
However, I decided to work on a different project for the final project instead, and will shelve this idea for now.
Phonotonic is a smart object and an app that changes motion into music, blending the physical and musical worlds together. By shaking the Phonotonic, corresponding musical beats, melodies or sound effects are blasted through external speakers. Different musical instruments can also be selected using the accompanying Phonotonic application.
The Phonotonic sensor can also be removed and placed onto other surfaces, e.g. parts of the body. Dance moves, or other motions, can thus trigger more unique music. One can also opt to combine two or more Phonotonics for a richer orchestra.
It could be really useful for teaching music to children, or for therapy sessions. Its compact size, along with its simple design, makes it easy for anyone to use. However, the free movement required to play music with it has a downside – the music played is hard to standardise should the same tune need to be replayed.
See it in action:
(Dancers with Phonotonics attached to their body parts) – Skip past 1 min
By Martin, Yi Xian, and Tania
Inspired by the SensorBox, our team decided to harness its sensory qualities and implement them in a personal ‘cocoon’, or space, for each user.
How it functions:
– A smartphone sister application, and external sensors placed around the house, record and transfer sensory data (e.g. air quality, light, humidity, length of conversations in the house, i.e. a noise measure) over to the Pod.
– The Pod then tunes itself to create a corresponding soothing environment for the user, based on the data collected. E.g. hot weather, with long conversations and long periods of laptop use, corresponds to: cool temperatures, with blue hues and soothing music in the Pod.
– The user then steps into the Pod, which can stay upright in a sitting position, or recline so the user lies supine. The Pod can then close up and fully envelop the user.
– The Pod then further adjusts according to the user's ‘live’ data, fine-tuning the environment within the Pod. E.g. music cuts off, or changes according to the user's preferences. These changes occur in auto-pilot mode, but can also be adjusted manually.
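The mapping step above – sensor readings in, environment settings out – can be sketched as a small rule table. Everything here is an illustrative assumption (the reading names, thresholds, and setting values), since the Pod is only a concept, not a built system.

```python
def pod_settings(readings):
    """Choose Pod environment settings from household sensor readings.

    `readings` is a dict such as {"temp_c": 32, "noise_minutes": 90,
    "screen_minutes": 200}. All keys, thresholds and setting names are
    illustrative assumptions, not a specification of the actual Pod.
    """
    # Neutral defaults when no rule fires
    settings = {"temperature": "neutral", "hue": "warm", "audio": "off"}
    if readings.get("temp_c", 25) > 30:
        settings["temperature"] = "cool"      # hot day -> cool the Pod
    if readings.get("noise_minutes", 0) > 60:
        settings["audio"] = "soothing music"  # long conversations -> calm audio
    if readings.get("screen_minutes", 0) > 120:
        settings["hue"] = "blue"              # long laptop use -> restful hues
    return settings
```

The ‘live’ fine-tuning stage would simply re-run a rule set like this on the in-Pod sensor stream, with manual overrides taking priority.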
We discussed placing the Pod in public spaces, where visitors can get a ‘home away from home’, but thought that having a Pod for personal use at one's own home could also be a good idea. The user can then wind down properly after a day's work.
We also thought that the Pod could work as a monitoring device, placed in hospitals (or even jails). It could reduce the need for manpower and improve administration.
Featured Image credits – https://www.behance.net/gallery/3838729/Telepresence-Pod-The-Cocoon
Belty Good Vibes is a wearable device created by Emiota, a French start-up, and it is available for pre-order from $395 onwards.
The smart belt – called The Belty – connects to the user’s smartphone to set up different preferences based on things like sitting or standing.
The Belty has a number of motors built in, and it will automatically tighten when the wearer stands up, and loosen when he sits down. It will also loosen if the wearer has eaten too much.
Apart from this most logical use, the Belty can also track the user's activity via a built-in accelerometer and gyroscope.
It knows exactly how much you’ve been moving and if you should be more active in your life. It will also give you a nudge if you’ve been sitting for too long.
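The tighten/loosen behaviour described above amounts to a small posture-driven rule. Here is a hedged sketch of that logic; the function, the action names and the after-meal flag are my own illustration, since Emiota has not published Belty's actual control code.

```python
def belt_adjustment(posture, after_meal=False):
    """Pick a belt action from the wearer's detected posture.

    Mirrors the behaviour described above: tighten on standing, loosen on
    sitting, and loosen even when standing after a large meal. All names
    here are illustrative guesses, not Belty's real API.
    """
    if posture == "standing":
        # Eating too much overrides the usual tighten-on-standing rule
        return "loosen" if after_meal else "tighten"
    if posture == "sitting":
        return "loosen"
    return "hold"  # walking, lying down, etc.: leave tension unchanged
```

In the real product, the posture input would come from the belt's accelerometer and gyroscope rather than being passed in directly.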
In addition, Belty comes with a sister application, to showcase data recorded by the device, and help the user plan for a healthier lifestyle.
According to an article on IT ProPortal, the founders of Emiota believe that we shouldn't be creating new wearable things, but should instead add technology to what we already wear, including shoes, glasses, gloves or belts. This interesting opinion was also noted by Prof Demers, who mentioned that by utilising items (technology) already in use, users already have a preconceived idea of how to use the item. It thus assists usability, making it easier for the user to interact with the ‘newer’ item.
Here is a YouTube video showcasing the prototype:
The Necomimi is a wearable device which senses the user's brainwaves and reacts accordingly. The neko (cat) ears at the top of the band wiggle and change direction in accordance with the brainwaves sensed.
How it works (Official Company Statement):
Step 1: Neurons firing in the brain give off electrical impulses, which are picked up by the forehead sensor.
Step 2: The Necomimi headset captures brainwave data, filters out electrical noise from the environment, and interprets it with NeuroSky’s Attention and Meditation algorithms.
Step 3: Your mental state is translated into ear movements and shared with those around you!
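The three steps above boil down to mapping two 0–100 scores onto ear poses. The sketch below illustrates that final translation step; the score range matches NeuroSky's eSense scale, but the pose names and thresholds are my own guesses, not Necomimi's actual mapping.

```python
def ear_position(attention, meditation):
    """Translate NeuroSky-style Attention/Meditation scores (0-100)
    into a named ear pose.

    The thresholds and pose names are illustrative assumptions; the
    real Necomimi's mapping has not been published.
    """
    if attention >= 70:
        return "perked up"    # high focus -> ears stand upright
    if meditation >= 70:
        return "drooped"      # deep relaxation -> ears flatten
    return "neutral wiggle"   # mixed state -> idle movement
```

On the headset itself, this pose would then drive the small motors in each ear.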
The Necomimi adopts a very simple, user-friendly design – ear-like extensions on both sides, and a protruding sensor to sense EEG. The Necomimi perches on a user's head the same way a headband does.
It is a visual representation of our brainwaves, and like our feline friends who communicate via body language, the Necomimi tries to emulate this unspoken communication through a realistic depiction of cat's ears.
In my opinion, the Necomimi definitely appeals to the common crowd, especially animal-lovers. Its cute and simple design makes it easily an accessory to everyday wear. However, beyond its novelty, there is little practical use for it.
Perhaps more additions could be made to it. To increase its user mileage, and interaction between fellow Necomimi users, the Necomimi could:
– light up when other users are near, prompting the user to interact with fellow neco enthusiasts
– include other sound effects, or changes in colour, i.e. more output feedback