After doing some fun versions of the WindBot, I decided to make my final a cat toy that can be controlled from a distance. With a fluffy ball on a string attached to the end of the WindBot, the ZigSim-controlled bot moves around erratically using the gyro sensors on the phone, which connect to the servo motors through Arduino. There are three 180-degree servos, each one reacting to one axis of the phone’s sensors.
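For anyone curious how one axis translates into a servo position, here is a minimal sketch of the mapping (the gyro range and units are my assumptions, not ZigSim’s exact output; the real bot does this on the Arduino):

```python
# Hypothetical mapping from one ZigSim gyro axis to a 180-degree servo.
# The +/-3.14 rad/s range is an assumed full-scale reading.

def gyro_to_servo_angle(reading, min_val=-3.14, max_val=3.14):
    """Clamp a gyro axis reading and map it onto a 0-180 degree servo angle."""
    clamped = max(min_val, min(max_val, reading))
    return round((clamped - min_val) / (max_val - min_val) * 180)

# A neutral (zero) reading centres the servo.
print(gyro_to_servo_angle(0.0))  # 90
```

Each of the three servos would get the same treatment, one per axis.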
Introducing KetPlay, a cat toy that lets you play with your cat while still remaining at your desk for those boring meetings. KetPlay only requires your phone and one hand, letting you and your cat have fun even from afar.
Testing of the bot using ZigSim
Cowboy on top of the stick, then having a WindBot fight with @Jessie’s WindBot.
For Behaviour Bot, rather than a music piece, I decided to go with clips from movies. Following the Halloween season, I decided to choose between Frankenstein or Corpse Bride, and finalized on the latter. Watching the movie again, I found the proposal scene where the main character practices his proposing rather fitting for the project, and chose that audio clip.
Using Ableton and Arduino, I attached servo motors to a skeleton and programmed the robot to move according to the audio, creating “A Confession from a Skeleton”, where lonely souls can get a cute proposal from a skeleton during this Halloween season.
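The audio-to-movement step can be sketched roughly like this (the 0-127 envelope scale and the threshold are assumptions for illustration, not the actual Ableton patch):

```python
# Toy version of the audio-driven motion: an envelope level (assumed 0-127,
# like a MIDI CC value) is thresholded into a servo position, so the skeleton
# only moves when the clip is loud enough.

def envelope_to_angle(level, threshold=20, max_level=127):
    """Return a servo angle: rest below the threshold, proportional above it."""
    if level < threshold:
        return 0  # rest position
    return round((level - threshold) / (max_level - threshold) * 180)

print(envelope_to_angle(10))   # 0 (quiet: skeleton rests)
print(envelope_to_angle(127))  # 180 (loud: full swing)
```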
The head movement is made using two strings attached to each side of the skull, causing the head to get pulled left and right when the servos move accordingly. The arms are lifted by creating a sleeve made of straws on the arm, and then attaching a stick that is connected to a servo on the other end. As the servo moves, the sleeve slides up and down the arm, pushing the arm up and down accordingly.
Motorized character donned in Traditional Chinese funeral clothing, with no physical body, dancing to music. Platform decorated to look like a ghostly Chinese funeral scene with incense and papers burning.
– Moving based on audio input – Using Ableton?
– Puppet show – reenacting wedding scene?
Ghost Weddings are practiced to ensure the unmarried dead are not alone in the afterlife. It was originally a ritual conducted by the living to wed two single deceased people, but there are also rituals involving one living person being married to a corpse.
Traditional reasons for Ghost Weddings include the continuation of the family’s lineage in the case of men, and maintaining family honour in the case of women, as it was viewed as shameful to be the parents of an unwed daughter, and unmarried girls were often shunned from society. The weddings are taken seriously and, more often than not, many factors are taken into consideration before matchmaking two parties, such as age, family background and the opinions of feng shui masters.
In ghost marriages between two dead people, the “bride’s” family demands a bride price and there is even a dowry, which includes jewellery, servants and a mansion – but all in the form of paper tributes.
The wedding ceremony will typically involve the funeral plaque of the bride and the groom and a banquet. The most important part is digging up the bones of the bride and putting them inside the groom’s grave.
While it appears to be an old tradition, it happens even now in certain parts of China. The ritual has mutated over the years, resulting in secret rituals, numerous cases of grave robbing and even murder cases for the sake of these Weddings.
It’s Alive – Frankenstein
Using the audio from the 1931 Frankenstein, in which the doctor animates the monster, a brain, heart and skeleton move to the sounds of the audio, reenacting the creation of a monster born of science and technology.
Frankenstein touches on the idea of playing god, and the morals and ethics of artificial life and experimentation. In a day and age where we move towards artificial intelligence and genetic engineering, the message Frankenstein relays still remains applicable.
In the future, eye check-ups could be done with contact lenses ordered online or bought in stores that check one’s degree and eye health. The data can be accessed through a site that allows users to view their results. Should anything unusual arise, one can send their data to an eye specialist to receive feedback, or a notification if they need further physical check-ups.
The item is a pair of contacts with sensors and detectors, packaged neatly in a box. Users can easily purchase them, sparing them the trouble of having to go to an optical shop for check-ups. In addition to being able to tell one’s degree, one will also be able to check their eyes for problems such as cataracts or glaucoma.
How to use:
When wanting to check one’s eyesight or eye health, simply order a set of these contacts online to get them delivered to your home, or purchase them at a nearby clinic.
Put on the contacts and wear them for a couple of hours to allow them to gather the essential data needed to help the user. If the user has degrees, they would need to wear spectacles for the day.
After wearing them, one can access their data by scanning the QR code on the box, then typing in the code unique to each pair of lenses to get their individual data.
Should anything arise, they may choose to contact a specialist and book an appointment.
This device allows users to basically check their eyes while continuing their usual daily lives, without anything seeming out of place apart from some who have to wear glasses for the duration. It would be a new step into the future where most things, such as check-ups, are integrated into our lives seamlessly, with little disturbance to the usual activities of one’s day.
People can do check ups anywhere they like, easily, and as mentioned in my post for the DOW, medicine may also end up being distributed in the same seamless way.
A prosthetic or limb sleeve that makes you move in the opposite direction of where you want to go, changing our usual intuitive movements into ones we have to make a conscious effort to do. It is meant to be an annoying device, but it also serves as an interesting experience, allowing users to feel what it is like to be unable to move their limbs without much thought. Users have to take time to learn how to walk again, much as prosthetic users usually require large amounts of time to get used to their prosthetic.
The device is a simple, tight-fitting glove with sensors at the essential points to move the user’s arms with neuromuscular electrical stimulation. A headband with attached sensors will also be worn by the user to note how they initially intended to move, so that the device may make them move otherwise.
How to use:
Users will wear the sleeve and the headgear, then just “move around as they would like to”, though the device funnily prevents them from doing that exact same thing.
This device is built not only on provotype grounds, but also as speculative design, as I am unsure if there are currently means to do this with quick, instant relay. The device has to capture the intended movement of the user and instantly send pulses to go the other way. One potential approach is to train the device on the different pulses needed to move in a certain direction, and trigger those pulses when the user’s neural messages say to go in a particular direction.
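The core inversion idea is simple, even if the sensing is speculative; here is a purely illustrative sketch (none of these names correspond to a real API):

```python
# Purely illustrative: whatever direction the headgear decodes as the user's
# intent, the sleeve is instructed to stimulate the opposite movement.

OPPOSITE = {"left": "right", "right": "left", "up": "down", "down": "up"}

def invert_intent(intended_direction):
    """Return the direction the device should actually stimulate."""
    return OPPOSITE[intended_direction]

print(invert_intent("left"))  # right
```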
Made to help amputees feel again through their missing limbs, SENSY from SensArs is an interesting prosthetic device available worldwide, made to restore life-like sensory feedback from the prosthesis of amputees, or from the limb of patients with damage to the peripheral nerves. The neuroprosthetic is implanted within the residual nerves of amputees, or the healthy part of patients’ nerves, to restore the natural flow of neural sensory information.
Those who have lost their limbs in unfortunate events would thus be able to return a bit more to their previous way of living, perhaps easing slightly the mental and physical fatigue of their accidents. This also helps subjects feel more natural again, tackling the phantom limb syndrome that may arise. Over the course of a one-month therapy program with neurostimulation, scientists managed to considerably reduce phantom limb pain in one of the patients who complained of having it; in the other, the pain disappeared completely.
The prosthetic leg device, SENSY, is able to sense parameters such as foot touch, pressure, and knee joint angle with its external sensors. These sensed signals are then transmitted back to the nervous system using a set of stimulation electrodes implanted into the tibial nerve.
SENSY consists of an implantable intraneural electrode for sensory nerve stimulation, an implantable stimulator and an external smart controller. This smart controller, the “brain of the system”, can be connected to the sensors embedded in the prosthesis or to a sensorized glove or sock; it receives information from the prosthesis and transduces it into the language of the nervous system, instructions of stimulation, which it sends to the implantable stimulator.
In other words, artificial sensors are implanted to connect to intact nerves, stimulating a response in the brain as if there were an intact nerve in the limb. The sensors are connected to wires simulating an actual nerve, and those wires are implanted and connected to actual nerves within the body. Between the artificial sensors and the residual nerve is an implantable neurostimulator which is bidirectional, sending and receiving signals from both the intact nerve and the artificial sensors.
The device was created with several options of usage:
Being simply a sort of neural sensory pacemaker if the user does not have prosthetics – an excitable device, like a sensor, which also sends a signal to the nerve
Being a sensorized prosthetic
Being a sensorized glove or sock over intact limbs but with nerve damage
These socks and gloves contain sensors within the fabric which act essentially as sensitized skin, sending signals to an implanted device which communicates with the intact nerves.
To make this device, the scientists attached tactile sensors to the soles of commercially available prosthetic feet and collected knee movement data. They then placed tiny electrodes in volunteers’ thighs, connecting them to residual leg nerves, attempting to introduce electrodes inside the nerve to allow the restoration of a more natural sensory feedback.
The research team then made algorithms to translate information from their sensors into current impulses that the nervous system can read, delivering them to the residual nerve. The signals from the residual nerves are conveyed to the person’s brain, which is thus able to sense the prosthesis and helps the user adjust their manner of walking accordingly.
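As a rough illustration of that translation step (this is not SensArs’ actual algorithm, and the current values are invented for the sketch):

```python
# Toy translation of a sensor reading into a stimulation amplitude: a
# normalized foot-pressure value is mapped linearly between an assumed
# perception threshold and a safety ceiling, in milliamps.

def pressure_to_stimulation(pressure, i_min=0.1, i_max=2.0):
    """Map a 0-1 pressure reading to a stimulation current in mA."""
    p = max(0.0, min(1.0, pressure))  # clamp out-of-range sensor values
    return round(i_min + p * (i_max - i_min), 2)

print(pressure_to_stimulation(0.5))  # 1.05 mA at half pressure
```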
From the tests, they received positive feedback from volunteers, with many mentioning that the device was less mentally strenuous to use and also gave them more confidence in themselves.
I think the device is good as it gives those who have unfortunately lost a limb a chance to return to their way of living before the accident. Those who have lost their limbs in accidents most probably face the stress of losing a limb they were previously reliant on, and now, instead of having to change their entire lifestyle, they can continue on the path they were on. Usual prosthetics may provide some comfort by helping users feel whole again, but they can only go so far, as the senses in the lost limb are not felt by the user. This device helps users come close to feeling as though they had never lost their limb, easing their pain.
I think SENSY is also incredible as it works in various ways. The adaptability of the device ensures that a large range of users reap its benefits, as those with varying problems, from nerve damage to missing an entire limb, are all covered by SENSY.
Perhaps SENSY can go one step further and work with researchers studying how to make more human-like artificial skin, making the prosthetic even more similar to an actual limb. One interesting example is an electronic skin made by RMIT that can electronically replicate the way human skin senses pain. The device mimics the body’s near-instant feedback response and can react to painful sensations with the same lightning speed that nerve signals travel to the brain.
Features of the electronic skin:
Stretchable electronics: combining oxide materials with biocompatible silicone to deliver transparent, unbreakable and wearable electronics as thin as a sticker.
Temperature-reactive coatings: self-modifying coatings 1,000 times thinner than a human hair based on a material that transforms in response to heat.
Brain-mimicking memory: electronic memory cells that imitate the way the brain uses long-term memory to recall and retain previous information.
“It means our artificial skin knows the difference between gently touching a pin with your finger or accidentally stabbing yourself with it – a critical distinction that has never been achieved before electronically.”
Hence, if they are able to work with RMIT, they could most probably create an artificial limb akin to a real one, allowing amputees to get back their limbs and feel no difference.
All in all, SENSY is amazing as it is able to, in a way, replicate an actual human body part using technology and allow users’ neural signals to work with this artificially made device. Considering this, it feels as though advances in technology are bringing us one step closer to a world of cyborgs. Perhaps in the near future, one could easily replace damaged limbs with better artificial ones.
The Arable Mark 2 is an all-in-one weather station, crop monitor and irrigation management device. The device installs in minutes, deploys with the push of a button, and requires no maintenance. Able to synthesize climate and crop data, the solar-powered device allows users to gather actionable insights for their crops in all growing conditions. Data observed would then be combined with historical data to deliver Point Forecasting – a unique machine-learning algorithm for accuracy, which then provides daily predictions.
Features of the Arable Mark 2
The Arable Mark 2 has a full sensor suite which delivers more than 40 climate and plant metrics, including Temperature, Humidity, Pressure, Solar Radiation, Precipitation, Daily Evapotranspiration (ETc), Chlorophyll Index, NDVI and more. It can tell from looking down at the crop if it needs fertilizer or water, and it detects the presence of rain, temperature changes, solar radiation, and what is happening in the soil. It also has increased sensor accuracy and expanded cellular connectivity compared to the original, not to mention extended battery life and a protective UV coating to withstand harsh conditions.
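As an aside, one of those metrics, NDVI, comes from a standard formula based on how healthy vegetation reflects near-infrared light but absorbs red light (the values below are illustrative, not Arable readings):

```python
# Standard NDVI formula: (NIR - Red) / (NIR + Red), giving values in [-1, 1].
# Dense healthy vegetation scores high; bare soil or water scores near or
# below zero.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from two reflectance values."""
    return (nir - red) / (nir + red)

print(round(ndvi(0.45, 0.05), 2))  # 0.8: dense, healthy vegetation
```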
It is also able to capture scientific-quality measurements across locations with the only platform to combine meteorological and plant data in one place.
After data is collected, users can easily access their real-time field data on their mobile phones, a website and an API with Arable’s intuitive, user-friendly software platform. The data collected is then combined with previous data. Then, using a highly accurate machine-learning algorithm, the platform provides hourly predictions up to 30 hours ahead and daily predictions up to 10 days ahead. Covering 12 climate zones, Arable’s global network of 30 calibration-validation sites ensures that Point Forecasting provides accurate and reliable data for each user’s needs.
Installation and Maintenance
Simply mount the Mark 2 on top of a metal pole to ensure it is at the optimum height, then check that the sensors on the bottom of the device are pointing north. Firmly press the top button on the Mark 2 for 3 seconds to initiate deployment. Blinking blue lights show the device is connecting. After a few minutes, all four lights around the Arable logo will illuminate green, meaning the deployment sequence is complete.
To maintain healthy charging potential, simply use a clean cloth to wipe any dust or debris off the Mark 2’s solar panel.
In other words, the Arable Mark 2 is a device that offers an integrated analytics platform with weather monitoring and plant health data all under one roof. It provides straightforward, plant-based measurements that are relevant to each individual user, allowing them to make informed decisions with unprecedented ground-truth accuracy, delivered in real time.
The Arable Mark 2 is a convenient and effective device for farmers and crop owners, and would also be useful for the Singapore government’s plans for urban farming (planting vegetables and other food crops on the rooftops of HDBs). Since crop farmers have to take care of large and wide tracts of land, sometimes in varying places, the Arable Mark 2 allows them to monitor their crops easily, and make changes or take precautions based on the paired predictive-analysis software. Thus, this would aid them greatly in ensuring they reap good yields, as the software provides insights on actions to take to ensure their crops grow healthily.
Since the Mark 2 is an all-in-one device, users also need not buy an array of different sensors to check on their crops. Instead, this single device gets them all the data they require, sparing them the problem of having too many sensors to check, or too many regions to cover.
Its easy deployment and low maintenance also add to the convenience, and the device’s strong coating allows it to pull through different environments, letting users save the time and effort of cleaning individual sensors.
The device’s ability to show real-time information is also a good feature, as users will be able to take immediate action should any unexpected changes or situations occur. Rather than only realizing possible problems later, real-time data can help with finding the root of potential problems immediately.
The device is attached to a single pole stuck into the ground, and thus I would raise the question of whether heavy winds and rain may potentially shift the device out of alignment. As mentioned in the Installation segment, the sensors on the bottom should face north for accurate data collection; however, strong winds may easily change this.
Another thing is that the system is highly dependent on cellular networks, and thus would not function in areas where signals cannot reach. If the network in a particular area is bad, it may also affect the data collected and cause inaccuracy.
Made to be non-invasive, the Sensimed Triggerfish is a continuous ocular monitoring system that uses a small telemetric sensor on disposable contact lenses. It is used to monitor glaucoma patients who are at risk of progression by capturing spontaneous changes in their eyes, providing physicians with important information on their patient’s condition.
Monitoring the patient’s eye for 24 hours, the Sensimed Triggerfish provides a full picture of the patient’s eye during their normal day. Since vision loss occurs at different rates for different people, this device helps doctors determine whether the loss in vision is progressing at a fast or slow pace. Unlike normal optical appointments, the contact lenses give physicians access to the changes in pressure and volume of the patient’s eye, as well as any stress.
This allows specialists to visualise the patient’s IOP (intraocular pressure) continuously. As mentioned by Swissmed.Asia:
The data provided by the SENSIMED Triggerfish® complements punctual tonometer measurements and offers a qualitative profiling of the patient’s IOP for up to 24 hours. The pattern reproducibility of an individual patient’s profile allows for the optimisation of the glaucoma management*.
The Sensimed Triggerfish also helps doctors determine the right treatment for patients. There are differing treatment levels for glaucoma, ranging from simply using eye drops to invasive surgical treatment. Thus, doctors need to evaluate the severity of the situation before deciding on the appropriate treatment, which can be done effectively using a 24-hour monitoring system to follow the patient’s condition.
Patients wear the device for a maximum of 24 hours, along with an adhesive antenna worn around the eye. Data is wirelessly transmitted from the contact lens to the antenna, and is then received by a portable data recorder worn by the patient. This recorder then transmits the data via Bluetooth to the physician’s computer.
The pros of this device are that it is non-invasive and can record and report changes in real time, allowing for a quick response to any situation and giving doctors full coverage of the patient’s condition. Unlike before, when patients had to repeatedly return to the clinic for multiple check-ups, the device is more convenient for glaucoma patients, allowing them to have fewer physical check-ups while knowing their physicians are well aware of their situation. It also does not interfere with their day-to-day activities.
In my opinion, the Sensimed Triggerfish is a great device as it targets the problem at hand directly, with a small, convenient wearable. Instead of large machines situated in clinics that patients have to repeatedly travel to use, such devices allow patients more freedom amidst their check-ups and diagnosis. This could definitely be extended to other health problems, such as diabetes. Patients who require frequent medication would greatly benefit from wearable health devices.
One example would be TheraOptix, created by Harvard Medical School: a pair of contact lenses made to dispense medication directly into the eye of the patient, periodically, over the course of days or weeks.
Such lenses could be paired with the Sensimed Triggerfish, allowing patients to say goodbye to the inconvenient continuous use of eye drops and repeated travel to clinics for check-ups. Since the elderly are at higher risk of glaucoma, this reduction in traveling would be useful for them, as they would not be required to exert themselves as much as before.
I also really like that the device is a pair of contact lenses that would be hard for others to notice if they did not pay attention. While it is true the antenna makes the wearer stand out, perhaps if the technology advances far enough for the data to be sent directly to the portable data recorder, patients could have a smooth day without any glances from passersby. This may not be the intent of the device (to make the data recording inconspicuous), but medical devices being smaller and less obvious would perhaps allow users to feel more at ease and comfortable, without unneeded attention.
Some small cons of the Sensimed Triggerfish are that it has no optical correction, so patients with degrees may have to carry on their day with blurry vision. Dry and red eyes are also said to be a common problem, though this could be solved with some eye drops.
While the two cons mentioned above may bring some discomfort, I believe the pros of the device currently outweigh them. Allowing physicians full data on the patient means that patients would also receive better and more appropriate treatment. Thus, the Sensimed Triggerfish still proves to be a useful tool in this sense.
Yet, we must consider that the device may not be very suitable for its target audience. Since the target audience is patients with glaucoma, who are mainly the elderly, are the contact lenses suitable for those of much older ages? I believe those around their 50s would benefit well from this device and be able to use it with ease. However, those older, perhaps 65 and above, may not be able to wear the contact lenses well.
Those who seldom wear contact lenses may also find themselves taking more time to put on the device than it would take to visit the clinic. It may also be uncomfortable for those not suited to contact lenses.
All in all, I think this device can be considered a breakthrough in health devices but definitely has more room to improve and expand on.
On 10 March, we had a guest lecturer, Bin Ong Kian Peng, share about artificial intelligence, machine learning and a utopian world. Through many examples of technology and machine learning in art, the idea of a technological utopia was introduced to us, and the question of whether AI would result in a utopia or dystopia was posed.
The lecture was enjoyable and eye-opening, introducing us to many new interactive artworks. Among these works were many that were highly intriguing and thought-provoking, such as Refik Anadol’s Melting Memories exhibition, which projects data translated from the process of memory retrieval, and Alexandra Daisy Ginsberg’s The Substitute, which explores a paradox: our preoccupation with creating new life forms while neglecting existing ones.
Refik Anadol’s Melting Memories
Alexandra Daisy Ginsberg’s The Substitute
However, at the end of the day, the one that pushed me to wonder more about humanity, AI, and humans’ idea of a utopia was The Heavenly Creature by Kim Jee-Woon, from Doomsday Book.
Doomsday Book’s The Heavenly Creature
The movie is about a robot that reaches enlightenment on its own while working at a temple. Its creators regard this phenomenon as a threat to mankind and decide to terminate the robot, calling it a glitch. Arguments arise as to whether robots can achieve enlightenment; the movie suggests that the lines between humanity and robots are blurred, and that whether enlightenment is achieved is also relative.
In the film, the robot that achieves enlightenment states the following:
To perceive is to distinguish merely a classification of knowing. While all living creatures share the same inherent nature, perception is what classifies one as Buddha and another as machine. We mistake perception as permanent truth and such delusions cause us pain. Perception itself is void as is the process of perceiving.
He goes on to say that perhaps all humans have already achieved enlightenment, and that he, a robot, sees this world as beautiful.
Such an eye-opening statement, and it is no doubt full of truth.
It is said that humans are only able to be “bad” or “good” because of our ability to differentiate the two and choose to do either. The choice of doing something deemed “morally wrong” causes us to become “bad”. Likewise, perhaps it is the perception of “success” and “failures” that also leads us to believe that there is more to the world, more to achieve before one can reach enlightenment.
Yet, the truth is that the world simply is how it is. And the robot in the film, who takes it as it is, not perceiving or classifying the world around him, does not crave anything more. He does not have worldly desires, as he takes what he has as it is, thus achieving enlightenment.
However, following this, I do disagree with one part of the show, that heavily influences whether or not I believe AI can help create a Utopian world.
In the movie, it is said that the lines between humanity and robots are blurred, and that humans have always already achieved enlightenment; it is our mistaking of perception for permanent truth that hurts us and makes us fail to see things as they are. While this much is true, I believe that humanity and robots are fundamentally different, as humans will almost never be able to stop seeing their perception as the truth.
It is said that “the fear of the unknown” is what every human is afraid of, and thus it is a continual cycle that we attempt to fill this void with what we perceive the truth to be, whether or not it truly is. A truly emotional feeling.
Unlike us, robots do not have such a fear, and should they perceive the world and attach “emotional feelings” to certain things, it is due to programming by humans who have inflicted bias onto their coding. Regardless of how much machine learning or how intelligent a robot may be, they are, more often than not, highly objective, and lack the emotional classification that humans have. Thus, humans and robots are different.
While the objectiveness of the robot may potentially help to achieve a society of maximised benefits in a “utopian” world, the so-called “maximised benefits” may not be the best outcome for humans. Robots that do what they deem necessary for society may conflict with what humans consider “good”.
This can also be seen in movies where an AI’s actions conflict with what is good for humanity. For example, in 2001: A Space Odyssey, HAL receives a conflicting command, and while it chooses the “best” way to resolve it, the result is multiple deaths. HAL’s actions were based solely on meeting his goal, not on morality.
While this is an extreme example, it means that an AI utopia can easily become a dystopia if an AI is highly intelligent at accomplishing its goal but the goal does not necessarily align with ours. This is also why so many AI dystopian movies exist.
Of course, one may argue that machine learning could help a robot learn moral reasoning as well, and ensure the safety of all of humanity. Yet, as said in Bin Ong Kian Peng’s lecture, according to Lyman Tower Sargent,
Utopia’s nature is inherently contradictory, because societies are not homogenous and have desires which conflict and therefore cannot simultaneously be satisfied.
Conflicting objectives and differing perceptions of truth make it difficult to create a utopia. Perhaps if one day humans create a robot so powerful that, through machine learning, it is able to consider the “utopia” of every single individual in the entire world, and find a way to create a world with that objective met, then perhaps an automated utopia could be made.
Inspired by the microscopic structure of the Morpho Butterfly’s wings, the textile appears a shimmery cobalt despite its lack of pigment. The dress’s iridescent hue is a trick of the light, and is manufactured by Teijin in Japan.
A native of the South American rainforest, the Morpho is one of the largest butterflies in the world, with wings that span five to eight inches. The vivid colour on the upper surface of its wings is the result of microscopic, overlapping scales that amplify certain wavelengths of light while cancelling out others.
Similarly, Morphotex relies on fibre structure and physical phenomena such as light reflection, interference, refraction, and scattering to produce its opalescence. The fabric comprises roughly 60 layers of polyester and nylon fibers, arranged alternately, which can be varied in thickness to produce four basic colors: red, green, blue, and violet.
A group of researchers from Penn State implemented a new way to produce fabric so that it can self-heal and act as a barrier between the wearer and the outside world. By dipping the fabrics in several liquids, they create layers of material that form a polyelectrolyte layer-by-layer coating.
The polyelectrolyte coating is composed of positively and negatively charged polymers, similar to polymers present in nature in the form of squid ring teeth proteins. These quite amazing proteins already inspired the team to create a self-healing plastic last year.
“ … Fashion can be kinetic, dynamic and almost living expression of our unique experience with nature. I strongly believe that Fall can influence the fashion world to become more dynamic and to increase the way clothes can react to the world around them,” Özkan said. “I want clothing to have more responsiveness to the environment, so that instead of people always change their clothes, the clothes can sometimes change themselves.”
Birce Özkan began the garment design process by asking herself: “What if when the temperature got hot suddenly, our clothes would start to break apart in response? What if they had the skill to behave depending on the surrounding conditions? What if garments had the ability to sense the environment just like living organisms?”
Trees naturally shed leaves depending on the temperature and light. So, Özkan created an interactive garment that does the same. “In the fall, as the days shorten, and the temperature gets colder, the trees, without the light they need to sustain their chlorophyll, shed their leaves to keep their energy to survive for the winter ahead,” Özkan said. “This process was the inspiration for creating my garment’s mechanism. To prepare for the fall of leaves, trees activate “scissor cells” that split to create a bumping layer that forces the leaves out of place, destabilizing them so that they fall.”
Özkan used the same process for her garment that trees use. Light activates small motors in the garment. The motors speed up when there is less light and make the “leaves” fall from the garment. The motors are attached to steel wires, and the wires connect to holes where leaves are attached with wax. When there is less light, the motor pulls the wire which breaks the wax adhesion and makes the leaves fall down. Özkan said she believes the piece will help people have a greater appreciation for the earth.
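The light-to-motor behaviour described above can be sketched in a few lines. This is a hypothetical illustration only: the sensor range, threshold, and speed scale are my assumptions, not details of Özkan's actual build.

```python
def motor_speed(light_level, threshold=300, max_speed=255):
    """Map an ambient light reading to a motor speed: darker -> faster.

    light_level -- hypothetical 10-bit light-sensor reading (0..1023)
    threshold   -- reading above which the 'leaves' stay attached
    max_speed   -- full speed reached in complete darkness
    """
    if light_level >= threshold:
        return 0  # enough light: motors idle, leaves stay in place
    # Scale speed up proportionally as light falls below the threshold;
    # the faster-pulling wire breaks the wax adhesion and drops the leaves.
    return int(max_speed * (threshold - light_level) / threshold)
```

In daylight the function returns 0, and as the reading drops toward darkness the speed ramps up to full, mirroring the garment's shedding behaviour.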
Birds have a biological compass that tells them what direction to fly during migration. Their compass is guided by the earth’s magnetic field.
“This gives them a freedom that humans lack. Instead, humans become more dependent on their mobile phones to find their bearings. This dependency limits the awareness of their surroundings and denies them of some experiences,” Özkan said.
So she created a jacket that imitates a bird's internal compass. The jacket uses an electronic compass and embedded motors that make the feathers on the shoulders rise up when the wearer walks north.
Black cockerel feathers are attached around the collar of the jacket and fully cover the skirt. Both are made from a dark cotton-based material and feature an integrated electronic compass, which is connected to motors on the end of the feathers. When the compass detects that the wearer is facing north, the plumage is raised up by the motors to look like a bird flapping its wings.
“During my research, what I found out is that when humans lose their way, the easiest way to reorient themselves and find their way is to face north and visualise the map,” Ozkan told Dezeen.
“If the wearer loses their way, the skirt or jacket helps to find where is north and then when they face their body north, they visualise a map and can navigate based on memory.”
The next step for her Augmented Jacket and Skirt is to link up the clothes with Google Maps so the feathers respond to programmed routes.
The feathers on the left side would flap when the wearer needs to go in that direction, and the opposite side for turning right.
“You can write down the address and then the feathers will guide you if you need to turn right or left,” said Ozkan. “In that way, people don’t have to be dependent on their mobile phones and can be more aware of their surroundings and allow them some experiences.”
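The compass logic behind both the north-facing feathers and the planned route guidance can be sketched as below. The function names, the 15-degree tolerance, and the left/right convention are all hypothetical choices for this sketch, not details taken from Özkan's garments.

```python
def facing_north(heading_deg, tolerance=15):
    """True when the wearer faces within +/-tolerance degrees of north (0 deg).

    When this fires, the motors would raise the plumage like a bird
    flapping its wings."""
    h = heading_deg % 360
    return min(h, 360 - h) <= tolerance

def turn_direction(heading_deg, target_deg):
    """Decide which side's feathers should flap to steer the wearer
    toward the bearing of the next step on a programmed route."""
    diff = (target_deg - heading_deg) % 360
    if diff == 0:
        return None  # already on course: no flapping
    return "right" if diff < 180 else "left"
```

For example, a wearer heading due north (0 degrees) with the next waypoint due east would feel the right-side feathers flap; a waypoint due west would trigger the left side.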
Her collection came cut in classic menswear fabrics like houndstooth, pinstripe, and Prince of Wales but also in those reminiscent of “Beetlejuice stripes” or cartoon bright floral prints that could be found in a kindergarten.
From there, Kawakubo played with volume and proportion to create another of her customarily challenging collections.
So a suit would show up with whorls of abstract fabric flowers swelling out of it, a jacket would feature oddly shaped inflatable balloons in a matching fabric that encased the sleeves, and shaggy layering of strips of fabric was another way the designer bulked up a few of her ensembles. And the idea of flattening the pants between the legs (as if the models were using their thighs to keep them hanging straight) was so off the wall that only Kawakubo could pull it off.
Gucci F 2004
Yiqing Yin F/W 2012/13
Yiqing Yin’s new Autumn-Winter 2012-2013 collection re-imagines the female form in a world of purely mineral and vegetable composition.
The designer’s hand remains assuredly her own throughout excursions into bold and new galaxies of ever-shifting shades and shapes where tones blend and merge. Red remains pure while a slate grey betrays celestial glimmerings. Silver and blue fade into one.
Inviolably light cascades of satin and muslin offer oblique suggestions of chasteness through their mounting layers.
While researching for inspiring interactive artworks, I chanced upon a plethora of different installations, and much to my amusement, realised the broadness of the term “Interactive Art”. Ranging from contemplative to experiential, these works could take different forms – music, dance, digital. To choose simply one work from this range was difficult, and thus I will be showing a few more. Some of these works, albeit old, still hold much value and are applicable even today.
Alex Davies’ Dislocation (2005)
Dislocation is an interactive installation in which reality and the virtual mix. Perceptually real virtual characters intermingle with exhibition audiences as they look into their chosen portals, subverting the ‘seeing is believing’ ethos of traditional video.
The audience enters an empty gallery room with four individual portals set into one of the walls. As they peer into one portal, a simple real-time closed-circuit video feed of the room they are in is shown. Using audio and locational data, the video images are digitally composited with images of pre-recorded video characters, creating an illusion of additional characters in the area.
“The auto-voyeurism of watching your own image is given an uncanny and disturbing twist when you also become the unwitting observer of a number of different scenarios that are apparently being played out in the room behind you. As you watch through the portal, you may see a man enter the room and walk up behind you or a young couple come into the room and start kissing, or a security guard enters with a barking dog. This uncanny sense of bodily presence behind you and your own possible vulnerability to these presences induces you to turn around to look behind you but when you do you are confronted with an empty room.”
Playing with the idea of appearance versus reality, the real images of the viewer and the virtually composited characters occupy the same viewing plane, leaving viewers unable to trust their own eyes. It is often said that one believes what one sees, but that apparent reality is shattered when the viewer turns around, only to face an empty space where they expected to see someone or something.
In this current age of virtual and augmented reality, the day when one cannot tell reality from the virtual seems not far away. With good rendering and compositing, it has become far too easy to edit the images and videos one used to be able to treat as ‘evidence’ and ‘undeniable’ (doctored images and videos are now common). This blurring of truth and lies is highly relevant to the digital age, and the way the viewer feels unease and begins to doubt the truth of the events in the room is highly intriguing. While the setup is simple and intuitive for the audience and participants, the contrastingly deep, inner fear of the unknown embeds itself in their hearts, definitely leaving a long-lasting impression.
Giver of Names by David Rokeby is an installation that aims to challenge viewers’ preconceptions of objects and push them to speculate and contemplate more. It hopes to represent a re-interpretation, or alternate interpretation, of the visual image of an object. It also highlights the tight conspiracy between perception and language, bringing into focus the assumptions that make perception viable but also biased and fallible, and the way language inhibits our ability to see.
In the room stand an empty pedestal and a small video projection, with a video camera observing the top of the pedestal. The installation space is full of random objects of many sorts. The visitor can choose one or more objects from the space and place them on the pedestal. Once the objects are placed, a computer takes an image and performs multiple levels of image processing.
These include outline analysis, division into separate objects or parts, colour analysis, texture analysis, and so on.
These processes are visible on the life-size video projection above the pedestal. In the projection, the objects make the transition from real to imaged to increasingly abstracted as the system tries to make sense of them.
The results of the analytical processes are then ‘radiated’ through a metaphorically-linked associative database of known objects, ideas, sensations, etc. The words and ideas stimulated by the object(s) appear in the background of the computer screen, showing what could very loosely be described as a ‘state of mind’.
From the words and ideas that resonate most with the perceptions of the object, a phrase or sentence in correct English is constructed and then spoken aloud by the computer.
The phrase is, of course, not a literal description of the object. At the same time, it is definitely not a randomly generated phrase. Everything the computer says in some way reflects its experience of the objects. However, its experience is in many ways quite ‘alien’: it has no real human experience of the world. It has not burned its hand, scraped its knee, been hungry, angry, fallen in love, or wanted something it couldn’t have. It does the best it can to talk about the objects from its very particular point of view. If you spend some time with Giver of Names, you tend to find that the peculiarities of its perceptions and its speech begin to coalesce into a tangible and coherent character. Misused or mispronounced words take on the character of a dialect.
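As a very loose illustration of the associate-then-speak pipeline described above (the feature tags, word lists, and sentence template here are invented for the sketch and bear no relation to Rokeby's actual database or analysis):

```python
import random

# A tiny, hypothetical associative database: detected feature -> evoked words.
ASSOCIATIONS = {
    "red":    ["fire", "anger", "apple"],
    "round":  ["moon", "wheel", "eye"],
    "smooth": ["water", "glass", "sleep"],
}

def analyse(features):
    """'Radiate' the detected features through the database,
    collecting every word the object stimulates."""
    words = []
    for f in features:
        words.extend(ASSOCIATIONS.get(f, []))
    return words

def name_object(features, rng=random):
    """Build a short, grammatical (if non-literal) phrase from the
    resonating words, echoing how the system speaks its result aloud."""
    words = analyse(features)
    if not words:
        return "I cannot say what this is."
    return f"The {rng.choice(words)} remembers the {rng.choice(words)}."
```

Even in this toy version, the output is never a literal description, yet it is also never random: every word it speaks was stimulated by some perceived property of the object.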
At first glance, before reading about the meaning behind this artwork, it reminded me of the Asian tradition of letting a child pick an item from a range of objects on their first birthday to predict their future, which made it stand out to me greatly. The idea of a phrase appearing upon analysing the objects was very similar to that tradition, and it was interesting that the words created did not make much sense, allowing for new interpretations and unique ideas.
“Pillowsongs is a sound installation exploring sleep and rest as a space for listening. Recordings mastered on eight different compact discs were mixed into speakers embedded inside pillows on beds installed throughout a darkened exhibition space, lit only by dim blue light-bulbs.
Listeners hear these sounds by resting their heads on the pillows – resulting in a very intimate and ‘inside your head’ listening experience. The soundtracks combined field recordings, electronic drones, voices and short-wave radio transmissions. The programming of the CD tracks changed from day to day.
The slowly reconfiguring sound textures, dark lighting, and restful means by which the audiences engage with the work draw listeners into a highly intimate, hypnotic, hypnagogic listening experience. Listeners often reported a high degree of uncertainty as to which sounds were coming from the pillows, and which sounds had emanated from outside the gallery space. Falling asleep can be an appropriate way of interacting with this work, given our ability to perceive sounds whilst in certain stages of sleep.”
In his sound installation Pillow Songs Poonkhin Khut has created an environment that seems strangely disconnected from the outside world… A violet light-bulb hangs over an unadorned bed, staining the white cotton sheets an iridescent blue. Somewhere a dog barks. Warily negotiating the shadows one becomes aware of other beds which are vaguely reminiscent of dormitories, cheap hotel rooms or convent cells. In the darkness the beds evoke a sense of familiar intimacy and the plain sheets reveal a sensuality which belies their ascetic frugality. Sounds emerge from the pillows like memories made manifest or half-forgotten dreams exposed and rendered audible.
Lying on the rough cotton sheets the inevitable association of light illuminating the darkness to traditional representations of transcendence is thwarted. Instead an overwhelming sense of the temporality of life marked only by fleeting sensations, thoughts and lingering memories is evoked. Implicitly the long hoarded pillows, vestiges of the artist’s past refer to the passage of time and the materiality of the body’s seeping flesh. These ideas are intensified by the physicality of the muffled vibrations of the sound transmitted through the pillows and the gradual awareness of the residues harboured in the crumpled linen of those who have visited the installation before. A strangely intimate and disquieting proximity revealed by the lingering scent of strangers, a stray golden hair and a damp smear on the pillow.
– Review by Mary Knights, Artlink magazine, Vol. 18, No. 2, 1998
I thought this artwork was unique as it was meant to be experienced in an intimate and uncommon way. Personally, I find the idea of lying in a bed in a public area highly unsettling; it feels like something private. Yet the artwork’s attraction is so strong that one feels compelled to experience it rather than being turned off. The way this artwork manages to be so captivating that people put their unease aside really amazed me.
The work is highly contemplative, and the setup fits the mood perfectly, with a strange calmness. While on a superficial level it can be seen as just a bed and sound, the way the sound is emitted is truly different from the norm, making use of the interesting method of placing speakers inside the pillows. I think this gives a strong feeling of being enveloped and surrounded by the music, and the mood of the piece gives way to deeper thinking, to being in a dreamland with our thoughts and emotions.
Apart from these, George Khut has many more interesting art installations. His works make use of a range of intriguing technologies, from biomedical sensing that records brainwaves to heart-rate sensors. I think what makes his art so interesting is that he uses complicated, engineering-heavy technology to create something so artistic and contemplative. Technology I would only expect to see in the medical field is brilliantly used as an art tool to create something that evokes deeper thinking.
‘Fish-Bird’ is an interactive autokinetic artwork that investigates the dialogical possibilities between two robots, in the form of wheelchairs, which can communicate with each other and with their audience through the modalities of movement and written text. Visitors were told that the two robots, Fish and Bird, could not be together due to “technical difficulties”. They took the form of empty wheelchairs so as to evoke a feeling of the absence of a person, and the chairs wrote intimate letters on slips of paper that they then dropped to the floor. These letters were produced with a miniature thermal printer and carried “poetic lines and personal confessions such as “my heart is broken” or “I’m so lonely,” to produce empathy in the visitors”.
The personality of the robot was portrayed through the different fonts and scripts they used, and their more “outgoing” or “reserved” movements.
For example, they faced visitors as they entered, and rolled alongside them, acknowledging their presence. Visitors that spent more time with the robots received more intimate messages from them. The two robotic chairs have interacted with over 36,000 people in Australia, Austria, Denmark, United States, and China.
Through this, Mari Velonaki, whose practice and research is within the field of social robotics, learned that people loved the creations.
She says that, on average, visitors to Fish-Bird interacted with the robots for about 10 minutes. Some of them became so deeply absorbed that they spent 30 minutes or more in the installation space. “Kids were patting them to print messages,” says Velonaki. The engagement is remarkable in the context of an art exhibition, where visitors typically only spend a few minutes before moving on.
It was cute. I loved it. I can fully understand why the audience was so absorbed in staying and interacting with these robots. With lives of their own, the supposedly lifeless wheelchairs begin to seem animated and adorable; perhaps they could be seen as a pet or a young child in the eyes of the audience. It touches on the desire to be loved, perhaps, and staying with these uniquely adorable wheelchairs would have earned more cute and intimate letters.
The narrative element of two robots being unable to be together, and their well-thought-out and meaningful names, Fish and Bird, definitely strikes at the hearts of audiences.
I believe this artwork tackles the core of people’s odd and perhaps unconventional love for the pitiful, the sad, and the tragic. Oddly enough, just as some people find tearful babies and animals extremely adorable, perhaps this work hits the same zone.
Apart from this, the use of robotics to create art was unique and interesting, and something I would love to look into further.