Sketch Multimodal

IDEA 1 – Office Space
Imagine two co-workers trying to communicate with each other discreetly.

  • Pen lifting up and down – “Call me”
  • Pen spinning in the pen holder – “10 more min to lunch!!!”
  • Pen holder vibrating – “Boss is walking over!!”

IDEA 2 – PhyDigital Dating Space
when all you can think about is whether anyone has liked you, super-liked you, or started a chat with you on Tinder

  • Earrings pulled down – “comment”
  • Necklace starts rotating – “like”
  • Mask gets pulled down – “chat”
  • Feeling wind – “super like”

IDEA 3 – Fighting Couple Needs to Talk
you’re in bed but can’t fall asleep because you need to talk to the person you fought with, who also happens to be in the same bed as you

  • 3 Red LEDs – “let’s talk”
  • 2 Red LEDs – “yes”
  • 1 Red LED – “no”
  • 3 Green LEDs – “sleeping?”
  • ETC

DOW Health: FreeStyle Libre

Here’s how you can be one step closer to being a cyborg. First, have diabetes, and then get yourself a FreeStyle Libre set.

FreeStyle Libre is designed for diabetic patients as an alternative to the traditional way of monitoring blood glucose levels. In the old way, to check their blood glucose level, patients had to prick a finger with a lancet (a small, sharp needle) and then add a couple of drops of blood onto a test strip. The strip is then inserted into a meter, which reads out their blood glucose level. Patients would have to do this procedure a minimum of 4 times a day (depending also on the type of diabetes) to manage their levels and to reduce their risk of developing a range of diabetes-related complications.

So what FreeStyle Libre does for patients is eliminate the need for the fingerstick routine and, with each scan, provide:

  • A real-time glucose result
  • An eight-hour historical trend
  • A directional trend arrow showing where glucose levels are headed

The FreeStyle Libre system consists of a small fibre which pierces the skin into the interstitial fluid, the thin layer of fluid that surrounds the cells of the tissues below your skin. The sensor takes glucose readings and stores them. A ‘reader’ device is passed over the sensor, and the last 8 hours of readings are transferred to the reader. It is simple and discreet.

(And to sound less like a commercial:)

Think barcodes at the supermarket, but now you’re the product. A quick scan of the barcode and you get the price of the product. Similarly, a quick scan of the sensor and you get your glucose readings.

This is an image I took from their website, in which you can see that besides the reader and the sensor, there’s also LibreLink, their app that can be used in place of the reader through an NFC connection, and the LibreLinkUp app, which allows your data to be shared through their online cloud system.

With that, it brings me to one of the biggest pros of this device. We all know that the world is currently in the midst of the coronavirus (COVID-19) outbreak, and having this system means that patients can continue to connect with healthcare professionals remotely. For diabetic patients, having their healthcare team up to date with their progress means being able to strategise effectively and eventually shortening the time to achieving key clinical targets. So, not letting the virus outbreak be a hindrance to patients’ recovery journeys makes this device a real advantage.

Another pro of this set-up is the convenience it brings. Compared to a simple scan with the phone/reader, the traditional finger-pricking method sounds like a total nightmare: can you imagine the process of having to find a clean spot, wash your hands, sterilise the lancet, force out blood, etc., that many times a day? Not to mention, this process feels so intrusive! If it were me, I am not sure how much discipline it would take to keep up with it every day. There are bound to be patients out there who find it hard to sustain this routine as well, and whose condition deteriorates in turn; so, having the option of eliminating the whole hassle seems like a huge plus to me!

One point of improvement could be the implementation of live feedback when glucose levels fluctuate drastically. It could simply be a visual indicator or a haptic vibration; I think this would help patients be notified of their symptoms, especially when they are distracted, so they can take immediate action before the situation gets worse. Such an implementation would turn the device into a continuous monitoring system rather than one that only works at the moment of a scan.
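
To make that idea a little more concrete, here’s a minimal Python sketch of how the reader or app could flag a drastic fluctuation by watching the rate of change between consecutive readings. The threshold and the buzz() function are my own illustrative assumptions, not anything from Abbott’s documentation:

    # Sketch: alert on rapid glucose change between two scans.
    # RATE_LIMIT and buzz() are illustrative assumptions only.
    RATE_LIMIT = 0.17   # mmol/L per minute, i.e. roughly 2.5 mmol/L in 15 minutes

    def buzz(message):
        # Placeholder for a haptic vibration or visual indicator on the reader/phone
        print("ALERT:", message)

    def check_fluctuation(prev, curr):
        """prev and curr are (minutes, mmol_per_L) readings from consecutive scans."""
        minutes = curr[0] - prev[0]
        if minutes <= 0:
            return
        rate = (curr[1] - prev[1]) / minutes
        if abs(rate) > RATE_LIMIT:
            direction = "rising" if rate > 0 else "falling"
            buzz(f"Glucose {direction} fast: {rate:.2f} mmol/L per minute")

    check_fluctuation((0, 5.6), (10, 8.4))   # example: rises 2.8 mmol/L in 10 min -> alert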

I’ll be leaving out the analysis of the scientific concepts and the accuracy of the readings, for the obvious reason that I too have no idea how to make them any better. But in conclusion, I do think the design of FreeStyle Libre is a really good example of how much technology can improve the mundane chores of everyday life. Creating something up to date, something revolutionary, not only makes things so much more efficient; I assume it would also encourage patients to become more engaged in their care and recovery process. FreeStyle Libre’s idea of a quick scan is undeniably life-changing.

Sketch: LED Room

Final Video Presentation:

I created the video story with ideas 2, 3, 5 and 6.

Idea 2 – used in two instances: the first scene, when I picked up my phone, and the second scene, when I was hiding my phone from my brother. (Privacy)

Idea 3 – used in the last scene, when my mum keeps nagging at me to eat and I just can’t watch my show in peace. (Warning sign to others)

Idea 5 – used when I was going to start watching my Netflix show. (Ambient Lighting)

Idea 6 – used when I was replying to a text message. (Signal busyness?)

The main goal I set for myself when prototyping with ZIGSIM and OSC communication was to get the various feedback seamlessly, all in one run.

The first problem I encountered was that if I wanted to use two different sensors, Wekinator would not receive the full set of input readings. More is explained in this video: https://youtu.be/8VaB7EYs04k
The solution, after consultation, was to transmit the data to more than one port, so that each Wekinator project feeds from just one sensor and doesn’t have its readings mixed up.
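
As a rough illustration of that “one port per sensor” idea (not my actual patch), here’s a small Python relay using python-osc that listens to everything the phone sends on one port and forwards each sensor to its own Wekinator port. The OSC addresses and port numbers are assumptions; they would need to match how ZIGSIM and Wekinator are actually configured:

    # Sketch: split one incoming OSC stream into one port per sensor.
    # Assumes ZIGSIM sends to port 5000 and its addresses contain "accel" or
    # "quaternion"; adjust both to match your ZIGSIM and Wekinator settings.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    accel_wek = SimpleUDPClient("127.0.0.1", 6448)   # Wekinator project 1
    quat_wek = SimpleUDPClient("127.0.0.1", 6449)    # Wekinator project 2

    def route(address, *args):
        # Forward each sensor to its own Wekinator input port as /wek/inputs
        if "accel" in address:
            accel_wek.send_message("/wek/inputs", list(args))
        elif "quaternion" in address:
            quat_wek.send_message("/wek/inputs", list(args))

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(route)

    BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher).serve_forever()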

The second problem was that, in combining the various ideas, some of my sensors didn’t work well together. This was a big problem.

If you take a look at my FIRST DRAFT for my storyline, there were so many sensors that couldn’t be applied to the purpose I wanted them for. The first example was using the compass to cue ambient lighting. When I first wanted to use the compass, it was very specific, from 90 degrees to 180 degrees; in my head it should have worked every time. BUT unaware (/dumb) me forgot that the Earth’s magnetic north is constantly shifting, so there were many times I found myself coming back to code that was once working, only to find it no longer working. I then wanted to use acceleration instead, but this might overlap with picking up my phone, which could be read as a gesture if I was too aggressive with rotating it, or if I didn’t have a reliable set of examples for accurate machine learning. I ended up using the quaternion for rotation sensing.

And in order for it not to interfere with throwing the phone (which also uses the quaternion), my solution was to combine the sensor readings in Processing itself instead of in Wekinator. So I also incorporated 3D Touch to trigger the flashes, while the hue of red is taken from the x coordinate of the quaternion.
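
My actual code lives in Processing, but the combination logic was roughly what’s below, written here as a Python-style sketch. The OSC addresses, the 0.6 force threshold and the value ranges are assumptions for illustration, and fill_screen() stands in for Processing’s background():

    # Python-style sketch of the combination logic I wired up in Processing.
    # Assumes /touch carries the 3D Touch force (0..1) and /quaternion carries
    # (x, y, z, w); addresses, ranges and thresholds are assumptions.
    force = 0.0
    quat_x = 0.0

    def on_touch(address, f):
        global force
        force = f

    def on_quaternion(address, x, y, z, w):
        global quat_x
        quat_x = x

    def fill_screen(r, g, b):
        print(f"background({r}, {g}, {b})")    # stand-in for Processing's background()

    def draw_frame():
        # A hard press triggers the flash; the quaternion x value picks the tint of red.
        if force > 0.6:                        # threshold found by trial and error
            red = int(128 + abs(quat_x) * 127) # map x in [-1, 1] to a red value 128..255
            fill_screen(red, 0, 0)
        else:
            fill_screen(0, 0, 0)

    on_quaternion("/quaternion", 0.4, 0.0, 0.0, 1.0)
    on_touch("/touch", 0.8)
    draw_frame()                               # -> background(178, 0, 0)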

In the end, I chose to present my main prototype as a projection because with the LED strip I couldn’t achieve certain lighting: blacking out (for privacy) and the various tints of red were not obvious when there are many pixels going off at one time.

All in all, there was A LOT of trial and error, but I genuinely enjoyed learning ZIGSIM and Wekinator. No doubt there are some ideas that I still have a hard time executing, such as Idea 1, but I still see so much potential in this application. Very intriguing to know that my phone could essentially be my modern-day mood ring!

 

DOW IoT: Descriptive Camera

http://mattrichardson.com/Descriptive-Camera/

In this post, I will be talking about Descriptive Camera created by Matt Richardson in 2012.

Have you ever gone into an expensive restaurant, opened up the menu and, instead of seeing beautiful pictures of exquisite food, all you get is a bunch of fancy culinary terms strung together, and somehow they expect you to know what to order?

In my opinion, the Descriptive Camera reflects that. After the shutter is pressed, you would expect to see the photo taken, yet instead all you get is black-and-white text. And then it’s up to your interpretation skills to decipher what was taken, just like how you would have to imagine what your food was going to look like.

In my own (and simplified) words, this is how the Descriptive Camera is built. We have a USB webcam, a thermal printer, 3 LEDs and 1 button (acting as a shutter button), all connected to a BeagleBone (a small single-board computer) and tied together by a series of Python scripts. When the shutter button is pressed, it triggers the webcam to take a photo. The photo is then sent via the Internet to a platform where there are people waiting to complete tasks. This platform is called Amazon Mechanical Turk, and it’s almost like a 24/7 workforce. The photo is sent together with a task, which is to describe what is in the photo. Someone out there does exactly that and sends a description back, which is received once again over the Internet. This output is then printed out on the thermal printer.
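
If you prefer code to prose, here’s my reading of that pipeline as a short Python sketch. It is not Richardson’s actual scripts: the Mechanical Turk step and the printer call are placeholders I made up, and only the webcam capture uses a real library (OpenCV):

    # Rough sketch of the Descriptive Camera pipeline (my reading of it, not the original code).
    # submit_describe_task() and print_receipt() are hypothetical placeholders.
    import time
    import cv2  # OpenCV, for grabbing a frame from the USB webcam

    def take_photo(path="shot.jpg"):
        cam = cv2.VideoCapture(0)              # USB webcam
        ok, frame = cam.read()
        cam.release()
        if ok:
            cv2.imwrite(path, frame)
        return path if ok else None

    def submit_describe_task(photo_path):
        # Placeholder: in the real camera this posts the photo to Amazon Mechanical Turk
        # as a "describe this image" task and polls until a worker answers.
        time.sleep(1)                           # the real wait is on the order of minutes
        return "A corner of a desk with a mug and some papers."  # made-up example output

    def print_receipt(text):
        # Placeholder: send the description to the thermal printer, Polaroid-style.
        print(text)

    def on_shutter_pressed():
        photo = take_photo()
        if photo:
            print_receipt(submit_describe_task(photo))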

To all my visually-inclined readers who have no idea what they just read, here’s a visual aid:

These days, when a photo is taken, there’s a lot of data that comes with it: the date and time it was taken, where it was taken, the camera settings it was taken with, but not so often the contents of the photo, such as what the subjects are doing, their environment, or adjectives describing the scene. The Descriptive Camera was created with this in mind and aims for something of the latter. Richardson also believes it could be incredibly useful for searching, filtering and cross-referencing photo collections.
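
To see what that “data that comes with it” looks like, here’s a tiny Python sketch using Pillow to dump a photo’s EXIF tags (“photo.jpg” is just a placeholder path). Notice that none of it describes the content of the picture, which is exactly the gap the Descriptive Camera is poking at:

    # Dump the metadata baked into a photo file (date, camera settings, GPS...)
    # using Pillow. None of these tags say what is actually *in* the picture.
    from PIL import Image, ExifTags

    img = Image.open("photo.jpg")               # any JPEG from a phone or camera
    exif = img.getexif()

    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")
    # Typical output: DateTime, Make, Model, ExposureTime, GPSInfo... but no description.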

These are my thoughts-
Considering that this was created in 2012, a good 8 years ago, I think he really foreshadowed what we actually see today. On our iPhones, Apple automatically uses machine learning to identify repeated faces in our Photos app and collects them into the People album. Here’s an example of what I see in my gallery:

So this camera, though it doesn’t do exactly that, speaks of a concept that is very relevant and practical.

I also appreciated the fact that Richardson decided to build a camera form instead of just using a smartphone’s built-in camera. The phone itself contains a truckload of data, and with his intention of streamlining data, using a phone camera could compromise just that. And though this might sound shallow, having the tactility of a physical camera is just fun! Personally, the sensation of clicking a camera’s shutter is very… satisfying.

The description receipt that comes out at the end also feels sort of like a reward, and seems to resemble a Polaroid camera. The novelty of having a physical print, away from boring digital pixels, excites me!

On the flip side, Richardson did mention that each print costs money, because you do need to pay the person describing the image, especially for them to do it on the spot. There is also an apparent lag time of 3 to 6 minutes.

I think with technology these days, if there were a chance to reconfigure the system, we could definitely look at face recognition using OpenCV, Python and deep learning, which seems to have gained traction only a couple of years back. And accompanying this should be object recognition AI, an even more recent technology. Overall, I believe this would eliminate the need for any humans at all and shorten the time before feedback.
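
As a sketch of what that reconfiguration could look like today, here’s a short Python example that swaps the Mechanical Turk worker for torchvision’s pretrained COCO object detector and strings the detected labels into a crude “description”. It assumes torchvision 0.13 or newer and a placeholder image path, and it obviously produces something far blunter than a human’s sentence:

    # Sketch: replace the human describer with a pretrained object detector.
    # Uses torchvision's COCO-trained Faster R-CNN (torchvision >= 0.13 API).
    import torch
    from PIL import Image
    from torchvision.transforms.functional import to_tensor
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    categories = weights.meta["categories"]             # COCO class names

    img = Image.open("shot.jpg").convert("RGB")          # photo from the camera
    with torch.no_grad():
        result = model([to_tensor(img)])[0]

    # Keep confident detections and string them into a crude "description".
    labels = [categories[int(i)] for i, s in zip(result["labels"], result["scores"])
              if float(s) > 0.6]
    print("This appears to be a photo containing: " + ", ".join(labels or ["nothing recognisable"]))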

In conclusion, the Descriptive Camera is a paradox in many ways: you expect photos but you get text; it has a touch of rusticness but speaks of an idea so pertinent to the present. All in all, this work got me thinking and I genuinely enjoyed learning more about it.