Final Sem Project: BALLance


The goal of BALLance is to balance a ball for as long as possible without physically touching it. By redefining the idea of physical toys, BALLance exemplifies the idea of telepresence: it gives participants a façade of being present at a place other than their true location. Because BALLance can be played remotely, the system renders distance meaningless.

 

General Flow of setup:

The Leap Motion takes:
– the hand's pitch angle, which translates to the front-and-back movement of Servo motor 1
– the x position of the middle finger, which translates to the left-and-right movement of Servo motor 2

In order to release a ball:
– When a pinch is detected, Servo motor 3 is activated.
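
As a reference for how this mapping could look in code, here is a minimal Processing sketch along these lines, using the Leap Motion for Processing library (de.voidplus.leapmotion) and Processing's Serial library. The mapping ranges, the pinch threshold, the comma-separated serial protocol and the port index are illustrative assumptions, not my exact values:

import de.voidplus.leapmotion.*;  // Leap Motion for Processing library
import processing.serial.*;

LeapMotion leap;
Serial arduino;

void setup() {
  size(640, 480);
  leap = new LeapMotion(this);
  arduino = new Serial(this, Serial.list()[0], 9600);  // port index is machine-specific
}

void draw() {
  background(0);
  for (Hand hand : leap.getHands()) {
    // Hand pitch -> Servo 1 (front/back); the input range here is a guess
    int servo1 = (int) constrain(map(hand.getPitch(), -60, 60, 0, 180), 0, 180);

    // Middle-finger x position -> Servo 2 (left/right)
    PVector tip = hand.getMiddleFinger().getPosition();
    int servo2 = (int) constrain(map(tip.x, 0, width, 0, 180), 0, 180);

    // Pinch -> Servo 3 releases the ball; 0.8 is an assumed threshold
    int release = hand.getPinchStrength() > 0.8 ? 1 : 0;

    // Simple comma-separated protocol for the Arduino to parse
    arduino.write(servo1 + "," + servo2 + "," + release + "\n");
  }
}

On the Arduino side, the sketch would just parse the three values, write the first two to the servos, and trigger the release servo when the flag comes in.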

Tracking High Score:
– When an orange-coloured pixel is detected, the timer starts. When an orange pixel is no longer detected, the timer stops.

 

Process:

Structure

Picking up from where I left off with the low-fidelity prototype, here are the updates to the structure:

The main issue with this structure is that I totally forgot the body of the motor needs to be mounted on the structure so that only the rod turns. That's why in the photo it is held on by rubber bands. After sorting out the mounting, the corners were in the way and needed to be shaved down.

Also, the motor's shaft was SO tiny that the rod barely held on, so I needed to find another way to attach it.

Changes made:

The other motor, for the front-and-back movement.

Calibrating the board

Following the video above, the next alteration, on LP's advice, was to add a cloth, and that's what I eventually settled on. Works great!

 

High Score

It basically works like this: the sketch looks for a pixel that is (closest to) orange, and the timer starts when that pixel is found, i.e. when the ball rolls in. The timer stops when the pixel is gone, i.e. when the ball rolls away. This time is then compared to the previous high score; if it is longer, it becomes the new high score.
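
In Processing, a minimal version of this logic could look like the sketch below. The RGB thresholds for "orange" and the pixel-sampling step are assumptions to be tuned against the actual camera feed:

import processing.video.*;

Capture cam;
boolean ballSeen = false;
int startTime = 0;
int highScore = 0;  // longest balance time, in milliseconds

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // Scan for an orange-ish pixel (thresholds are rough guesses)
  boolean found = false;
  cam.loadPixels();
  for (int i = 0; i < cam.pixels.length; i += 10) {  // sample every 10th pixel for speed
    color c = cam.pixels[i];
    if (red(c) > 180 && green(c) > 60 && green(c) < 160 && blue(c) < 90) {
      found = true;
      break;
    }
  }

  if (found && !ballSeen) {         // ball just rolled in: start the timer
    startTime = millis();
  } else if (!found && ballSeen) {  // ball just rolled off: stop and compare
    int elapsed = millis() - startTime;
    if (elapsed > highScore) highScore = elapsed;  // new high score
  }
  ballSeen = found;

  fill(255);
  text("High score: " + highScore / 1000.0 + " s", 10, 20);
}

Drawing the high score in draw() also makes it easy to sanity-check the detection live against the camera feed.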

I initially had some problems setting up a camera, but the problem was solved by using Processing 4. I also managed to set up my DSLR as a virtual webcam with Camera Live and CamTwist via Syphon.

Final thoughts:
This project was so SO much trial and error. Because there was not one fixed result I was looking for, sometimes I was not too sure whether the ball was rolling off because the ramp was at the wrong angle, or because the board was too small, or simply because I was lousy at the game and lack hand-eye coordination. I had to try many different calibrations and keep picking up balls to find a good in-between. I also had to brainstorm a way to get the ball onto the device without physically touching it, since I felt the gameplay could vary depending on how the ball was placed onto the device.

Overall, given more time and the feedback from user testing, I believe that if I could somehow calibrate the board along the Z-axis too, this game could be a lot more convincing.

Speculative Sketch: Save space camera

So if you don't already know, data is a trendy and essential thing. But where does all this data go? Sure, you have heard the saying "let's save it to a cloud drive", but we don't literally mean a CLOUD drive. In fact, data is actually stored in data centres, on many servers, sometimes also known as server farms.

With this in mind, I would like to bring your attention to this quote from The Straits Times.

Scientists have predicted that, at this rate, the world will run out of data storage capacity in 181 years’ time – even if every atom on earth were used to store data.

https://www.straitstimes.com/singapore/world-faces-data-storage-crunch-ahead

OH NO we're in troubleeee.

DOW IoT: Descriptive Camera

I will be expounding on Matt Richardson's Descriptive Camera to address this issue. What the Descriptive Camera currently does is describe what is seen in the photo. But in my ~genuine~ opinion, with the internet feeding us images and information all the time, I honestly do not see why I need a camera to tell me what an item is unless the item is hella INTERESTING.

So this brings me to my (crazy) idea: a camera that you log into with your social media accounts. Whenever you take a picture with this camera, it taps into the recommendation algorithm that social media uses to keep you addicted to your phone by suggesting what you might like based on what you've seen. But this camera works in reverse: it filters out everything in that photo that you have seen too many times and that is no longer interesting. This leaves you with just the interesting portion of the capture.

In my head, this means that if a particular genre of photo has been taken too many times by the whole world, to the point that you can search online and find something similar, the photo will practically not be taken at all because it comes out as a black photo. It will eventually be deleted, in turn SAVING SPACE.

Sure, it will also set off a vicious cycle for the rest of the internet that revolves around images. But now even IG can reduce its storage space.

Overall, this is my primary thought, but I think this camera can also speak to many other issues. I could see that if this camera gets deployed, it could create a rat race to see who can upload something first. Looking at it negatively, this speaks to society's obsession with trends; positively, we could say it is pushing for innovation. Who really knows?


27 Oct Update

Just a heads up: I'm stuck.

Right now, the goal is to have a camera detect whether the orange ball is still on the plane: if yes, the timer runs; if not, the timer stops.

I have tried a few methods to achieve this, but none of them work.

  • Processing + MacBook FaceTime HD Camera
    Used the 'Video' library of Processing (see the minimal test sketch after this list).
    Problem as per screenshot: unfortunately, I am on macOS Catalina ver 10.15.5.
    https://github.com/processing/processing-video/issues/134
    Tried the solution suggested by ihaveaccount on the issue above, but the string doesn't return what they said it would. Without the Video library, this also rules out external webcams (I tried with my DSLR as a webcam).
  • Processing + IP Camera
    Since I was hoping for the webcam to be portable anyway, I tried to use an IP camera.
    Problem: in my Processing sketch, nothing shows up- no error, no video feed- nothing. The IP camera doesn't show that it is connected to anything either.

    Not sure if it's because I'm accessing Processing on a MacBook while using an Android IP camera app?
  • TouchDesigner + IP Camera
    Used the Video Stream In TOP with the RTSP network protocol. While yes, there is a video feed, it is SOOO choppy and pretty much only refreshes when I "pulse" the node.
  • TouchDesigner + Web Camera
    https://forum.derivative.ca/t/video-device-in-webcam-doesnt-work/12208/13
    Once again, it seems like for everyone else on macOS versions after 10.14, the webcam doesn't work either.
    I did try with the latest beta ver of TouchDesigner; however, it only works when I first start a timeline. Once I save the file, close it and then reopen it, it's just a black screen again.
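
For reference, the minimal capture test that fails on Catalina is roughly this; on a working setup, Capture.list() should print the available cameras:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  printArray(Capture.list());  // on Catalina, this is already where things go wrong
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
}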

So now I'm stuck. I'm thinking maybe a Raspberry Pi (since it runs its own Linux-based OS rather than macOS) with a webcam or a camera module, and maybe something along the lines of Python with the OpenCV library- or it would be great if I could run Processing on the Raspberry Pi. But this is a whole new set of problems, because first, I don't have a Raspberry Pi, and just learning to work it is like 😵🤯😩.

Low-fidelity prototype

 

  • Front-back tracking
  • Had to lift my whole hand to change the angle of the servo
  • Couldn't keep my wrist in place and just adjust my hand
  • After adding both motors to track both front-and-back and rotation (in this case still 180° rather than 360°, because I am using servos), the motors were going CRAZZYYY until I used just 1 finger- likely because the sketch was taking coordinates from all 5 fingers. But there was still an issue: even with just 1 finger stuck out, the Leap Motion still tries its best to get readings for all the fingers.

  • Kinda works
  • Looks very rigid. Tried using a board and a bead to simulate the game for now, and
    realised the UX might be a bit confusing, because the bead starts off sitting still on the board- so there is no point in moving your hand if the goal of the game is to keep the ball balanced on the plane for as long as possible

After consultation:

  • A DC motor might not be the most practical choice because you can't control the angle of rotation, only the number of rotations- which needs sensors to send feedback too
  • Build a more stable structure, have the motors screwed on!
  • Adjust the code such that if the movement is too small, the values won't be sent over to the Arduino- to clean up the jerkiness (see the sketch after this list)
  • Use a webcam to detect where the ball is, to start and stop the timer
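
On the third point, a simple deadband in front of the serial write should do it. A minimal sketch of the idea in Processing, where sendIfChanged() is a hypothetical helper and the 3° threshold is just a starting guess:

import processing.serial.*;

Serial arduino;
int lastServo1 = -1, lastServo2 = -1;
final int DEADBAND = 3;  // degrees; jitter smaller than this is ignored

void setup() {
  arduino = new Serial(this, Serial.list()[0], 9600);  // port index is machine-specific
}

// Call this instead of writing every frame: values only go out when they
// move past the deadband, which should clean up the jerkiness
void sendIfChanged(int servo1, int servo2) {
  if (abs(servo1 - lastServo1) > DEADBAND || abs(servo2 - lastServo2) > DEADBAND) {
    arduino.write(servo1 + "," + servo2 + "\n");
    lastServo1 = servo1;
    lastServo2 = servo2;
  }
}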

Final Pitch: BALLance

Previous pitch: https://oss.adm.ntu.edu.sg/ho0011an/final-project-pitch-build-me-up/
Short recap: basically, I was too fixated on the idea of using a weighing scale as the input just because I wanted that specific aspect to tie the project to the theme 'Distant Body'. But, overall, the initial pitch didn't really make sense, and I was left with the question- what more can you do with 'reflection'?- and was advised to do experiments first.

So- that’s what I did.

Using TouchDesigner, I tried to move a red line to where there is motion.

Trying to replicate Daniel Rozin’s trash mirror but on a digital screen. This led to a couple of random ideas.
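
The TouchDesigner network isn't shown here, but the experiment can be approximated in Processing with plain frame differencing: find the column of the webcam image that changed the most since the last frame and draw the red line there. This is just an illustrative re-creation of the idea, with the sampling steps as arbitrary choices:

import processing.video.*;

Capture cam;
PImage prev;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  prev = createImage(width, height, RGB);
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);

  cam.loadPixels();
  prev.loadPixels();

  // Find the column with the largest frame-to-frame brightness change
  int bestX = 0;
  float bestDiff = 0;
  for (int x = 0; x < width; x += 4) {           // sample every 4th column
    float colDiff = 0;
    for (int y = 0; y < height; y += 4) {        // and every 4th row
      int i = y * width + x;
      colDiff += abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
    }
    if (colDiff > bestDiff) { bestDiff = colDiff; bestX = x; }
  }

  stroke(255, 0, 0);
  strokeWeight(3);
  line(bestX, 0, bestX, height);                 // the red line follows the motion

  prev.copy(cam, 0, 0, width, height, 0, 0, width, height);
}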

Brief explanations of some ideas
Idea 2: Thinking along the lines of creating a tangible user interface, perhaps where the physical ball is able to move according to the projected graphics, or vice versa, where the projected graphics change according to the placement of the ball.
Idea 3: Maybe a screen-based device where the user can manipulate the graphics in '3D' space?
Could be in the form of 2 screens, or maybe 1 screen with acrylic to give a hologram look.

What Harvey Moon said: “you don't need disguise or unreal or stype to do the virtual production stuff. you can do it…

Posted by TouchDesigner on Saturday, 26 September 2020

Idea 4: Individual modules move up to replicate the shape of what is detected on the webcam, e.g. a pin art toy.

With all these ideas, I was further challenged to do something with the ball- to perhaps make it into a game.

So after deliberating more, my final sketch idea is an expansion of Idea 4.

It is going to be a game, and the only goal is: control the plane remotely with your palm such that the ball stays on the plane for as long as possible. The timer starts when a hand is detected and stops once the ball drops off the plane. Longest time = highest on the scoreboard.

Flow:

Software:
TouchDesigner
Arduino IDE

Hardware:
Leap Motion
L298N driver + dual-shaft DC motor + single-shaft DC motor
Arduino UNO

Week-by-week breakdown of tasks:

Sketch MultiModal: Online Dating Accessories

 

Initial Sketches

Went with Idea 2
Concept as explained in video

Initially, I figured in my head that I would need two Arduinos (one for the user- stationary- and one for the potential love- portable). But it was only halfway through building the project that I realised that even if I were to have two Arduinos, both of them would still need to be bound together (ref to bottom), so there would be no difference in UX even if I just used one Arduino. That was when my project shrank quite significantly and became hardware-heavy instead, which led to my problems stemming from hardware management.

First lesson of the day: don't be rough. In an attempt to put the "necklace" over my head, I tore the vibration motor. Trying to put the devices around me also meant I couldn't conceal a huge bulk of wires, and I needed to make a lot of "long" wires to reach from the controller to the user's neck. Mess.

Decided to make an accessory box instead.

Takeaways:

  • Bringing it forward, an aspect I can look into, after realising that something doesn't go according to plan, is being flexible enough to trial it as something else- in this case, the necklace as perhaps a belt. Working with what I have and improvising along the way.
  • There needs to be a way for the user to know what the different outputs mean. So say someone likes your photo and the ring lights up- when the user sees it, what is going to hint them to make that association? (Something very simple I can think of is maybe- literally- a heart pendant/gem that lights up for a 'like', or, since the 'super like' icon is a star, maybe a motor could move in the direction of how you would draw a star)
  • Hardware UI to screen based UI

DOW Multimodal: Ball

Images taken from https://www.behance.net/gallery/60769739/Ball

Ball is a smart fire extinguisher, created with the blind in mind. The whole product comes with a sensor, the spray and a refill capsule.

Here’s how it works:

In case of fire, the sensors placed around the house would detect heat and smoke.

It will alert the user via the speaker on the fire extinguisher. It will also guide the user on how to use it.

When the fire extinguisher is activated, the nozzle is able to rotate and automatically find the source of the fire using its heat-detection camera.

So essentially:

The very obvious advantage of this product is how it is able to make up for the user's visual capabilities. Through audio and heat detection, Ball is still able to give the user control, but at the same time it aids the user with that control. It could very well have been a vibration/smell output instead of audio- they would all be able to alert the user anyway. But in the context of a blind person who still has to get to the fire extinguisher itself, I felt audio was the smarter and better mode, because blind users are usually more sensitive to auditory information and use it to locate objects faster.

The mechanics are also easy: Ball sprays with a simple pull of a loop.

Considering that this is only aimed at small initial fires, I think there should be other forms of output to alert the user when the fire is too intense or too close, and when they should ditch the fight against the fire. This way, Ball can maximise the potential of the information it receives, becoming not just a defence mechanism but also a guide for getting out of the situation safely.

I am also not too sure what is going to happen if the fire is behind the user.

In conclusion, I chose to look at this device from a multimodal point of view because I believe it has the potential to grow, to collect more data and transmit it into various other forms of output. Maybe it could somehow tap into the user's mental perception of the space using audio beacons? Maybe it could give haptic feedback based on the intensity of the fire? Nonetheless, I cannot deny this is an innovative piece of assistive tech, tapping into both "low" and "high-tech".

 

Sketch Multimodal

IDEA 1 – Office Space
imagine 2 co-workers trying to communicate with each other discreetly

  • Pen lifting up and down – “Call me”
  • Pen spinning in the pen holder – “10 more min to lunch!!!”
  • Pen holder vibrating – “Boss is walking over!!”

IDEA 2 – PhyDigital Dating Space
when all you can think about is whether anyone liked you, started a chat, super-liked you, etc. on Tinder

  • Earrings pulled down – “comment”
  • Necklace starts rotating – “like”
  • Mask gets pulled down – “chat”
  • Feeling wind – “super like”

IDEA 3 – Fighting Couple Needs to Talk
you're in bed but can't go to sleep because you need to talk to the person you fought with, who also happens to be in the same bed as you

  • 3 Red LEDs – “let’s talk”
  • 2 Red LEDs – “yes”
  • 1 Red LED – “no”
  • 3 Green LEDs – “sleeping?”
  • ETC

DOW Health: FreeStyle Libre

Here's how you can be one step closer to being a Cyborg. First, have diabetes; then, get yourself a set of FreeStyle Libre.

FreeStyle Libre is designed for diabetic patients as an alternative to the traditional way of monitoring blood glucose levels. In the traditional way, to check their blood glucose level, patients had to prick a finger with a lancet (a sharp-pointed needle) and then add a couple of drops of blood onto a test strip. This strip is then inserted into a meter, which gives their blood glucose level. Patients would have to do this procedure a minimum of 4 times a day (depending also on the type of diabetes) to manage their levels and to reduce their risk of developing a range of diabetes-related complications.

So what FreeStyle Libre does for patients is that it eliminates the need for a fingerstick routine and with each scan, provides:

  • A real-time glucose result
  • An eight-hour historical trend
  • A directional trend arrow showing where glucose levels are headed

The FreeStyle Libre system consists of a small fibre which pierces the skin into the interstitial fluid, a thin layer of fluid that surrounds the cells of the tissue below your skin. It takes the glucose readings, which are stored in the sensor. A 'reader' device is passed over the sensor, and the last 8 hours of readings are transferred to the reader. It is simple and discreet.

(And to sound less like a commercial:)

Think barcodes at the supermarket, but now you’re the product. A quick scan of the barcode and you get the price of the product. Similarly, a quick scan of the sensor and you get your glucose readings.

This is an image I took from their website, in which you can see that besides the reader and the sensor, there's also LibreLink, their app that can be used in place of the reader through an NFC connection, and the LibreLinkUp app, which allows your data to be shared through their online cloud system.

With that, it brings me to one of the biggest pros of this device. We all know that the world is currently in the midst of the coronavirus (COVID-19) outbreak, and having this system means that patients can continue to connect with healthcare professionals remotely. For diabetic patients, having their healthcare team up-to-date with their progress means being able to strategise effectively and eventually shortening the time to achieving key clinical targets. So, not having the virus outbreak be a hindrance to patients' recovery journeys makes this device a real advantage.

Another pro point of this set-up is the convenience it brings. Compared to a simple scan with the phone/reader, I would imagine the traditional finger-pricking method to be a total nightmare- can you imagine the process of having to find a clean spot, wash your hands, sterilise the lancet, force out blood, etc. that many times a day? And not to mention, this process feels so intrusive! If it were me, I am not sure how much discipline it would take for me to keep it up every day. There are bound to be patients out there who find it hard to sustain this routine as well, in turn letting their condition deteriorate; so, having the option of eliminating the whole hassle seems like a huge plus to me!

One point of improvement could be the implementation of live feedback when glucose levels fluctuate drastically. It could simply be a visual indicator or a haptic vibration; I think this would help patients be notified of their symptoms, especially when they are distracted, so they can take immediate action before the situation gets worse. Such an implementation would turn the device into a continuous monitoring system rather than one that only works at the moment of a scan.

I'll be leaving out the analysis of the scientific concepts and the accuracy of readings, for the obvious reason that I too have no idea how to make them any better. But in conclusion, I do think the design of the FreeStyle Libre is a really good example of how much technology can improve the mundane chores of everyday life. Creating something up-to-date, something revolutionary, not only makes things so much more efficient, but I assume would also encourage patients to become more engaged in their care and recovery process. FreeStyle Libre's idea of a quick scan is undeniably life-changing.