Interactive Devices – Final Project Research 1: Brainstorming

For the Interactive Devices project, I intend to work solo so that I can learn more through the exploration process.

After some research, I have three general ideas in mind that I might consider:

Wearable Devices:

  • Could be for fashion

    – Something that looks nice

  • Could be for entertainment

    – Something that is amusing to the user/viewer

  • Could be for practical uses

    – Something like an air-conditioning system

  • Could be for medical purposes

Musical Devices:

  • Could be a totally new instrument that is not available on the market.

  • Could be a system that plays existing instruments:

    – Strumming a guitar
    – Blowing pipes/flutes
    – Ringing bells

  • Could incorporate a pendulum

    – A pendulum is an interesting object that produces beats
    – and is rather amusing

  • Could read sheet music and play it automatically

    – Could be like a music box

Magnetic Devices:

  • Magnets have really interesting properties
  • Could be a combination of permanent magnets and electromagnets
  • Magnetism could result in the levitation of objects, which is really amusing

Conclusion

Overall, out of the three main categories of wearable, musical, and magnetic devices, I like the idea of making a musical instrument the most, and my initial idea is a device that can read sheet music, like a music box. I shall explore music boxes further in the next research post.

OBS Livestream Documentation

I am unable to embed the video here as I posted it directly to the NTU OSS Facebook group, hence: >> CLICK HERE FOR THE VIDEO IN A NEW TAB <<

Initially I wanted to tackle the idea of multitasking by doing many tasks simultaneously while live, but after watching BOLD3RRR by Jon Cates, I got the idea of playing multiple pre-recorded videos in the live stream to produce a piece that is slightly chaotic, with many events happening on the viewer's screen at once and the sounds from the recordings disrupting my live speech. Originally I wanted to have multiple cameras: one filming me from the front, and others filming from other angles such as the back of my head and the side. However, I was unable to find my additional webcams, so I resorted to a screen share from my phone. My phone is usually unlocked and in the “auto-play mode” of some game, which recently has been Pokemon Go, so I wanted to show it as part of my life.

Before the stream

I recorded three grid videos of 15 minutes each on different days, but I wore the same shirt to give the illusion that they all happened at the same time. Since I was using my desktop, which does not have a webcam installed (my laptop kept crashing), I stuck a USB camera onto a small tripod and placed it in front of me.

In my 10-minute live stream

As I said, I will talk more after this first class assignment:

Note to self: I should talk more in the live video.
-Zi Feng, 21 Aug 2017.

Although I am really bad at English and don't enunciate words properly, I tried to narrate as if I were living in a third space and interacting with the recordings, dragging “myself” around the third space to describe what was happening on screen. I remembered from our Adobe Connect lesson that when the camera is flipped, I had a hard time coordinating my movements, so I flipped the live camera (only the live feed is flipped) to make it a mirror. Also, I was not sure if there was an audience, but I went ahead and asked if anyone could hear me. Makoto replied, but I only saw the reply about 5 minutes later. (I should check the comments more frequently; now I know.)
“Live” me flying over to point at “third space” me

The reason I used this YouTube video from TrainerTips is that it was the first live stream of his series, and I was watching and recording it while he was live. The basic idea is to have a live stream playing while I stream myself doing things live in my own live stream; it's like a Live-Streamception.

Then something happened, as Makoto mentioned while he was in the audience during the live stream: the top-right (pre-recorded) and bottom-right (live) screens go to the desktop at around the same time, and you can see the changes that happened on my desktop over those few days.

As Alvin mentioned in his comment on my previous post, gesture is an important factor in communication, so I tried to incorporate hand movements into the live stream when I was explaining something. I was talking about how Korean manga is usually in a long strip format, but the website where I read it (MangaFox) cuts it into a page format.

Throughout the 10 minutes of being live, I was kind of lost and my mind went blank multiple times. I guess this is the downside of going live, but also the beauty of it: the imperfections of real time. I also asked the audience if there was anything they could recommend I do, but I think there was no audience at that point, so I ended the stream soon after.

Lastly, future live streams will be posted to my timeline and then shared to the OSS Facebook group, because posting directly to the OSS group prevented me from embedding the video in this post (the OSS Facebook group is a closed group).

Special thanks to Makoto and CherSee for being my live audience and reacting to me during the live stream. Thank you!!! =D


Device of the week 1 – Pokemon Go Plus

The Pokemon Go Plus is a small Bluetooth device that allows the user to play the mobile game “Pokemon Go” without looking at the phone, even while the screen is locked. It notifies the user of in-game events in real time, such as the appearance of a Pokemon the player could catch or a Pokestop where the player could collect items. By clicking the button when the LED flashes, the Pokemon caught or the items received are added to the player's inventory instantly.

Since I own one of these devices, I thought I could open it up to study its components and how it works.

Pokemon Go Plus in bracelet form; it can be changed to a clip form.

It runs on a CR2032 coin battery (3 V, 210 mAh) and can run for weeks before the battery dies.
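As a rough sanity check on that claim (assuming, say, a month of runtime, which is my own assumption rather than an official figure): 210 mAh spread over roughly 720 hours works out to an average draw of about 0.3 mA, which seems plausible for a Bluetooth LE device that spends most of its time sleeping.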

When fully opened, the components are really simple.

Those are the basic components that handle all the functions it needs: flashing in RGB, vibrating, the push button, and connecting to the phone.

While researching this device, I found that a Pokemon Go Plus can be built using an Arduino; however, the hardest part is the encryption between the device and the Pokemon Go game, as it is really complicated and every user has their own encryption. More info is in the project Pokemon Go Plus Reverse Engineering, and there is an ongoing tutorial built on that reverse engineering: Pokemon Go Plus DIY.

Also, I modified my Pokemon Go Plus: I removed the circuit to the vibration motor and wired that connection to the push button. This way, every time the motor is supposed to vibrate, the signal goes to the push button instead, removing the need for me to press the button physically and giving me an “auto catch and auto collect items” mode. As long as the device is connected to my phone and the Pokemon Go app is running in the background, I have a chance to catch everything in my path without touching the device or the game at all.
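My modification was done purely in hardware by rewiring, but the same idea can be sketched in software. Below is a minimal, hypothetical Arduino sketch of the “auto press” logic, assuming an Arduino-based clone like the DIY project linked above; the pin numbers and timing are my own placeholder assumptions, not values from the real device:

```cpp
// Hypothetical sketch of the "auto press" idea on an Arduino-based clone.
// Pin numbers and timing are placeholder assumptions for illustration only.
const int MOTOR_SENSE_PIN = 2;  // senses the line that would drive the vibration motor
const int BUTTON_OUT_PIN  = 3;  // drives the line the firmware reads as the push button

void setup() {
  pinMode(MOTOR_SENSE_PIN, INPUT);
  pinMode(BUTTON_OUT_PIN, OUTPUT);
  digitalWrite(BUTTON_OUT_PIN, LOW);  // button idle (not pressed)
}

void loop() {
  // Whenever the device tries to vibrate, "press" the button instead.
  if (digitalRead(MOTOR_SENSE_PIN) == HIGH) {
    digitalWrite(BUTTON_OUT_PIN, HIGH);  // simulate the button press
    delay(200);                          // hold long enough to register
    digitalWrite(BUTTON_OUT_PIN, LOW);   // release
  }
}
```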

Overall, the Pokemon Go Plus is a very simple device that does what it is supposed to do. It is a great add-on for Pokemon Go players and saves a lot of the phone's battery, since the app only needs to run in the background. I think this idea of producing a dedicated device that links to a phone app could be possible for my FYP; although it may seem easy, the connection and encryption between the app and the device might be really difficult.


BOLD3RRR Reflection: GLIXCH#^D

Bold3RRR by Jon Cates was a magnificent piece that explored the eccentric nature of glitches and recursivities. I found it really amazing because it was a live stream and the effects were rendered in real time.

It seemed to me that Jon Cates planned to give viewers a sense of cognitive dissonance by establishing the piece in a non-linear way: the message was cut into parts and overlapped in the later part of the live stream. He used our senses against us to trigger a perception of chaos, employing our two main senses, hearing and sight, to oppose each other by having them interject in a juxtapositional manner, producing a discord in our perception of Bold3RRR.

Aurally, Jon Cates used noises we usually associate with glitches (static, clicking sounds, distorted speech, etc.), with parts of the dialogue cut and looped on top of the speech. Together with the bombardment of sound recordings, phone rings, QQ messenger notifications, and “glitch sounds” in the later part of the video, these intercept the clear dialogue and make it harder for us to understand what Jon Cates is saying.

Visually, there are three varieties of scene in Bold3RRR:
1) A clean frontal video of Jon Cates in full screen;
2) Typographical renderings of the script and the project titles of the programs, superimposed on screen captures of the programs/Google Maps;
3) A glitch-scape.

The audio and visuals coordinated really well but only synchronized on rare occasions when the speech matched the frontal video, such as at the start; this happened only a few times throughout. Other than that, we could hardly find any connection between our senses as we were bombarded with multiple dialogues in the same tone, “glitch sounds”, phone rings, messenger notifications, and images and text flashing on the display. Jon Cates made it clear that he was going to work heavily with recursivities and glitch effects right at the start of the video, where he said:

“I want to reflect, on real time… I want to reflect on real time renderings. I want to reflect on real time renderings”

This got me thinking, after watching Bold3RRR more than twenty times: why did he say that three times? It was not the same audio clip replayed three times; it was him, saying it three times. I came to the conclusion that this was both the motivation and the goal Jon Cates had for Bold3RRR. He was playful towards the technology, saw the human quality in real-time glitches, and wanted to reflect that in Bold3RRR through overlaps, recursivities, distorted speech, and sensory overload, imparting a sense of wonder and surprise in the form of “dirty new media”, which is now known as glitch art.

Glitches happen when a program has imperfections and produces results the programmer did not expect. While many people try to prevent glitches as much as possible, artists like Jon Cates embrace the places where these imperfections occur and explore them, generating unpredictable artwork by feeding digital information such as audio or visual samples into the glitch. This seems to be the underlying “biological” character of our digital world, which is otherwise the least organic thing around us. Putting inputs into a glitchy system is as if the glitch gives birth to a whole new entity, an offspring disparate from its parents (the input and the glitchy system). A glitch is also like a crack in a system that generates unforeseen outcomes, representing the imperfections of the system and the unpredictability of human nature. Bold3RRR reflected the idea that there is a human quality and an organic character in our digital new media: real-time rendering shows how humans have always lived their lives, with no turning back in time. Once it is on a live stream, one mistake and the whole audience will see it, and there is no retrieving it.

Overall, I was inspired to use pre-recordings in the production of live video to produce results beyond what a live video alone could ever do. Jon Cates had multiple source files ready before the live Skype stream and pumped them into a glitchy system that mashed them together in the live performance to form Bold3RRR the way it is: eccentric yet captivating.

Typebot: Lim Su Hwee & Ong Zi Feng

This was a great project for us to learn how to program servos to move. Basically, we had some failures along the way, but the end result works far better than we expected. Our first laser cut was too long, and we re-cut it together with the rest of the parts.

Then came our Version 1 Typebot, made almost entirely from MDF, with a finger that we already knew was too long (the marking shows the length we needed).

The base of the rotation servo was too big and the “wrist” was too short, so we couldn't get it to revolve 180 degrees. Because of these sizing errors, we went on to make Version 2.

This became our final Typebot, as the lengths were just right to press both the nearest key (Spacebar) and the furthest key (Escape), which was what we needed since we wanted the Typebot to be able to type all the letters.

The base of the Typebot was stuck to the base of the keyboard with sticky tack to make sure it would not shift around from the time I calibrated each letter in the Arduino IDE to the completion of this project.

Basically, our Arduino code consists of two “if” blocks triggered by two buttons. Button 1 resets all the servos to their original positions: we found that when uploading code to the Arduino, servo 2 swings all the way to 180 degrees, so to prevent it from pushing our parts to the point of breakage, we reset the positions before uploading. That way, the furthest it can move is to just touch the table, and all our parts stay safe.

Button 2 activates the typing code, and it helped me get the values for each motor by letting me repeat a movement just by clicking it. For the Typebot to press one letter, there are six values for a smooth movement from one letter to the next: three values for servos 1, 2, and 3 to travel to the position above the letter without dragging across other keys, and three values for servos 1, 2, and 3 to press the letter.
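Our actual code is not reproduced here, but a minimal sketch of the two-button structure described above might look like the following; the pins, angles, and the example letter values are placeholder assumptions, not our calibrated numbers:

```cpp
#include <Servo.h>

// Minimal sketch of the two-button structure described above.
// Pins, angles, and the example letter values are placeholders.
Servo servo1, servo2, servo3;

const int BUTTON1_PIN = 2;  // Button 1: reset servos to a safe position
const int BUTTON2_PIN = 3;  // Button 2: run the calibrated key press

void setup() {
  pinMode(BUTTON1_PIN, INPUT_PULLUP);
  pinMode(BUTTON2_PIN, INPUT_PULLUP);
  servo1.attach(9);
  servo2.attach(10);
  servo3.attach(11);
}

// Six values per letter: three to hover above the key without
// dragging across other keys, then three to press it.
void pressKey(int hover1, int hover2, int hover3,
              int press1, int press2, int press3) {
  servo1.write(hover1);
  servo2.write(hover2);
  servo3.write(hover3);
  delay(500);  // travel to the position above the letter
  servo1.write(press1);
  servo2.write(press2);
  servo3.write(press3);
  delay(300);  // press the letter
}

void loop() {
  // Button 1: reset before uploading so servo 2 cannot swing
  // to 180 degrees and break the arm.
  if (digitalRead(BUTTON1_PIN) == LOW) {
    servo1.write(90);
    servo2.write(90);
    servo3.write(90);
  }

  // Button 2: repeat the movement for calibration.
  if (digitalRead(BUTTON2_PIN) == LOW) {
    pressKey(80, 60, 45, 80, 60, 30);  // example values for one letter
    delay(500);  // crude debounce; wait for the movement to finish
  }
}
```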

The rest was just trial and error to get the values for all those letters.

Adobe Connect reflection

I think our lesson on Adobe Connect was really fulfilling, as it enhanced our understanding of the third space. The fact that we could see ourselves on the screen acted as a reminder to behave appropriately while the webcam was turned on. It was also a great way to hold a lesson, especially with the lecturer setting the ground rule that we must respond when called; this was a well-thought-out way to make sure we were paying attention at all times.

The part I enjoyed the most was the last part, where we did a collaborative performance in which we joined up with the person beside us to make something interesting.

I also loved that we could chat in the chat box during the lesson, which was a form of entertainment as we joked around in it.

Overall, this was a really fun experience for me, and I would definitely do this again in place of the usual physical classes we always have (for other modules too).


Research Critique: Hole in Space

What the best computer I could buy in 1980 looked like: MS-DOS. GIF from https://archive.org/details/msdos_shareware_fb_MDCD10.ARC

If I imagine living in 1980, when the best computer I could ever purchase (if I were even rich enough) ran only the MS-DOS operating system, in a period when the first computer with a graphical user interface would only appear three years later (the Apple Lisa), and I heard that I could talk to someone 4800 km away, face to face and in real time, I would not believe it until I went down myself and the random someone 4800 km away replied to what I said.

This is how insanely amazing I think the work Hole-In-Space was. This work by artists Kit Galloway and Sherrie Rabinowitz was so ahead of its time that the audience (who also became participants) were not prepared for it. Since there were no signs, no sponsor, and no explanation given during the three evenings of the work, it definitely struck curiosity in passersby, who must have thought: “Wait, what is this? Is this a screen? *stares at someone and he stares back* Hmmm... *waves and the person waves back* HOLLLLLYYYYY SH*TTTTTT IT'S MAGIC!!!!”

This made me want to see what the work was like back then, and I found a YouTube video of it, which was really interesting for how people communicated through the work.

“Its just, umm, television like telephone, right?  …. What a revolution it’s gonna be eh?”

– Random guy from the video A Hole in Space LA-NY, 1980 — the mother of all video chats (8:40)

People were singing, playing charades, and finding their family or friends.
– Playing charades through the work. From the video A Hole in Space LA-NY, 1980 — the mother of all video chats (13:20)

I think the fact that the work was placed there for only three days with no explanation was just right, because on the second day there was news reporting, and on the third day people were calling their family or friends and asking to “meet up” over the Hole-in-Space. This escalated to a big crowd holding signs, shouting and screaming, and the whole situation started to get chaotic. Hole-in-Space has the power to bring people closer psychologically (definitely not physically): families and friends saw long-lost friends, and strangers interacted with each other when they normally wouldn't. This was evident in the reading Welcome to the “Electronic Cafe International”:

“Los Angeles’s Korean community and black community traditionally are at odds with each other. As they met and got to know each other through the network, they wanted to visit each other!”

Welcome to the “Electronic Cafe International” (page 349)


People could gain understanding through meeting in the third space, where they could react to one another. Seeing each other's faces in real time was essential to people opening up, since they knew they were protected from harm. However, in the reading:

“Still, a person is offended when the virtual-space hand “touches” body part that wouldn’t be touched normally “in the flesh.” In virtual space, we learn the extent to which we “own” our body image”

Welcome to the “Electronic Cafe International” (page 347)

This sent me into deep thought: since our virtual image is just a group of pixels formed on a screen, “our” virtual image does not exactly belong to us but to the owner of the screen. Is that ethically correct? These pixels group together to look like me, and I go “this is me” while pointing at the screen, but are they really me? Humans' way of possession is too weird.


Back to the topic: I think Hole-in-Space was an amazing work that turned every viewer into a participant. Through it, the interaction between the first space and the third space became part of the art, as people not only talked to others 4800 km away but also to those beside them, forming connections because of Hole-in-Space. These participants transformed Hole-in-Space into a platform for collaborative performance art in which all the viewers became the artwork; without their participation, Hole-in-Space would just be a piece that films people walking past on the street, shown in a window.

Lastly, I think Hole-in-Space enabled people to see the potential of technology to shrink the distance between people when used appropriately. It filled people with hope for the technology of the future.
– Contented participant. From the video A Hole in Space LA-NY, 1980 — the mother of all video chats (26:46)