FYP Process 8: Explorer 27 Growing Up.

A few weeks ago, Explorer-27 was a short, dog-sized robot:

…and after weeks of further development…

 

A fast prototype using sticky putty to form a tablet holder.

A lot of time was spent cutting the aluminium profiles and attaching them securely with nyloc nuts and spring washers.

After which, I built the new paper-model dog head (a Husky).

Then I installed the LED eyes into this new paper Husky head.

I re-coded the UI in Unity and connected it to the Arduino system.

Once the Unity program worked, I exported it as a build and installed it on the Windows tablet to run the system program for the robot. The head was also attached to the new robot body.

Head with the UI animation.

As you can see, when the user moves the robot, the eyes follow the direction of movement. This was done by transmitting data from one Arduino to another over the I2C protocol, which links multiple Arduinos together, and then on to the Unity program running on the Windows tablet.
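For anyone curious about how the data hops along that chain, here is a rough sketch of the relay idea, not the exact code on Explorer-27. It assumes the sensing Arduino is the I2C master, the relay Arduino is a slave at address 8, the direction is a single placeholder byte, and readDirection() is a stand-in for whatever sensing is actually used.

```cpp
// Sketch of the Arduino -> Arduino -> Unity relay (assumptions noted above).
#include <Wire.h>

// ----- Arduino A (I2C master): reads the movement direction. Shown only as
// ----- comments so this file compiles as the slave sketch below.
// void setup() { Wire.begin(); }                 // join the bus as master
// void loop() {
//   byte dir = readDirection();                  // e.g. 0 = left, 1 = centre, 2 = right
//   Wire.beginTransmission(8);                   // slave address 8
//   Wire.write(dir);
//   Wire.endTransmission();
//   delay(50);
// }

// ----- Arduino B (I2C slave): receives the byte and relays it to Unity over serial.
volatile byte latestDirection = 1;                // default: centre

void receiveEvent(int howMany) {
  while (Wire.available()) {
    latestDirection = Wire.read();                // keep only the newest value
  }
}

void setup() {
  Wire.begin(8);                                  // join the bus as slave, address 8
  Wire.onReceive(receiveEvent);
  Serial.begin(9600);                             // the tablet reads this serial port
}

void loop() {
  Serial.println(latestDirection);                // Unity parses this and steers the eye animation
  delay(50);
}
```

On the Unity side, the tablet simply reads this serial stream and maps the value onto the eye animation.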

 

 

FYP Presentation 1: OURS, Explorer 27 the Lost Robot

SLIDES and VIDEO compilation

The slides were made with multiple programs:

Adobe After Effects for the animations (I got my animation template online and edited it)
Adobe Premiere Pro for converting the animations into GIFs
PowerPoint for the slides themselves

Why use GIFs for animation? Because a video in slides is not loopable and has too many constraints. If I split one animation into two GIFs (Appearing and Looping) and use PowerPoint's "Appear" animation in a well-timed manner, I can create the illusion that the animation loops.

Example:

This is the "Appearing Animation". It plays automatically because it is a GIF, and it will keep re-appearing because GIFs loop on their own, so it has to be stacked with the "Looping Animation"…

This is the "Looping Animation", which automatically appears (with PPT's "Appear" animation) 3.2 seconds after the slide starts, so the two GIFs flow together fairly nicely.

When put together in PPT, it looks like this. In this example GIF the "Looping Animation" plays 8 times and then "Appears" again, but in the actual PPT it loops indefinitely until I click to the next slide.

That was how the slides were made! I learnt After Effects for it because most of the animations I used in these slides can be applied to the UI of the robot (remember, I have a screen that needs a user interface? Yes, that's the one).

Also, the lab coat I wore during the presentation was customised with the OURS logo to give a more cohesive feel with the theme.

 

What I learnt from the presentation:

Anthropomorphism in robotics

Main issue in the FYP:
Fake AI -> Believability -> Lost -> Anthropomorphism

Lem's Solaris (book and movie, the George Clooney one), I, Robot, and the Russian film Stalker.

FYP part 5- Hephaestus Systems Planning + Software learning + Modelling + Gantt Chart

To understand what to plan for, I need to understand the nature of the project, as completing a physical, mechanical project differs greatly from a virtual, screen-based project like games and visuals. There are more restrictions in a physical project than a virtual one due to the laws of physics, materials and cost.

Money Problems:

Since building a few robots will cost quite some money, budgeting will be even more important than time planning. As for where the money comes from, I will probably save up from selling things online and from work-study, and treat it like a commitment: no one is forcing me to do anything, and funding my own project is entirely my own resolution.
I have thought about asking for sponsorship, and that may happen if I have to, especially for the batteries in the robots. These little things must be of great quality for safety reasons, and a good, durable, high-capacity, low-weight battery costs about $500 and up each. I would need at least 3 (excluding spares), which I simply cannot afford.

Over these few weeks, I have been learning Blender (a 3D modelling software) from scratch. It is really difficult to pick up, but the potential of Blender goes far beyond what I need, so I will stick with learning this super useful program.

I followed a few tutorials on YouTube to learn the basics of Blender; this was my first experience building a 3D model in Blender.

I started learning by building a chest, as it has a similar shape to what I want to produce. After this, I applied the skills I learnt here to my first attempt at the R1 robot.

The overall shape of this is rather similar to the chest, so it took a while to get used to. However, after building it I realised I didn't know how to make the top of the robot, so I moved on to another tutorial.

This was the overall shape I made, and I am pleased with it as a first-timer's effort, although it took me 2 days to get here. After this, I continued building the details on the side and front…

Side view

Front view

I decided to present it like a production poster, so I rendered another isometric view to make it look legit for my presentation.

Blender also has an animation function, and I thought it would be really cool to learn it, so I went ahead with an online tutorial and produced this.

Once I felt this was good enough for the presentation, I tried to 3D print the model. It was then that I realised my model was full of mistakes and only looked good; in actual fact, the surfaces of the robot were really badly made. So this 3D model got no further than this. I will definitely be modelling everything again for the actual robots I build for this FYP, since this model doesn't work, but it was a good learning experience and taught me that I need to build the model's surfaces properly.

The surface detail could not be printed because of a mistake I made during modelling, which created a non-solid (non-manifold) surface that is therefore not printable.

The print was then attached to a small remote-controlled car as a proof of concept to be used during the presentation.

Now, the crowd favourite…
Mr Gantt Chart!

I started the Gantt chart from the 1st of April 2018, as everything up to this point had been research.

Within each task there are multiple smaller tasks that fall into the same category, and I will explain each of them with a short description in this post.

Since my project is physical, mechanical and technological, and I need to get the students' FYP work early, it is really important to start execution early and keep going throughout the holidays: building the actual robots and troubleshooting the system will take quite some time, and I have to ask all FYP students to submit their work to me really early to make everything work.

Research 1 (1st – 30th April):
I think this is the most important part of the project; good research done here will greatly reduce work later on.
Research done up to this point (16th April): similar existing products, potential parts, platforms for interaction, things I have to learn, inspiration from artistic works, parts price comparison, target market and segmentation.

Skills Acquisition (10th April – 8th July):
There is a lot of knowledge and many skills I still lack to complete this project. From the list of things to learn, I need to pick up a few of them to make sure my system can work. I also need to pick up 3D modelling skills, as my current knowledge is insufficient. In the past few days I've started to learn Blender, which is free and great for my project, but time is needed to hone these skills, hence the long period allocated to them.

Initial Purchases (25th April – 27th April):
One of the biggest ways to save money is to purchase parts from China, which takes weeks to arrive, so it is really important to research the parts required and buy them early to get the best result for the least money. Initial purchases are set for the 25th because I will be presenting that day; only after that, assuming no major changes, can I really decide what to buy.

R1 Prototype (Software and Hardware) (2nd May – 8th July):
This goes hand in hand with skills acquisition, since I need a goal for exactly what to learn; it is best to do while learning and learn while doing. R1 is the first robot of the set of 3 that I will build. It will be the bare-bones version of the robots' basic functionality and act as confirmation of the general systems and parts required to build R2 and R3.

R1 Movement System Finalization (18th June – 29th June):
Movement is a really difficult task to achieve while keeping people and the booth safe (it is really easy to make something move, but much harder to make it move without destroying things), so I gave myself more time to think about how I will achieve this.

Research 2 (20th June – 4th July):
When I think of robots, I think of Japan; maybe it's just me, since I was influenced by Japan's robot culture when I was young. I will travel to Japan during this period to experience their advancements in robotics first hand. (Places I will visit: the National Museum of Emerging Science and Innovation (Miraikan), the Unicorn Gundam in Odaiba, the robots at Haneda Airport, Henn-na Hotel, and the Robot Restaurant (not sure about this one).)

R1 Movement Prototype (4th July – 23rd July):
Start prototyping right after I am back from Japan, hopefully having seen first hand how their robots work.

R1+R2+R3 Concept Generation and Refinement (4th July – 27th July):
Since by this point I will already understand what parts R1 requires and have the measurements of parts like the motors and the screen, I can think about exactly how each robot will look, as they will look different and have different functionality.

R1+R2+R3 Secondary Purchase (27th July – 29th July):
Knowing what parts each robot needs, I can finally purchase the basic parts for R2 and R3, plus the add-on functions for all 3 robots (each robot has different functionality and so requires different parts).

3D Modelling (Aesthetics) (27th July – 20th August):
This will be the final appearance of all 3 robots, modelled in Blender.

R1 Prototype (Aesthetics + Software + Hardware) (20th August – 24th Sep):
3D print all the modelled R1 parts, fit them together and make sure the software and hardware work; if they don't, edit and reprint the parts.

R1 Prototype Trial and Testing (24th Sep – 1st October):
When all parts work together, test the robot and system on location to make sure everything works as expected, and fine-tune.

R2 and R3 Prototype V1 (Software and Hardware) (1st October – 5th Nov):
Since the primary components and systems of R2 and R3 are the same as the already-working R1, these 2 robots will require less time; the main work for them will be 3D printing and implementing their different functions.

User Interaction Trial and Testing (5th Nov – 19th Nov):
Testing to make sure there are no major bugs in the system and that the touchscreen and functionality work well.

FYP Students' Work Collection 1 (20th Nov – 1st Dec):
By this time all 3 robots should roughly work and be documented, so instead of just verbally telling the other students that I will help make their FYP better, it will be more compelling to show them a system that already works and ask them to prepare a document for it for their own benefit. (It will not be easy to ask people to do extra work, so I need to sell my idea to them really early, a full semester before the end of FYP, by making these robots cool enough that they would feel they were losing out if their work were not in the system.) At this point they don't have to send me any work yet, and since it is already the semester break, they have time to think about what they want to prepare for the system.

Booth System Conceptualization (20th Nov – 3rd Dec):
By this time I should have the robots' system working and will need to incorporate it into our FYP booth. This is probably also when we will know where the FYP show will take place (in school or in public), which will drastically change the booth system, so it is better to schedule this towards the end of the semester break.

Booth System Prototype (Software + Hardware) (3rd Dec – 14th Jan):
After conceptualization comes the prototype. I hope to have this done before the start of the semester, so that I will have a fully working prototype and the whole semester to polish my work, troubleshoot and fix bugs.

Software and Hardware Refinement (14th Jan – 1st April):
Refinement will take up most of the time, as the real problems usually emerge at this stage, when the shortcomings of the project become apparent. There may also be good suggestions and advice from people along the way, and this is the time to incorporate these wonderful suggestions into the project.

User Interaction Testing 2 (1st April – 8th April):
Testing of the final system to make sure all parts and components work as they should. If problems are found, there is at least still time to replace components.

FYP Students' Work Collection 2 (1st April – 1st May):
The final collection of (hopefully) all the students' work, to be added to the system as it is collected. At the very least, the basic information of every student uploaded to the FYP website will be included.

Aesthetic Refinement (8th April – 29th April):
Polishing and painting the 3 robots and making props/items for the booth (once all software and hardware is working).

Booth Preparation & Stylization (1st May – 8th May):
Production of prints for the booth, name cards/postcards and the like.

FYP Show Preparation (8th May – 10th May):
The actual preparation of the booth: bringing the robots down to the exhibition area and setting everything up.

FYP SHOW (10th May – 20th May):
Make sure the show runs smoothly, with on-site repairs if needed.

 

CATEGORY

FYP IDEA: To create a system that benefits the FYP students and gives guests an improved visiting experience.

Research areas:

Components (pricing, compatibility, functionality, component sponsorship (especially for the battery))

Software (platform research + Udemy courses)

Movement (sensors, moving system + hardware). The main movement calculations should be done on the booth computer and transmitted to the robots due to power constraints (the more computation a robot runs, the more power it draws).

 

 

FYP General Direction 2: Target Segmentation + Initial Approaches.

After the previous post and presenting it in class, Kristy asked me what group of people I would like to target for the direction of the FYP. I had not really thought about that before, but after a week of research and further thinking about the constraints I will probably face during the FYP, I came up with a few criteria for choosing my target users, so as to segment the broad differences among people around the world.

Geographic:

Firstly, people from developed countries. Because I currently live in Singapore, which is relatively advanced, my cultural upbringing and daily life consist mostly of what a citizen of a developed country would experience. Since I do not have sufficient understanding of contexts I am not used to, and could potentially do more harm than good there, targeting people from the first world means I can readily do the research and get the feedback I need.

Next, among the developed countries, my interest lies in the Asian ones, more specifically Japan, South Korea, urban China, Taiwan and Singapore, as I am more inclined towards the cultural aspects of these countries. Moreover, they are relatively cheaper for me to visit if I go there to do my research (there's a possibility that I might).

Socio-demographic:

For the FYP, I would like to produce something that could help the lives of students, because students are relatively similar across the countries I am interested in. More specifically, I would like to help students aged 21 to 28, primarily because most of my friends fall within this category. This is also the age group that starts to be legally labelled as adults, although few of these students would see themselves as one (including me).

Personality:

Mainly someone who likes new technology, has a short attention span, and is open to change. This is because I would like the FYP to produce something that taps into the current technological ecosystem, like the smartphone or computer. The short attention span of millennials seems like a negative trait, but I think there is potential to tap into this "negative" behaviour and turn it into a strength of my FYP, like how the fidget spinner swept the trends a year ago: something that makes no sense can still seem like a good product in the eyes of the beholder.

Initial Approaches :

For now, I've got 4 ideas that I think might have some potential.

First is to create something that helps students take notes, write, or draw, like a Wacom Cintiq for computer applications, or something specially designed for note-taking like the Wacom Bamboo Folio or Sony Digital Paper. However, the Sony Digital Paper and the Cintiq were designed for professional use and their price points are rather steep for students; the Bamboo Folio is positioned for note-takers and scribblers who want to write or draw on paper and still have a digital copy of it, at a reasonable price. These are wonderful technologies that help greatly in the transition from traditional paper to digital note-taking: millennials spent the first half of their lives taking notes on paper and the more recent half taking digital notes, and while not everyone prefers digital notes over physical ones, they prove to be a really good tool, as cloud sharing enhances what these students can do.

 

The second idea is to create something that helps manufacture prototypes better, be it cheaper or faster. I would like to create something like a 3D printer made specially for design students who need to produce physical work throughout their creative studies. Although the current 3D printers on the market generally suit the needs of almost everyone who needs one, there is still room for improvement in these wonderful machines, like the ability to print in different materials at a cheaper price. How about a 3D foam printer? A 3D glue printer? Maybe, maybe not.

 

My third idea is the one I have been considering the longest and am currently inclined towards: a human-powered electricity-generating machine that does not only generate electricity, but also generates a digital currency representing "clean energy points", which could be used in various digital functions like an in-game currency or exchanged for discount coupons to be used in real-life shops. While it may seem impossible that there is any market for this, it might actually be used in the future for a few reasons:
1) Users could use it as an emergency charger for their smartphone, whose battery usually lasts only half a day, and millennials cannot live without their phones.
2) Obesity rates are rising, partly due to a lack of exercise. These human-powered generators could be viewed as a workout that promotes better health, at least by a tiny bit. Gamers also rarely exercise, so if there were a game that encouraged them to play by exercising, there's a possibility they might (like what Pokemon Go did when it first launched: gamers walked more than they usually would because of the game).
3) Incentives would be given to users and participating shops. By generating electricity through these machines, users could not only charge their electronics but also save slightly on bills and earn discounts from shops. Participating shops would gain more customers, since appearing in the exchange app serves as advertising; moreover, by participating, a shop could earn the positive labels of "environmentally conscious" and "taking part in increasing clean energy production", which seems like something a medium-sized corporation would do as a branding effort.

 

My last idea for this post is the simplest, and might apply to everyone rather than just the targeted audience… I would also like to create playful machines, systems, or applications that make people smile through interaction. If I go along with this idea, I will probably create a diverse series of artefacts (digital or physical) that are funny, in the hope that people will carry the happy memory with them after the FYP show.

 

According to neuroscientist David Eagleman, there's a scientific reason why our first (few) ideas aren't usually our best: our brains are lazy, and the first idea we have is usually the handiest one rather than the best one. I understand that the ideas that seem appealing to me now might not be the best, so I am still coming up with new ideas to contest the ones I currently have, but for now, that is all I have.

 

FYP General Direction 1

For my initial general direction, I think there are many ways to approach what kind of FYP I would like to achieve in the end… Maybe instead of thinking about what I can possibly do for my FYP, it is better for me to think about this question: "What can my FYP do for me?"

GIF from https://giphy.com/gifs/thinking-SabSYEpsVh0di

By this time, I already understand my strengths and what I would love to do. I think I am relatively skilled in handicraft and building physical objects, and would love to build challenging mechanisms like the Obseleting Instrument I did for Interactive Devices in year 2.

But will doing what I am good at and what I like really help me in my FYP? It might look good in my portfolio, but it also means I will be staying in my comfort zone. I am not saying there is nothing to learn by doing so, but rather that I would be locking myself into related projects and narrowing down the infinite possibilities of venturing into unexplored territory that I did not previously know I would like. In the long run, doing something I already know I can achieve might not be that great after all…

If I think of the FYP as a "One Year Summary Exercise" for my 3 years here, showcasing what I've learnt in NTU, the result will be very different than if I think of it as a "One Year Opportunity to Learn and Explore". But the question is… how do I want to think about the FYP? And seriously, I don't know for now…

From another perspective, the FYP is a small label that will be attached to us upon graduation: we write this label ourselves, work hard to earn it, and once it is on us there is no way to change it. Of course, I am not saying the FYP will be super important for the rest of our lives. It is one year of preparation for a single moment of showcase: if it's good, good for us; if it's bad, forget about it, there are still many other things in life that matter more. Label yourself before they do.

So, back to the question, “What do I want the FYP to do for me?”
Now I've got one year to do whatever I want (I love Interactive Media~~),
this is the time to ask… What do I really want?
Something beautiful and nice-looking with no meaning? Definitely not my cup of tea.
Artistic pieces with deep meaning that people will ponder over? Maybe? But I am too shallow for that.
A purely visual/sensory project? Unless it helps the scientific/medical field… for art's sake alone, not interested.
Something easy?
It's easy to list what I don't want, but what do I really want?
I can only think of one right now….

I want to change the world, at least in a minuscule way, and leave some footsteps on the sandy beach (not literally) before I die. I would love to have an impact on people's lives in some way, somehow.

Big dreams require small steps. Maybe, just maybe, this FYP could be those small steps…

I have this A5 note with a motivational quote behind my desktop screen, which I wrote for myself a long time ago:

“Create something that I want but unavailable in the market, not something that would sell.”  – ZiFeng

Now, building on what I mentioned above, "One Year Summary Exercise" vs "One Year Opportunity to Learn and Explore", I'll nominate a new contender: "One Year Opportunity to Make an Impact/Benefit Others". Truthfully speaking, I like this last one best, and for now it is what I would like to achieve with the FYP.

All the celebration GIFs you never knew you needed by MrM3on

Did I change my question from
"What can I do for my FYP?"
to
"What can my FYP do for me?"
to
"What can my FYP do for others?"?
Yes I did.
And I quite like this direction for now. Unless I come up with a better one before the end of this semester, I will probably go down the path of "impacting/benefiting people" and think about what I could do along it. (I am already excited just thinking about ideas.)

 

 

Emergent visions: Kimchi and Chips (by Bao, Fabian and Zifeng)

LUNAR SURFACE (2014, 2015)

In collaboration with photographer Eunyoung Kim.

LUNAR SURFACE by Kimchi and Chips, In collaboration with photographer Eunyoung Kim.
Digital photo print 1500 x 1000mm, Live scanning installation [dimensions variable].
2014, 2015

A vertical flag of fabric is stroked by the wind, displaced by curves of air, swinging back and forth. The fabric is tracked by a 3D camera whilst a projector replays a response onto it according to its evolving shape. As it sweeps, it leaves a trail of light which draws a heavy fragile moon floating in space. The flag renders this moon from another reality, the silk surface acting as an intermediating manifold between reality and virtuality.

This artwork is an installation set up within the concrete chambers of Bucheon city Incinerator, a stagnant industrial processing plant decommissioned in 2010.

At locations within the building, the artists collaborated with photographer Eunyoung Kim to capture moments of the moon being birthed. Long exposure photography trades the dimension of time for a dimension of space, extruding the moon into existence on a set of photographic prints, capturing a painting enacted by the details of the wind.


Inspired by the two moons of Haruki Murakami's 1Q84 (a Japanese novel) and the flags of space travel, the artists present a portal into another existence where another moon orbits. This other place is made material by the fabric of the flag.

LIGHT BARRIER THIRD EDITION (2016)

Concave mirrors, Projection, Scanning, commissioned by the Asia Culture Center in Gwangju. The technology is enabled by Rulr, an open source graphical toolkit for calibrating spatial devices, created by Kimchi and Chips.

The installations present a semi-material mode of existence, materializing objects from light. Light Barrier Third Edition is a new installment in this series that exploits the confusion and non-conformities at the boundary between materials and non-materials, reality and illusion, and existence and absence.

The 6-minute sequence employs the motif of the circle to travel through themes of birth, death, and rebirth, helping shift the audience into the new mode of existence. The artists use the circle often in their works to evoke the fundamentals of materials and the external connection between life and death.

I think Kimchi and Chips put a lot of effort into producing Light Barrier and making it work. As Elliot Woods mentioned, when they started the project they were not earning a substantial amount of money, and the project was really expensive to build, as the concave mirrors and the structure needed to be custom built. This is also why the first edition of Light Barrier was much smaller in scale. As a matter of personal preference, though, I like the first edition more, because the light beams were more focused since fewer mirrors were used.

 

Halo (08 – 27 Jun 2018)

Kimchi and Chips will have a new outdoor installation, commissioned by Somerset House. Halo will take place in London from 08 – 27 Jun 2018. 200 mirrors will reflect sunlight, and together with the mist of the fountains in the courtyard, a shape of light will be 'drawn' and suspended in the air. The mirrors are programmed to react to the direction of sunlight.

This reminds me of our Spectra light show at MBS, which uses light projection and water sprays to create a sort of holographic effect, except that it is done at night.
Kimchi and Chips really make use of technology by looking at the physics behind the machine. The states of semi-materiality they create are as fascinating as any wonder of nature we could encounter, and they make us immediately want to make sense of and grasp the seemingly new material, only to discover it is nothing but light.

 

Interactive Device Presentation – Unconventional Musical Instruments.

The file size of this presentation was too large to upload, so I screen-recorded the presentation and uploaded it to YouTube.

I chose to narrow my scope to just unconventional musical instruments, mainly because it is more interesting and fits the time limit nicely.

Robotics Presentation – Biologically Inspired Robots.

The field of bio-inspired robots is very wide, and I think we benefit more from understanding the different ways we can learn from nature than from narrowing the scope and going deep into one subdivision of bio-inspired robots.

The file size of this presentation is also too large, so I screen-recorded it in presenter view.

 

Device of the week 4 – GoBone

GoBone is a smart bone designed for dogs and puppies. It is available on Amazon at $200 (currently out of stock) and previously raised $180k in its Kickstarter campaign.

KickStarter Campaign

Amazon

Basically, it is an interactive, automatic dog entertainer that rolls around like a remote-controlled car while the dog chases it like a cat chasing a mouse. The wheels can also store treats for the dog to eat, and according to the maker, the user can apparently rub peanut butter over the shell for the dog to lick… which seems like a bad idea to me: think of the peanut butter smeared all over the house and the mess it would make.

Looking at the process and timeline, developing this device seems really doable even by our standards, since it started its development with an Arduino Uno and two DC gearbox motors that are available from China at around a dollar each. I happen to have the same gearbox, which I purchased to test my final project but did not use in the end.

The connection to the phone and developing an app for it seem quite hard for us, but the technology in the GoBone itself seems relatively simple: receive inputs from the phone, turn motor A and motor B in the same direction to move forwards or backwards, and turn the two motors in opposite directions to spin the dog bone.
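To illustrate how simple that drive logic is, here is a rough sketch of the idea, not the GoBone's actual firmware: an Arduino Uno driving the two gearbox motors through a generic H-bridge driver, with a single character over Serial standing in for the phone command. The pin numbers, speeds and command letters are all assumptions.

```cpp
// Sketch of two-motor drive logic: same direction = roll, opposite = spin.
// Pins assume a driver that accepts PWM directly on its input pins.

const int A_IN1 = 5, A_IN2 = 6;   // motor A inputs (PWM-capable pins)
const int B_IN1 = 9, B_IN2 = 10;  // motor B inputs (PWM-capable pins)

void driveMotor(int in1, int in2, int speed) {
  // speed > 0: forward, speed < 0: reverse, 0: stop
  if (speed >= 0) {
    analogWrite(in1, speed);
    analogWrite(in2, 0);
  } else {
    analogWrite(in1, 0);
    analogWrite(in2, -speed);
  }
}

void setup() {
  Serial.begin(9600);             // stand-in for the phone connection
}

void loop() {
  if (Serial.available()) {
    char cmd = Serial.read();
    if (cmd == 'f') {             // both motors same direction: roll forward
      driveMotor(A_IN1, A_IN2, 200);
      driveMotor(B_IN1, B_IN2, 200);
    } else if (cmd == 'b') {      // both motors reversed: roll backward
      driveMotor(A_IN1, A_IN2, -200);
      driveMotor(B_IN1, B_IN2, -200);
    } else if (cmd == 's') {      // opposite directions: spin the bone on the spot
      driveMotor(A_IN1, A_IN2, 200);
      driveMotor(B_IN1, B_IN2, -200);
    } else {                      // anything else: stop
      driveMotor(A_IN1, A_IN2, 0);
      driveMotor(B_IN1, B_IN2, 0);
    }
  }
}
```

Flipping the sign of one motor's speed is all it takes to go from rolling to spinning, which is why the mechanism needs so few parts.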

Although GoBone is said to be designed for dogs, I personally feel it is really designed to let humans be lazy. $200 for a "smart" dog bone is rather expensive, but I can understand that pet owners are more willing to spend on their pets, which makes them a good target audience to begin with.

 

 

 

Telepathic Stroll Final Project with SuHwee, Makoto and Bao

I really like the outcome of our final project.

This is the link to our video wall: Video Wall on Third Space Network

This is a screen recording of the video wall on Third Space Network.

 

And this is what I made in Premiere Pro before the wall was built, to get a general feel of what it would look like and to work out the exact timing to start each video in the video wall.

 

Lastly, this was my individual Facebook broadcast. Just to re-emphasise the point: what makes our project really interesting is the video wall. Each individual broadcast doesn't seem like much, but when linked with the rest, they produce a piece that is much more than the sum of its parts.

Posted by ZiFeng Ong on Friday, 10 November 2017

we are the third space and first space performers.

The main purpose of our project planning was to begin with the end in mind: to present the four broadcasts in a video wall that is interesting for the viewer to watch. We become third space performers by having our broadcasts interact with each other in the third space. Our final presentation in class is also a live performance, and I am not talking about the fact that everyone watches what WAS BROADCAST LIVE, nor that the audience in class watches it in THE PRESENT. Rather, the timing of clicking start on each video must be precise to the millisecond and cannot be replicated if anyone plays it again by themselves later; Telepathic Stroll will look vastly different if the times at which the videos are started differ even slightly. So we are making a live performance by playing the four videos in the way we think gives the optimum viewing sensation.

How did we perfect our timing even without being able to see each other?

This is the reveal of our broadcasting secret! GET READY! CLEAN YOUR EARS (EYES) AND BE PREPARED! We call our system the… *insert drumroll in your mind*

“THE MASTER CONTROLLER”

so… what is the Master Controller?

Basically, it is a carefully crafted 23-minute soundtrack consisting of drum beats that work like a metronome for syncing our actions, plus careful instructions telling each performer what to do at which exact moment. Every performer has a personalised soundtrack, because we are doing different tasks at the same time.

How was it made?
It was made in Premiere Pro together with a web-based text-to-speech reader: I screen-recorded the reader as it spoke, then extracted the audio into Premiere Pro. The whole process of creating the soundtracks took more than 18 hours, after which I lost count of the time.

 

So how does it work exactly?
The basic idea is that the instructions prepare us and tell us what to do, like "Next will be half face on right screen. Half face on right screen in 3, 2, 1, DING!", and we execute our actions or change of camera view on the "DING!" so that all our actions synchronize.

It started at the meeting point, where we started the soundtrack at the same time, then counted down 7 minutes while walking to our respective starting points to wait for broadcasting to begin. I started broadcasting first, then Makoto, then SuHwee, then Bao, at 5-second intervals. This was done to give us control when playing the four broadcasts in class and achieve maximum synchronization: if we had started the broadcasts at exactly the same time, we could not have clicked play on all four videos in class, which would result in de-synchronization. After starting our broadcasts, we each filmed the environment for somewhere between 15 and 30 seconds, depending on when we started, to absorb the timing differences so that everything afterwards happened at exactly the same time.

Afterwards, we perform different actions at the same time; for example, when Bao and I enter the screen, Makoto and SuHwee point at us. This was done by giving different sets of instructions in the Master Command at the same time. Since each of us could only hear our own track, there was no confusion between individuals as long as the instructions were clear, although I was confused countless times during the production of the Master Command because I had to make every command clear and the timing perfect. It includes countdowns for certain actions but not for repeated ones. The hardest part to produce was the Scissors-Paper-Stone part of the broadcast, as everyone was doing a different action at the same time.

At the end of the Scissors-Paper-Stone segment, we all synchronised to the same action, paper; Bao and I were counting to 5, so we all ended up showing our palms.

Towards the end of the broadcast, much of the soundtrack specifically instructs us to pass the phone to a particular person; for example, Bao's track says "Pass the phone to ZiFeng. 3, 2, 1, DING!" and "Swap phones with Makoto. 3, 2, 1, DING!". This was done to avoid confusion during the broadcast, rather than saying "pass the phone to the left", which is quite ambiguous.

Overall, there was a lot of planning, and every detail had to be thought out carefully when I was making the tracks, because every small mistake would make our final presentation less good than it should be. I am lucky that I made only one mistake in the Master Command, concerning the direction SuHwee and Makoto would face while playing their Scissors-Paper-Stone, and we clarified it before our final broadcast.

On the actual day of our broadcast.

According to the weather forecast it would rain the whole week, so we did our final broadcast in the rain. Luckily for us, the rain wasn't too heavy and we could film in the drizzle. We started our broadcast dry and ended it wet.

We explored the Botanical Gardens for a bit and decided the path each of us would walk, and we walked it 3 times before the actual broadcast: the first time while deciding where to go and how it would look, the second when walking back, and the third right before we started the broadcast, as we walked 7 minutes to our starting locations so that our first 7 minutes of the broadcast would be us walking back along the same path with the same timing.

We did a latency test by broadcasting a timer for two minutes right before the broadcast; if there had been latency issues, we could have made minor adjustments by calibrating each individual Master Controller to the measured latency beforehand. Luckily for us, none of us were lagging and we had the best connection possible, so there was no need to re-calibrate the Master Controller. Also, just to mention: since Bao and I had already calibrated our connection for the earlier Telematic Stroll (NOT Telepathic Stroll), he didn't have to calibrate with us again, as I was doing it, so we filmed his phone's timer.

Some recap of Telepathic Stroll:

 

our project inspirations.

Telepathic Stroll was highly influenced by BOLD3RRR and our lesson on Adobe Connect.

At first glance, one can see the similarity between the Adobe Connect lesson and Telepathic Stroll: we were pointing at each other's broadcasts, merging faces and trying to (act like we) interact with each other in the video wall, just like the exercise we did during the Adobe Connect lesson.

This was the seed of our project: to live in the third space, interact with others, and perform by doing things that are only possible in the third space, like joining body parts and pointing at things that seem to be in the same space when they are actually not in the first space.

In our discussions before the final idea, we had many good ideas inspired by music videos like this:

In the end, we used only minimal passing of objects (our faces) in Telepathic Stroll, and we grew the idea from a magic-trick kind of performance into a more artistic kind of performance.

It feels strange that so many of my projects this semester were inspired by BOLD3RRR, not in terms of style, appearance or even presentation, but in terms of the preparation and extensive planning before a live broadcast. I have always liked good planning that leads to good execution, and BOLD3RRR really inspired me in this, especially in using pre-recorded tracks within a live broadcast so that they blend into its live aspect. This time, instead of layering pre-recorded tracks and images over the broadcast as in our drunken piece (which was also highly influenced by BOLD3RRR), we evolved the idea into using a pre-recorded track in the background to sync all of our movements, even without being able to see or hear each other.

Over the multiple class assignments in which we went Live on Facebook, we figured out many limitations of broadcasting:

  1. If you go Live, you can't watch someone else's Live unless you have another device, and if you can't watch others while broadcasting, two-way communication is practically impossible.
  2. Even with another device to watch on, there is a minimum delay of about 7 seconds.
  3. If you co-broadcast, you can see the others live, but if we are doing a performance and discuss through the co-broadcast, the viewers can see and hear it too, which is not the effect we want in our final project.
  4. Co-broadcasting drops the video quality.
  5. The connection can lag, which causes the "!" warning during the Live and skips in the recorded playback. This must be avoided by all means.

This is why doing individual broadcasts with careful planning and our Master Command system overcomes all the limitations we faced; with the calibration before the final broadcast, most of the problems were solved.

our idea of Social Broadcasting

In Telepathic Stroll, we are trying to present the idea of social broadcasting in a way that resembles real-world sociality: the collective social.

In our individual broadcasts, we could not know what the others were doing or feeling, yet when placed together in one location (both physical and third space), we could interact with each other and do amazing things that an individual can't do. In Telepathic Stroll, even while just doing our own tasks without knowing each other's status, by doing these individual tasks collaboratively we united as a group, forming a single unit. Every member is as important as the whole, and if any of us were removed, the entire structure would collapse into a Problematic Stroll, nothing more than an individual broadcast.

If that wasn’t Social, what is?

Team Telepathic Stroll: signing out.