Interactive Devices Final Project: Obsoleting Instruments Process 3 (Final).

Continue from my previous Process 2.

Again, I have progressed much further since that post, mainly in designing a workable electrical and mechanical system that fits into the telephone and in writing the Arduino code.

First, let's start with the final video!


Back into the Process

Since the previous post, where I had roughly built the belt system that drives the music card into the laser reader, I have added the motor to the system and tried it out. At this point it seemed to work; I thought all I needed was to slow the motor down and everything would be fine.

After I cut the hole, I proceeded to model the card slot. I took inspiration from ATMs so that users would have something familiar and know how to insert the Music Card without being instructed, since I assumed they had interacted with an ATM at some point in their lives.

After the modelling, which I am really happy with, I proceeded to print it.

Since it was looking good, I went ahead and made a nicer LED system for it by soldering 4 LEDs (3 on the bottom and one on the top).

Next, I epoxied the speaker onto the bottom of the front belt drive, since there is already a hole in the bottom shell for the speaker.

This is an 8-ohm, 0.5-watt speaker that will be plugged directly into the Arduino.

I also epoxied the 4 LEDs into the card slot to prevent them from sliding around.

And then came the soldering party.

It was at this point that I realized that if I reduced the speed of my DC motor to match the speed of the music, I wouldn't have enough torque to pull the card in.


After an afternoon of panicking, searching for an alternative motor, and even considering redesigning my whole belt system…

I opened up the current DC motor to see if I could modify it by changing the spur gears to a worm gear, which (after some research) would increase torque and lower speed. But this would require me to rebuild the whole gearbox, as well as remodel and reprint the entire front and back belt system.

Then I found that I have a longer DC motor with metal gears built in, and I tried to figure out whether I could incorporate its gearbox into my current system. That also turned out to be impractical, as its ratio is about 1:45 when I only need about 1:5 to 1:8; with it, the belt drive would run far too slowly. The same goes for this one, but at 1:250 it is even slower.
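A quick sanity check shows why those ratios don't work. This is a minimal sketch; the 9000 RPM free-running speed is my assumption for a typical small brushed motor, not a measured figure from the project:

```cpp
// Output speed of a gearbox, given the motor's free-running speed
// and a reduction ratio expressed as 1:ratio.
double outputRpm(double motorRpm, double ratio) {
    return motorRpm / ratio;
}

// Reduction ratio needed to hit a target output speed.
double requiredRatio(double motorRpm, double targetRpm) {
    return motorRpm / targetRpm;
}
```

With an assumed 9000 RPM motor, a 1:45 gearbox leaves only 200 RPM and 1:250 only 36 RPM, while a 1:5 to 1:8 reduction keeps the belt in a usable range.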

To solve this problem, I settled on a medium speed, which is faster than the song should be and still stalled about 30% of the time, and I removed the buttons (which detected the card when the user inserted it and triggered the motor to turn the belt), since they caused extra friction. I also jump-start the motor by spinning it at full speed for half a second to overcome the initial force required to get it moving.
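The jump-start can be sketched as a tiny helper whose return value a sketch would feed into `analogWrite()` each loop. The 500 ms kick is from the post, while the cruise duty of 140 is a placeholder I made up, not the project's actual value:

```cpp
// Full power for the first 500 ms to break static friction,
// then drop to the (slower) cruise duty that matches the song.
const unsigned long KICK_MS = 500;
const int KICK_PWM = 255;    // full duty
const int CRUISE_PWM = 140;  // placeholder: tune to the card speed

int beltPwm(unsigned long msSinceStart) {
    return (msSinceStart < KICK_MS) ? KICK_PWM : CRUISE_PWM;
}
```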

The messy configuration, components and wirings.

It took me some time to sort out these messy wiring and make sure that none of the wires interfere with the track that the Music card is going through.

After finding a workable speed for the sound and fixing the sticking by removing the buttons…

After this, I put the majority of the code together.

I did not expect this to work so well, and I am really excited about it!

Towards the end of the project.

To make use of the original buttons on the phone, I figured out that the 12 buttons run on 2 different circuits, so I could simply solder them together and turn all 12 buttons into one: no matter which button the user presses, it registers as the same button press.

Because I cut off the Redial button on the phone to make space for my belt drive system, I epoxied the Redial button back onto the case, as there is no PCB supporting it anymore.

Some may wonder how I made the Music Cards…

I copied a few from online, like Demons by Imagine Dragons, Hedwig's Theme from Harry Potter, and the Pokémon theme song. These were labeled on the cards; the unlabeled ones are tunes I composed myself. Since I have no music background, I worked by trial and error to give them a tune.

This was screen-recorded while I was composing my 4th tune for this project:

After it was completed, I took a screenshot and imported it into Illustrator to trace it into the card layout which I made.

And this is how the cards were made.

Laser-rastered and cut at school on 2 mm acrylic.

AND how about the voice commands in 7 different accents?

Well, this was relatively simple: I just typed whatever I wanted into a web-based text-to-speech reader, had it read the text out in different accents, edited the recordings in Premiere Pro to cut them to exactly the same length (9 seconds), and put them onto the SD card in the Obsoleting Instrument's MP3 decoder.

I really like the Japanese and Korean accents; they're really funny!

Why did I make it speak in different accents? It was to engage users and make them feel like there was really life in the system, as if they called or received a call from a real person. For example, if they discussed it with a friend and the friend said they heard an Indian accent while they themselves heard a British accent, they might want to try the Obsoleting Instrument a few more times. The accents are there to add variables to the system.


In Conclusion

Throughout this project, I've learnt many things, like how to model objects in Tinkercad and take measurements properly. There were always failures in everything I modeled before it worked, and this is why 3D printing is a good prototyping process: I printed a part and tested it to see whether it worked; if it didn't, I shaved off some material to see if it would fit, and if it did, I took new measurements for the edited model.

I am really glad that this many pieces worked well together, and this was the biggest challenge. Since there are so many components working together (electrical and mechanical), if even one part had failed, it would not work as well as it does now. So I consider myself really lucky that the parts happened to work well even with misalignments everywhere.

Also, starting with a telephone case and scaling everything to fit into it was a real challenge, especially at the beginning, when I could not measure how big the interior was and could only guess and make some test prints to try it out.

In this project, I realized that for a project requiring multiple fields of knowledge, like mechanical and electrical, it was better that I did not know how hard it would be. If I had known that every part of the project would be something I didn't know, I would have been too afraid to jump into it. I did something, realized it didn't work, found a solution to that single problem, continued working, and faced the next problem. Solving and learning one problem at a time led me to the completion of the project.

Now that I have completed the project and am looking back, Obsoleting Instrument is really a complicated project as a whole. But thinking about it, I am just putting many small systems into one project: using one laser diode and a photoresistor as a switch, playing a tune when triggered, a physical button to sense whether the phone was picked up, using a relay to control circuits of different voltages, running two DC motors at the same time, and so on. Obsoleting Instrument is just a collection of small systems, which I personally think is what made my journey through this project really interesting, because I explored the basics of these components and learnt a whole lot through them.

Robotics – Final Project Part 5 (Final) – The Curious & Timid Turtle.

Continuing from part 4 of this project's updates, this is the final part, which leads to completion. It is rather long, as many things have been done in the past 2 weeks: getting the power supply, modelling, testing, painting, assembling, and coding.

First, the final video of The Curious & Timid Turtle.

And then, into the process….


At the end of the last post, I was waiting for the power supply to arrive, and it did during these 2 weeks, so I tested it out. The one I bought is an AC-DC converter rated at 6V 20A when all I needed was 6V 8A; I decided to buy the higher-amperage one in case I need more current in the future, and I could also share the supply with my Interactive Devices project during the end-of-semester show. I did some wiring afterwards.

After the wiring, I did a power test: it could easily run 8 servos simultaneously with nothing overheating, which is great news for me!



Since the last post, I decided to change the system in the legs to save space and shorten the overall length by stacking a smaller MG90S on the MG966R to act as a lever system that pulls and controls the legs.
After testing this system and seeing that it seemed to work, I merged it with the turtle leg I had modeled in ZBrush. I didn't know ZBrush before this project, and it took a long time just to model the shell, the legs and the feet.

I merged the leg with the previous "rod-like" test leg because I already had the dimensions there, so I just needed to scale the ZBrush-modeled leg accordingly to fit the "rod-like" leg.


Changing the entire base layout to reduce size and increase EFFICIENCY for the back legs.

At this point, I was wondering whether I should change the MG966R (metal gears) to the SG-5010 (plastic gears) because of the weight issues I might face later after adding the shell. So I weighed the motors and decided to change the back legs to the SG-5010 (but I changed them back to the MG966R one day before submission because the internal gears came loose in the SG-5010).

Major changes were made to the whole layout of the base for various reasons: the back legs move differently from the front ones, and since my project is basically a turtle with a round shell instead of a flat one, it made more sense to use a smaller but taller layout rather than a flatter, wider one to make use of all the space within the shell.

This was pretty much the final base before I added the mount for the small back-leg servo and the servo driver mount, which is attached using screws.

Zbrush Modelling Nightmare Funfair

Since my copy of ZBrush was cracked, it crashed rather often, and I had to redo the same thing over and over if I forgot to save frequently; hence it took quite long to model anything, but oh well~~

As for the shell… it took so much time and so many crashes to get it right, because the shell cannot be too thin (it would be unprintable) or too thick (too heavy), and I couldn't find a function that would let me see the thickness (like reducing the opacity of the material to see through it). So every time I wanted to check the thickness, I had to export an STL and import it into Tinkercad.

Every time I made major changes to the shell, I had to import it into Tinkercad to check the thickness and shape.

The model appears.

Nothing is more satisfying than removing the support in one whole chunk (I did a lot of cutting before this video so I could pluck it out in one piece).

This is the linear slider and an aluminium rod for the head system.

Printing for the base shell:

Printing for the top shell:

And then the finishing(PART1)of the model.


The Final mechanism of the turtle

Testing out the head slider to mark the length I need it to move, and cutting it.

The mechanism to slide the head that I used after cutting the rod to almost the size I needed.

This is the final mechanism for the head, after I printed a small piece to prevent the wire from tilting too much when pushing the rod.

The head could be pushed and pulled out nicely even before adding the string that controls the tilt of the head.

Metal rods were epoxied into the head to tie the elastic thread that controls the tilt of the head.

And an elastic thread was added to counter the tension created by the elastic thread that turns the head.

And then the finishing(PART2)of the model.

The final test of the turtle before I finally started coding it. The head uses elastic thread because it extends forward and retracts; since I didn't want anything loose that might interfere with the shoulder servo motor, I chose elastic thread so the thread won't go slack when the head is retracted.

This is the almost-completed sequence of actions. The turtle's movement is quite restricted by the shell: the back legs are unable to push the turtle forward because of the shell's restriction as well as the weaker servo (MG90S) responsible for the forward and backward thrust, while the MG966R is strong enough to lift the turtle up. So the turtle can move up and down, but not walk.

After this, I added a few lines of code to substitute the button press with a sound sensor to make the turtle more interactive, plus a few more actions that made it look more timid (the head peeks out to check the environment before the whole body comes out).
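The button-to-sound-sensor swap boils down to a spike detector. Here is a minimal sketch of that idea; the baseline and margin values are assumptions to tune, and a real sketch would feed it readings from `analogRead()`:

```cpp
// Fires when a reading jumps well above the running ambient level.
struct SoundTrigger {
    int baseline;  // running estimate of ambient noise (ADC units)
    int margin;    // how far above ambient counts as a loud noise

    bool update(int reading) {
        bool fired = (reading - baseline) > margin;
        if (!fired) {
            // Slowly track the ambient level while it's quiet.
            baseline += (reading - baseline) / 8;
        }
        return fired;
    }
};
```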

In conclusion:

Overall, I really liked this project, although during the process I kind of regretted trying to make a turtle, because the shell caused a lot of layout and mobility problems, and I kept thinking that if I had made something without a shell, it would have been so much easier. But the reason I wanted to make a turtle in the first place is that the shell naturally hides all the components that would break the user's perception that this is a turtle rather than a robot, and the turtle is my spirit animal. Now that I have finished the project, I am really glad that I stuck to my initial idea of making a turtle and persevered through all the problems I faced (mainly hardware and mechanical problems, for which I changed my system and design many times).

This is what made it into my final turtle, with many components having been revised into their next versions.

With more time, I am sure I could code the turtle to be scared back into its shell by another sudden loud noise. I tried multiple times to change the delays in my code to millis()-based timing, but it performed every action in the sequence twice, so I stuck with delay() for now; that prevents me from writing the activation code that uses sound as a trigger, since the sketch is stuck inside a delay loop. Still, it looks really nice even with only one movement sequence, and I am glad people thought the turtle was cute when they saw it.
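For what it's worth, the usual fix for steps firing twice when converting delay() to millis() is to store each step's start time and advance exactly once per expiry. A minimal sketch of that pattern (not the project's actual code):

```cpp
// Non-blocking action sequence. Call update() every loop() with the
// current millis(); it returns the index of the step that should be
// running, or -1 once the sequence has finished. Resetting stepStart
// when a step expires is what stops a step from firing twice.
struct Sequence {
    const unsigned long* durations;  // ms per step
    int count;
    int step;
    unsigned long stepStart;
    bool done;

    Sequence(const unsigned long* d, int n)
        : durations(d), count(n), step(0), stepStart(0), done(false) {}

    int update(unsigned long now) {
        if (done) return -1;
        if (now - stepStart >= durations[step]) {
            stepStart = now;
            if (++step >= count) { done = true; return -1; }
        }
        return step;
    }
};
```

Because loop() stays free between steps, a loud-noise check could run at any moment, which is exactly what a delay() chain prevents.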


Interactive Device Presentation – Unconventional Musical Instruments.

The file size of this presentation was too large to be uploaded, so I screen-recorded the presentation and uploaded it to YouTube.

I chose to narrow my scope down to just unconventional musical instruments, mainly because it is more interesting and fits the time limit nicely.

Robotics Presentation – Biologically Inspired Robots.

The field of bio-inspired robots is too wide, and I think we will benefit more from understanding the different ways we can learn from nature rather than narrowing the scope down and going deep into one subdivision of bio-inspired robots.

The file size of the presentation was too large, so I screen-recorded it in presenter's view.


Device of the week 4 – GoBone

GoBone is a smart bone designed for dogs and puppies. It is currently listed on Amazon at $200 (though out of stock) and raised $180k in its Kickstarter campaign.

KickStarter Campaign


Basically, it is an interactive automatic dog entertainer that rolls around like a remote-controlled car, with the dog chasing it like a cat chasing a mouse. The wheels can also store treats for the dog to eat, and according to the maker, the user can apparently rub peanut butter over the shell for the dog to lick… which seems like a bad idea to me: think of peanut butter being smeared all over the house and the mess it would make.

Looking at the process and timeline for developing this device, it seems really doable even at our level, since it started its development on an Arduino Uno with two DC gear motors available from China at around a dollar each; I happen to have the same gearbox, which I purchased to test my final project but did not use in the end.

The connection to the phone and developing an app for it would be quite hard for us, but the technology in the GoBone itself seems relatively simple: receive input from the phone, turn motor A and motor B in the same direction to move forward or backward, and turn the motors in opposite directions to spin the dog bone.
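That drive logic is the classic differential-drive mapping. A rough sketch of it (the GoBone's real firmware and phone protocol aren't public, so the command letters here are purely illustrative):

```cpp
// Direction for each motor: +1 forward, -1 reverse, 0 stop.
struct MotorDirs { int a; int b; };

MotorDirs drive(char cmd) {
    switch (cmd) {
        case 'F': return { +1, +1 };  // both forward: roll forward
        case 'B': return { -1, -1 };  // both reverse: roll backward
        case 'L': return { -1, +1 };  // opposite: spin in place
        case 'R': return { +1, -1 };
        default:  return { 0, 0 };    // anything else: stop
    }
}
```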

Although GoBone is said to be designed for dogs, I personally feel it is really designed to let humans be lazy. And $200 for a "smart" dog bone is rather expensive, but I can understand that pet owners are more willing to spend on their pets, which makes them a good target audience to begin with.




Telepathic Stroll Final Project with SuHwee, Makoto and Bao

I really like the outcome of our Final Project very much.

This is the link to our video wall: Video Wall on Third Space Network.

This is a screen recording of the video wall on Third Space Network.


And this is what I did in Premiere Pro before the wall was made, to get a general feel of what it would look like and to get the exact timing for starting each video in the video wall.


Lastly, this was my individual Facebook broadcast. Just to re-emphasize the point: what made our project really interesting was the video wall. Each individual broadcast doesn't seem like much, but when linked with the rest, together they produce a piece that is much more than itself.

Posted by ZiFeng Ong on Friday, 10 November 2017

We are the third space and first space performers.

The main purpose of our project planning was to begin with the end in mind: to present the four broadcasts in a video wall that would be interesting for viewers to watch. We would be the third-space performers by having our broadcasts interact with each other in the third space. Our final presentation in class would be a live performance, and I am not talking about the fact that everyone would watch what WAS BROADCAST LIVE, nor that all the audience in class would be watching it in THE PRESENT. Rather, the timing of clicking start on each video must be precise to the millisecond and cannot be replicated when anyone plays it again later, as Telepathic Stroll will look vastly different if the times at which each video starts are slightly off. So we made a live performance by playing the four videos in the way we think gives the optimum viewing sensation.

How did we perfect our timing even without being able to see each other?

This is the reveal of our broadcasting secret! GET READY! CLEAN YOUR EARS (EYES) AND BE PREPARED! We called our system the… *insert drumroll in your mind*


so… what is the Master Controller?

Basically, it is a carefully crafted 23-minute soundtrack consisting of drum beats that work like a metronome for syncing our actions, plus careful instructions telling each performer what to do at each exact moment. Every performer has a personalized soundtrack, because we are doing different tasks at the same time.

How was it made?
It was made in Premiere Pro with a web-based text-to-speech reader; I screen-recorded the reader while it was reading and extracted the sound into Premiere Pro. The whole process of creating the soundtracks took more than 18 hours, and I lost count of the time afterwards.


So how does it work exactly?
The basic idea is that the instructions prepare us and tell us what to do, like "Next will be half face on right screen. Half face on right screen in 3, 2, 1, DING!", and we execute our actions or camera-view changes on the "DING!" so that all our actions synchronize.

It started at the meeting point, where we began the soundtrack at the same time and counted down 7 minutes while walking to our respective starting points to wait for broadcasting to begin. First I started broadcasting, then Makoto, then SuHwee, and then Bao, at 5-second intervals. This was done to give us control when playing the four broadcasts in class so we could achieve maximum synchronization: if we had started the broadcasts at exactly the same time, we could not have clicked play on all four videos in class at once, which would have resulted in de-synchronization. After starting the broadcast, we filmed the environment, which absorbed the difference in start timing (anywhere from 30 seconds down to 15 seconds, depending on when each broadcast started) so that everything afterwards happened at exactly the same time.

Afterwards, we performed different actions at the same time. For example, when Bao and I entered the screen, Makoto and SuHwee would point at us; this was done by giving different sets of instructions in the Master Command at the same moment. Since each of us could only hear our own track, there was no confusion between individuals, and the instructions were clear, although I myself was confused countless times while producing the Master Command, because I had to make every command clear and the timing perfect, including countdowns for certain actions but not for repeated ones. The hardest part to produce was the Scissors-Paper-Stone part of the broadcast, as everyone was doing different actions at the same time.

At the end of the Scissors-Paper-Stone segment, we all synchronized to the same action of "paper": Bao and I were counting on 5, so we were all showing our palms.

Towards the end of the broadcast, many of our soundtracks specifically instructed us to pass the phone to a particular person; for example, Bao's would say "Pass the phone to ZiFeng. 3, 2, 1, DING!" and "Swap phones with Makoto. 3, 2, 1, DING!". This was done to avoid confusion during broadcasting, rather than saying "pass the phone to the left", which is quite ambiguous.

Overall, there was a lot of planning, and every detail had to be thought out carefully when I was making the tracks, because every small mistake would make our final presentation worse than it should be. I am lucky that I made only one mistake in the Master Command, which was the direction in which SuHwee and Makoto would play their Scissors-Paper-Stone, and we clarified it before our final broadcast.

On the actual day of our broadcast.

According to the weather forecast, it would rain all week, so we did our final broadcast in the rain. Luckily for us, the rain wasn't too heavy and we could film in the drizzle. We started our broadcast dry, and we ended it wet.

We explored the Botanical Gardens for a bit and decided on the path each of us would walk, and we walked it 3 times before the actual broadcast: the first time when deciding where to go and how it would look, the second when we walked back, and the third right before we started the broadcast, as we walked 7 minutes to our starting locations so that our first 7 minutes of the broadcast would be us walking back along the same path with the same timing.

We did a latency test by broadcasting a timer for two minutes right before the broadcast, so that if there were latency issues we could make minor adjustments by calibrating each individual Master Controller to the measured latency beforehand. Luckily for us, none of us were lagging and we had the best connection possible, so there was no need to recalibrate the Master Controller. Also, just to mention: since Bao and I had already calibrated our connection during the previous Telematic Stroll (NOT Telepathic Stroll), he did not have to calibrate with us again, so we filmed his phone's timer.

Some recap of Telepathic Stroll:


Our project inspirations.

Telepathic Stroll was highly influenced by BOLD3RRR and our lesson on Adobe Connect.

At first glance, one can see the similarity between the Adobe Connect lesson and Telepathic Stroll: we pointed at each other's broadcasts, merged faces, and tried to (act out) interactions with each other in the video wall, just like the exercise we did during the Adobe Connect lesson.

This was the seed of our project: to live in the third space and interact with others, and to perform by doing things that are only possible in the third space, like joining body parts and pointing at things that seem to be in the same space when they are actually not in the first space.

In our discussions before the final idea, we had many good ideas inspired by music videos like this:

In the end, we used only minimal passing of objects (our faces) in Telepathic Stroll, and we grew our idea from a magic-trick kind of performance into an artistic kind of performance.

It feels really strange that many of my projects this semester were inspired by BOLD3RRR, not in style, appearance, or even presentation, but in the preparation and extensive planning before the live broadcast. I have always liked good planning that leads to good execution, and BOLD3RRR really inspired me in this, especially in using pre-recorded tracks and incorporating them into a live broadcast so that they blend into its live aspect. This time, instead of laying pre-recorded tracks and images over the broadcast as in our drunken piece (which was also highly influenced by BOLD3RRR), we evolved the idea into using a pre-recorded track in the background to sync all of our movements, even without being able to see or hear each other.

Over the multiple class assignments in which we went live on Facebook, we figured out many limitations of broadcasting:

  1. If you go live, you can't watch others' live streams unless you have another device, and if you can't watch others while broadcasting, two-way communication is practically impossible.
  2. Even with another device to watch on, there is a minimum delay of about 7 seconds.
  3. If you co-broadcast, you can see the others live, but if we are doing a performance and discuss it through the co-broadcast, the viewers can see and hear the discussion too, and this is not the effect we want in our final project.
  4. Co-broadcasting drops the quality.
  5. Possible lag in the connection causes the "!" during the live stream and skips in the recorded playback. This must be overcome by all means.

This is why doing individual broadcasts with careful planning and our Master Command system overcame all of the limitations we faced, and with the calibration before the final broadcast, most of the problems were solved.

Our idea of Social Broadcasting

In Telepathic Stroll, we tried to present the idea of social broadcasting in a way that resembles real-world social life: the collective social.

In our individual broadcasts, we could not know what the others were doing or feeling, yet when we were placed together in one location (both the physical and the third space), we could interact with each other and do amazing things that an individual can't do. In Telepathic Stroll, even while just doing our own tasks without knowing each other's status, by performing these individual tasks collaboratively we united as a group, forming a single unit. Every member is as important as the whole, and if any one of us were removed, the entire structure would collapse into a Problematic Stroll, nothing more than an individual broadcast.

If that wasn’t Social, what is?

Team Telepathic Stroll: signing out.



Telematic Stroll to the Sunrise.

First of all, we did not want to use Facebook's co-broadcasting function, as the goal of this Telematic Stroll was to test the system for our final project (there will be four broadcasts at the same time, and none of us can see what the others are doing, yet we will form a piece together). ANNDD MANNNNNNN~~ THIS WORKS SO MUCH BETTER THAN EXPECTED!!

Telematic stroll

Posted by ZiFeng Ong on Monday, 6 November 2017

This was the broadcast I did. Viewed individually, it doesn't seem like much; it's just a random walk-around video that isn't impressive at all.

But when linked with Bao's broadcast… things get a little more interesting.


  1. Neither broadcast was edited in any way except for putting them side by side; if we create a video wall and play them directly, it will be exactly the same.
  2. The videos were done on the spot through Facebook Live without any feedback from the other broadcaster available to either of us; we only knew what we ourselves were doing, unable to see the other party, and even if we could, there would be a 7-second delay between broadcasting and watching live.
  3. In conclusion, this is magic.

But how are we doing it? It's a secret for now, and we shall keep it until our final project. For now, we are still improving our system, and you should just take it as MAGIC. TADAA!! A sense of wonder is what we wanted the viewer to feel.

This was our one and only try. Most of the time we were uncoordinated, and we did not discuss what to film beforehand; there were mistakes everywhere throughout the broadcast, but we made some of the same mistakes at the same time, which made them seem intentional and even more magical.

To point out some of the wonderful MAGIC we did:

In this Telematic Stroll, we found out what problems occur during live broadcasting and will improve on them for the final project. Also, it had been a really long time since I woke up this early.

When I left the house, the sky was still dark, and it reminded me of my future.


I really like this random shot taken before the stroll.

After the stroll, when the sun finally rose.

We checked the time for sunrise, which was stated as 6.47 a.m., but the sun actually came up only around 7 a.m., which is when we finished our Telematic Stroll.

Luckily for me, it only started to drizzle right after we finished the broadcast. And did I mention that we were supposed to do this on Monday morning? We woke up at 5 a.m. and it was raining, so we postponed our stroll; it was really lucky for us that it wasn't raining on Tuesday morning, even though the weather forecast said it would.

Finally, I am REALLY impressed now that I have put the videos next to each other: they work really well together, there are improvements that could still be made, and we all learnt from it.



Networked Conversations With Second Front Review.

This was an eye-opening networked conversation with five of Second Front's members: Bibbe Hansen, Liz Solo, Doug Jarvis, Patrick Lichty and Jeremy Owen Turner. This conversation was HIGHLY advertised on Facebook by Prof Randall, and I found it really funny that one of the artists (Patrick Lichty) commented that they sensed danger in it, while Liz Solo and Bibbe just found that comment funny and went along with it.

During the live stream, there was a technical error when Prof Randall disconnected as host and many others were disconnected with him. I was lucky to remain in the chat room with a few of the artists still streaming; they were really excited about the technical error, laughing out loud and being thrilled by the fact that "everywhere they go, they crash the system." JUST WHAT HAVE THEY DONE IN THE PAST? I am also really amused by Bibbe and her animated personality; she feels like an 8-year-old kid in a 60-year-old body who gets excited fairly easily.

It is really interesting to see that the artists viewed their Second Life characters (avatars) sometimes as themselves and sometimes as separate entities: they would describe what they did in their performances with "I" but also gave third-person introductions to their avatars during the introductions in the networked conversation. I guess they see the body of the avatar as a separate being that shares the psychological, intellectual, emotional, conscious, and all the other intangible aspects that make us human. This really seems similar to the movie Avatar (2009), where the avatar shares everything with the main character except the physical body; and when Second Front log out of Second Life, their characters become inanimate and lose their "body" until they log in again.

(Image: the theatrical release poster for James Cameron’s Avatar.)

They also mentioned identity within Second Life. In a past work, they interviewed people in real life and recreated the information they gathered directly inside Second Life. They would also change their Avatars’ appearance to replicate another member’s Avatar, wear a tag with the name of the character they copied, and act as them for some time. Adding to this play with identity, Second Front found it really interesting to place unsuspecting real-life people, like Andy Warhol, into their performances. As mentioned,

“There was a Fluidity of Who was Where and Who Was What In Second Life”

Anyone could be anyone at any place; there was no clear boundary to what you could be in Second Life. This is what made Second Life so intriguing, to the point that people lost themselves in the game.


Throughout the Networked Conversation, Second Front talked about topics that really got me thinking about third-space performance, especially as they move towards VR. Patrick Lichty mentioned the possibility that neuro-plasticity might take hold: if we get too engrossed in Virtual Reality, our brains may be rewired to evolve along with the technology. He also briefly mentioned someone I believe was Neil Harbisson (my ear-to-brain-to-hand function was not fast enough to write it down, but after I googled, I think it was him), whose brain was rewired because he uses a sense we would otherwise not use in that context. This is why the VR headset may someday affect our visual cortex – and by using VR, aren’t we just putting a screen to the third space right in front of our eyes, transporting us directly into it? The future will be amazing for sure.

Lastly, I would like to agree with a point from the Networked Conversation (I can’t remember, nor did I write down, who said it; these are not the exact words but my interpretation and summary) that

“Performance is all about the body. What if we take the body out? We still have emotions, we still have feelings.”

When we are in the Third Space, it is real, because everything that makes us human is there except the physical body. These panels (the Adobe Connect grids) are a great example of bringing people together through the Third Space.

If there is anything I learnt from the Networked Conversation, it is that the third space is a very new platform for humans to inhabit, and we are still in the process of integrating into it. It may take some time before our brains complete the rewiring that would make the third space a new human sense through this “evolved neuro-plasticity” – which might not even happen. But the possibilities are endless, so why not just learn from Second Front: do the things we all like, and enjoy the process of creating something, focusing not on the end result but on the enjoyment of the process.

Nonetheless, Second Front clearly enjoyed themselves and had fun in the Networked Conversation!


Robotic – Final Project Prototype Part 4 – Unlimited Revisions.

It has been nothing but testing and remodeling, testing and remodeling, ever since I went to Sim Lim to look for a battery 2 weeks ago.

On top of that, the battery could power no more than 3 servos at once, so I purchased an AC-DC 6V 10A converter online and have been waiting for its arrival since then.

While waiting, all I did was re-model almost everything again and again…. I also downloaded ZBrush – a 3D sculpting software – and learnt it just so I could sculpt the turtle’s appearance, since I have to learn it sooner or later anyway. Luckily I tried it now, because I discovered a lot of problems in my current structure when I placed it inside the shell I modeled afterwards.

After the last post, I built a bigger base mount for the turtle.

This was originally the rough full-size turtle base: Version 1, Version 2 with 2 arm motors attached, and Version 3 with no arm motors attached.

And I downloaded Zbrush and started to learn how to sculpt something in it and MANNNNN IT IS HARRDDDDD

And then I printed it out at a smaller scale to see how it feels.

And to check which layout should work.



And after I did more research on turtles, I realized that a turtle’s bone structure isn’t straight, so I tilted the motors to give the legs an angular tilt that lets them move more like a turtle’s.

Then I tested the structure of this configuration.

I also modeled an arm motor connector and found out that it was WAAAYYY too long to work properly. Another problem was that the tilt still wasn’t right, so I changed from a 45-degree tilt to a 22.5-degree tilt.

These are Versions 4 and 5 of the motor mount. I decided to make it higher so the arm can turn in the correct direction, and I also tried to add the middle servo, which will control the turtle’s head retraction and head/tail turning.

This is the Version 5 base with the Version 2 motor connector, which is much shorter.

Next, I further edited the turtle shell and upscaled it to see how it fits into my system; however, I found many problems here, like the fact that it doesn’t fit well. Since I can still make adjustments to the layout, I decided to use another method for the leg mechanism.

By combining this lever system straight into the leg.

The 4 generations of the motor connector.

This should be the mechanism I will use for the leg, I hope it works.

For now, I have to test them out again.

Final Project Rehearsal

Posted by ZiFeng Ong on Tuesday, 24 October 2017

In our final project rehearsal, we focused more on the grid appearance and the technical aspects of the overall feel; we also discussed and came to understand the flow through this rehearsal.


In our discussion, we decided on a few combinations, like what to film and how to film it. The passing of the phones is rather tricky because we are afraid of dropping a phone that doesn’t belong to us. There is a lot more fine-tuning to be done, but for now we will see what these 4 pieces look like together and decide which combination we will do.


This is the segment where we pass the phone to the next grid. We hope to achieve an effect that feels like the phone is moving in a straight line instead of being passed in a circle; we will only know the final effect when we view it in the grid.

And this one: a single video alone won’t have any effect, but we hope to achieve a panorama spin effect through it when the grid is out.

We are just as excited because we don’t know what the true effect of these rehearsals will be; we are waiting for a surprise, which may or may not be what we would like it to be. The first time we see the final outcome will be the first time our class sees it.