FYP Process 9: The Real Build, the long process of making the head.

After a long time of not updating, I've started to build the actual head for the robot using EVA foam, which was my first time using it. The reason I am using EVA foam is that it is lightweight yet firm, easy to cut and form into shapes, and joining parts only requires contact glue (other glues will suffice, but contact glue gives a better stick). Plus, I know that I will be building and conceptualizing at the same time, as I do not have any blueprint; every little detail on the robot will be an impromptu decision I make during the building process. The low price of EVA foam allows me to work more freely, so that I do not have to be afraid of making a mistake and paying a huge sum of money for it.

To start off, I converted the paper Husky head into an EVA model, using 10mm-thick EVA as the base layer.

Planning and cutting the paper pieces out to form a template, taking into account how much the EVA foam can bend so that the final product holds its shape (10mm-thick EVA foam bends drastically differently from paper, so I had to keep this in mind when forming the paper template).

The foam is cut with slanted edges to form angular joints with the pieces at the side; the curves were made by heating the EVA foam and bending it into place, and all connections were made with contact cement to make sure they last over time.

The blade needs to stay consistently sharp for a high-quality cut, so it was resharpened after every few cuts.

Test pieces were made and resin was applied to test the result and to strengthen the EVA foam, to make sure it lasts as long as possible.

To fully learn and understand the materials, the resin-coated test pieces were sprayed with black model paint to see how they would look under different variables (how the cut affects the outcome, how the resin thickness affects the appearance, and so on).

Details were then cut and added to the head, all from EVA foam, using a knife, contact cement and a hot air gun.

After the front details were added, I dissected the head to add a magnet system that will ease maintenance in the future, should it be required.

After which, eyes were added to test the appearance of the head. The eyes were specially installed in a way that gives an optical illusion that they are looking at the user no matter where the user is standing.

And then, more details were added to the front of the head.

The front was coated with more than 8 coats of resin and sanded, which helps the paint stick to the head. (This whole process took about 2 weeks, as the resin requires time to fully harden between layers; coating while preventing drips, and then sanding, is a labor-intensive process.)

Then the sanded head was airbrushed with primer (to help the paint stick to the head, and also to help me check the surface quality, like spotting small bumps).

After a few primer coats, with sanding between them, the head was masked and sprayed to create a clean and beautiful paint job.

For the back of the head, details were built with laser-cut parts, while LEDs were soldered and installed.

Then, like the front, details were drawn, cut out from EVA foam and stuck onto the back of the head with contact cement. The LEDs on the back were also tested to see which colour combination looks best.

As with the front, the back was resin-coated with many layers, then sanded and primed. Meanwhile, for the internal structure, speakers were soldered and installed inside the ears, with properly placed velcro to help with future maintenance.

For the front of the head, acrylic pieces were cut and formed with a hot air gun to provide a protective cover for the eyes, and two layers of tinted film were stuck on it to darken the overall feel and give the dog head more compelling eyes. (This greatly improves the quality of the eyes on camera, and I hope people will be taking photos/videos of it during the FYP show.)

After which, the dog head was masked and sprayed in many layers with model paint.

Towards the end, after the base colours were masked and sprayed, the corners were touched up with a small brush. Water decals from toy models were added to the head to give it a more interesting finish and complete the look. In my research, I found that small details are the key factor differentiating a normal artwork from an insanely impressive one.

Lastly, the head was sealed off with two thick coats of Samurai lacquer spray paint and then airbrushed with a model-grade matte clear coat to remove the shine from the lacquer (which is too glossy and makes it look like a plastic toy). Small details like carbon-fiber vinyl were stuck to the sides of the head to finish it off and give the dog head a different texture.

The process of making the head from start to finish was long, but I learnt many skills. As this was my first time using EVA foam, everything was a good lesson; even small details like how to cut EVA foam properly were invaluable lessons for me. Throughout this process, I also learnt which materials could or could not go with each other, and how to get the paint to stick firmly onto the resin without peeling off.



FYP Soul – Why?

Why a robot? Why a guiding robot? Why a whole system, including a company and a backstory of how the robot came to the FYP exhibition?

First things first: why am I building a robot for FYP?

The answer is much more than just because I like it (and of course I do!)
Culturally, there are two opposing views of robots: a Western one, in which robots threaten us by stealing our jobs and eventually bringing us to annihilation, and a Japanese one, in which robots are seen as heroes that enhance the quality of life. Since the 16th century, after the invention of the karakuri puppet, the Japanese have enjoyed seeing things move automatically, and it is still really fascinating to see something move by itself today, as we unconsciously anthropomorphize the object. I personally think that robotics will be the next advancement for the world as our computation power increases exponentially; the only physical way we could put these newer technologies to good use is through something that uses technology and has a physical, tangible characteristic, just like a robot. Albeit the term robot is loosely used, the general idea is similar: a physical object that moves without a human, through a set of pre-determined protocols.

So, why specifically a guiding robot?

This is because I want to be of some use to our FYP batch. The guiding robot's main purpose is to serve just one function: to bring visitors to a student's booth, which will increase that student's exposure. Even if, throughout the whole show, my robot only manages to bring one visitor to one student's booth and the visitor enjoys it, I would consider my FYP a success, as I helped someone (visitor or student) experience the FYP show in a slightly better way.

How about a lost guiding robot?

For now, I will be building a lost guiding robot, which needs the visitor's help to locate the student's booth. It seems counter-intuitive to make a LOST guiding robot, as the worst thing a guiding robot could do is get lost; however, when I go the opposite way (metaphorically), the end result still serves the same function: a robot which guides (narratively, is guided by) the visitor to the booth. This way, the user's experience and interaction with the robot will be different, as they will feel like they are helping the poor robot to find and complete its task, and the visitor will feel a sense of duty and accomplishment when finding the booth.

How does this work?
All of this stems from the word "Altruism": the belief in or practice of disinterested and selfless concern for the well-being of others. In this case, sacrificing the user's own time to help a random robot (which, by logical thought, does not need help and does not have feelings; however, humans are complex and probably will not act purely on logic).
As helping others gives us a sense of purpose and satisfaction, I want to instill this idea into my project, so that users feel like they are really helping the robot and feel satisfaction when they complete the task (which in turn makes a happier visitor and a memorable experience for them).

Why a whole system including a company, a backstory of how the Robot came to FYP exhibition?

This is to adapt the power of fictional narrative to change people's attitudes towards social change (a robot in an FYP exhibition) by using narrative persuasion: a method that uses narrative transport to persuade us to change our minds and behavior, see the world differently, and put things into context, even when the story is fantastical.



Research to be done:
Interaction of human and robot
Social Robot
Programmed behavior
Slot machine reward system







Interactive spaces: Light and Darkness Ver2 Analog + Digital

The final Video:

During the process of adding the digital to the analog version, there were more failures than successes, so let me start with these failures (and the additional work I did which was not used in the end).

The unused Animations

Right after completing the analog version, I thought of projecting an animation from behind the candles onto a sheet of translucent paper stuck behind the candle shelf. I wanted to learn how to make an animation, so I asked my animator friends what program to use for a simple animation; they suggested Autodesk Sketchbook, so I downloaded it.


The Circuit that Works (or Didn't)

And then the fairy lights I bought from China arrived! I bought 220V fairy lights because they are cheaper, and I did not expect them to be this problematic to deal with, due to the dimming issue and also because 220V = risky.

The additions to my “Dark Room”

Coding is a nightmare

As there are 297 candles, the code to address each candle individually was simple but tedious. I am sure there are better ways to do things, but the downside of using Max MSP is that the exact function to do things the better way is really difficult to find. So my mindset was: "If I only know IF, I can still do an unlimited number of tasks; IF is as powerful as Hercules." So I IF'd my way through this project, literally. (Just to be clear, I did try to find better ways, and though I found them, in the end they either didn't work or crashed my Max MSP.)
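Since the patch itself was built in Max MSP, here is only an illustrative sketch in Python of the difference between the chain-of-IFs approach and a lookup-style approach; the candle names and `route_*` functions are entirely hypothetical:

```python
# Hypothetical illustration only; the real patch was Max MSP, not Python.
# The "IF" approach: one branch per candle (what I effectively did, 297 times).
def route_with_ifs(index):
    if index == 0:
        return "candle_0"
    elif index == 1:
        return "candle_1"
    elif index == 2:
        return "candle_2"
    # ...294 more branches in the real thing...
    else:
        return "unknown"

# The lookup approach: one table replaces all 297 branches.
CANDLES = [f"candle_{i}" for i in range(297)]

def route_with_table(index):
    return CANDLES[index] if 0 <= index < len(CANDLES) else "unknown"

print(route_with_table(42))  # candle_42
```

Both routes do the same job; the table version just doesn't grow with the candle count.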


HEPHAESTUS SYSTEM FYP Presentation slides + speaker notes

Google Drive Download Link 


FYP part 5- Hephaestus Systems Planning + Software learning + Modelling + Gantt Chart

To understand what to plan for, I would need to understand the nature of the project, as the tasks for completing a physical + mechanical project differ greatly from those for a virtual, screen-based project like games and visuals. There are more restrictions in a physical project than a virtual one, due to the laws of physics, materials and cost.

Money Problems:

As building a few robots will cost me quite some money, budgeting will be even more important than time planning. As for where the money comes from, I will probably save up by selling things online and doing work-study, and treat it like a commitment, because no one is forcing me to do anything; it is all my own resolution to fund my own project.
I have thought about asking for sponsorship, and that may even happen if I have to. (Especially for the batteries that I will be using in the robots: these little things must be of great quality due to safety issues, while a good, durable, high-capacity, low-weight battery costs about $500 and up each, and I would need at least 3 (excluding spares), which I am totally unable to afford.)

Over these few weeks, I have been learning Blender (a 3D modeling software) from scratch. It is really difficult to pick up, but I think the potential of Blender is far beyond what I need, so I will stick with learning this super useful program.

I followed a few tutorials and learnt the basics of Blender from YouTube; this was my first Blender experience building a 3D model.

I started learning by building a chest, as it has a similar shape to what I want to produce. After this, I used the skills I learnt here and applied them to my attempt at the R1 robot.

The overall shape of this is rather similar to the chest, so it took a while to get used to; however, after building it, I realized that I didn't know how to make the top of the robot, so I progressed to another tutorial.

And this was the overall shape that I made. I am pleased with it for a first-timer's effort, although it took me 2 days to get here. After this, I continued to build the details at the side and front…

Side view

Front view

And I decided to make it look like a production poster, so I rendered another isometric view to make it look legit for my presentation.

Blender also has an animation function, and I thought it would be really cool to learn it, so I went ahead and learnt it from an online tutorial and produced this.

After I felt this was good enough for the presentation, I tried to 3D print the model. It was then that I realized my model was full of mistakes: it only looked good, but in actual fact the surfaces were really badly made. So this 3D model's usefulness ended here. I will definitely be modeling everything again for the actual robots I will be building for this FYP, as this model doesn't work; however, it was a good learning experience, and I now understand that I need to build the model's surfaces properly.

The surface detail could not be printed due to a mistake I made during modeling, which created a non-solid (non-manifold) surface that is therefore not printable.

It was then attached to a small remote-control car as a proof of concept to be used during the presentation.

Now, the crowd favourite…..
Mr Gantt Chart!

I started the Gantt chart on the 1st of April 2018, as everything up to this point was research.

Within each task there are multiple small tasks that fall into the same category, and I will explain them, along with a short description of what each is about, here in this post.

Since my project will be physical + mechanical + technological, and I need to get students' FYP work early, it is really important to start execution early and work throughout the holiday, because building the actual robot and troubleshooting the system will take quite some time, and I have to ask all FYP students to submit their work to me really early to make everything work.

Research 1 (1st – 30th of April):
I think this is the most important part of the project; good research done here will greatly reduce my work in the future.
Research up to this point (16th April): similar existing products, potential parts, platforms for interaction, things I have to learn, inspiration from artistic works, parts price comparison, target market and segmentation.

Skills Acquisition (10th April – 8th July):
There is a lot of knowledge and many skills I still lack to complete this project. From my list of things to learn, I need to pick up a few of them to make sure my system can work. I also need to pick up 3D modelling skills, as my current knowledge is insufficient; in the past few days, I have started to learn Blender, which is a free 3D modelling software and is great for my project. Still, time is needed to hone these skills, hence the long period allocated to learning them.

Initial Purchases (25th April – 27th April):
One of the biggest ways to save money is to purchase parts from China, which takes weeks to arrive, so it is really important to research the required parts and buy them early to get the best result for the least money. Also, initial purchases are set for the 25th because I will be presenting on that day; if no major changes come out of it, I can only really decide what to buy after that.

R1 Prototype (Software and Hardware) (2nd May – 8th July):
This will go hand in hand with skills acquisition, as I need a goal for what exactly to learn; it is best to do while learning and learn while doing. R1 is the first robot of the set of 3 that I will be building. It will be the bare bones of the robot's basic functionality and act as confirmation of the general systems and parts required to build R2 and R3.

R1 Movement System Finalization (18th June – 29th June):
Movement is a really difficult task to achieve while being concerned about the safety of people and booths (it is really easy to make something move, but much harder to make it move without destroying things), so I gave myself more time to think about how I will achieve this.

Research 2 (20th June – 4th July):
When I think of robots, I think of Japan; maybe it's just me, since I was influenced by the robotic culture of Japan when I was young. So I will travel to Japan during this period to experience their advancements in robotics first hand. (Places I will visit: the National Museum of Emerging Science and Innovation (Miraikan), the Unicorn Gundam in Odaiba, the robots at Haneda Airport, Henn-na Hotel, and Robot Restaurant (not sure about this).)

R1 Movement Prototype (4th July – 23rd July):
I will start prototyping right after I am back from Japan, hopefully having seen first hand how their robots work.

R1+R2+R3 Concept Generation and Refinement (4th July – 27th July):
Since by this point I will already understand what parts R1 requires and have the measurements of parts like motors and screen sizes, I can think about exactly how each robot will look, as they will look different and have different functionality.

R1+R2+R3 Secondary Purchase (27th July – 29th July):
Knowing what parts each robot needs, I can finally purchase the basic parts for R2 and R3, plus the add-on functions for all 3 robots (each robot has different functionality, so each requires different parts).

3D Modeling (Aesthetics) (27th July – 20th August):
This will be the final appearance of all 3 robots; 3D modelling done in Blender.

R1 Prototype (Aesthetics + Software + Hardware) (20th August – 24th Sep):
3D printing of all modeled R1 parts, fitting them together and making sure the software and hardware work; if they don't, edit and reprint the parts.

R1 Prototype Trial and Testing (24th Sep – 1st October):
When all parts work together, test the robot and system in a location to make sure everything works as expected, and fine-tune.

R2 and R3 Prototype V1 (Software and Hardware) (1st October – 5th Nov):
Since the primary components and system of R2 and R3 are the same as the already-working R1, these 2 robots will require less time; the main portion of the work will be 3D printing and implementing their different functions.

User Interaction Trial and Testing (5th Nov – 19th Nov):
Testing and making sure there are no major bugs in the system, and that the touchscreen and functionality work well.

FYP Students' Work Collection 1 (20th Nov – 1st Dec):
By this time, all 3 robots should roughly work and be documented, so instead of just verbally telling students that I will help make their FYP better, it will be more stimulating to show them a system which already works and ask them to prepare a document for it, for their own benefit. (It will not be easy to ask people to do extra work, so I need to sell my idea to them really early (that's 1 semester before the end of FYP) by making these robots cool, so that they feel they will be losing out if their work is not in the system.) Also, at this point they don't have to send me any work yet, and it is already the semester break, so they have some time to think about what they want to prepare for the system.

Booth System Conceptualization (20th Nov – 3rd Dec):
By this time, I should have the robots' system working, and I need to incorporate it into a booth for our FYP. This will probably also be when we know where our FYP will take place (in school or in public), which will drastically change the booth system, so it is better to place this at the end of the semester break.

Booth System Prototype (Software + Hardware) (3rd Dec – 14th Jan):
After conceptualization, the prototype comes next. I hope to have this done before the start of the semester, so that I have a fully working prototype and the whole semester to polish my work, troubleshoot and fix bugs.

Software and Hardware Refinement (14th Jan – 1st April):
Refinement will take up most of the time, as the real problems usually emerge at this point, when the project's shortcomings become apparent. Also, there might be good suggestions and advice from people along the way, and this will be the time to incorporate those wonderful suggestions into the project.

User Interaction Testing 2 (1st April – 8th April):
Testing of the final system, to make sure all parts and components work as they should. If problems are found, at least there is time to replace the faulty components.

FYP Students' Work Collection 2 (1st April – 1st May):
The final collection of (hopefully) all of the students' work, adding it to the system as it is collected. At the very least, there will be the basic information on every student that is uploaded to the FYP website.

Aesthetic Refinement (8th April – 29th April):
The polishing and painting of the 3 robots and the making of props/items for the booth (once all software and hardware are working).

Booth Preparation & Stylization (1st May – 8th May):
Production of prints for booth, name cards/postcards and such.

FYP Show Preparation (8th May – 10th May):
The actual preparation of the booth, bringing the robots down to the exhibition area and setting everything up.

FYP SHOW( 10th May- 20th May):
Make sure the show runs smoothly, on-site repair if needed.



FYP IDEA: To create a system which will benefit the FYP students and give guests an improved visiting experience.


Components (pricing, compatibility, functionality, component sponsorship (especially for the battery))

Software (research into platforms + Udemy courses)

Movement (sensors, moving system + hardware). The main movement calculations should be done on the booth computer and transmitted over to the robots due to power issues (the more computation a robot runs, the more power it draws).
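As a rough sketch of that offloading idea, the booth computer could pack pre-computed wheel speeds into a small message that the robot merely applies; the message format, field names and speed values below are all my own hypothetical choices, not a finalized protocol:

```python
import json

# Hypothetical message format: the booth computer does the path math
# and sends only the resulting wheel speeds; the robot does no planning.
def encode_command(robot_id, left_speed, right_speed):
    """Pack a drive command computed on the booth computer."""
    return json.dumps({"id": robot_id, "l": left_speed, "r": right_speed}).encode()

def decode_command(packet):
    """Unpack on the robot side; the robot only applies the speeds."""
    msg = json.loads(packet.decode())
    return msg["id"], msg["l"], msg["r"]

pkt = encode_command("R1", 0.4, -0.4)  # e.g. turn on the spot
print(decode_command(pkt))
```

Keeping the robot-side logic this thin is what would let the onboard controller draw less power.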



Interactive spaces: Light and Darkness Ver.1 – Analog

Continuing from the previous process, Process 3.

Thanks to Tiffany and Joan for the awesome snippets that were used as the featured image as well as in the videos, and for helping to light all the candles and with documentation.

Also, thanks to Bao, Fabian and Suhwee for helping me place all the candles onto the candle platform.

I saw the actual project with real candles in the dark room for the first time at the same moment the class did.

Light & Darkness was created to give a sense of an alternative space within an ordinary space. In this project, I wanted to create a space where the audience would forget they were in the IM room. The room was built with pure darkness and stability in mind, as nothing should collapse: I wanted to hang the candle platform instead of building a table-like structure, to give the sense that all the candles were floating in space (which is also why everything, including the candle platform, was black). This worked really well, as the candle platform would swing gently, which increased the magical feeling of my space. All of this will be enhanced in the digital version, as I will add a series of stars (LEDs, not projection) to the room's walls to increase the feeling of enchantment.

The smell of the candles, as well as the burnt smell created when extinguishing them, worked much better than I expected. As I wanted a smell that is not too overpowering, I used 239 unscented candles with 60 scented candles; this combination worked like a wonder, because there is only a faint smell outside of the dark room. Another point that worked better than expected was that the random placement of the scented candles created a colourful candle array when they were all lit, which greatly increased the overall aesthetic of this project's analog version.

After this analog version, I noticed a few points to take note of. It is actually quite hard to light all the candles, so more lighters will be needed; the current lighter I was using is rather difficult to operate, so I'll probably buy some lighters that are easier to use and require less strength. Also, I will have to paint the metal wires used to hang the candle platform matte black, as the metallic shine reduces the magical floating aspect of this project.


Interactive Devices Final Project: Obsoleting Instruments Process 3(Final).

Continuing from my previous Process 2.

Again, I have progressed much further since that post, mainly in designing the workable electrical and mechanical system that could fit into the telephone, and in writing up the Arduino code.

First, let's start with the final video!


Back into the Process

Since the previous post, where I roughly built the belt system that drives the music card into the laser reader, I added the motor to the system and tried it. At this point it seemed to work, as I thought all I needed was to slow the motor down and it would be alright.

After I cut the hole, I proceeded to model the card slot. I took inspiration from the ATM so that users would have something they have experienced before and know how to insert the Music Card without being instructed, since I assumed they have subconsciously interacted with an ATM at some point in their life.

After the modelling, which I am really happy with, I proceeded to print it.

Since it was looking good, I went ahead and made a nicer LED system for it by soldering 4 LEDs (3 on the bottom and one on the top).

Next, I epoxied the speaker onto the bottom of the front belt drive, since there is already a hole in the bottom shell for the speaker.

This is an 8-ohm, 0.5-watt speaker that will be plugged directly into the Arduino.

I also epoxied the 4 LEDs into the card slot to prevent them from sliding around.

And came the soldering party.

It was at this point that I realized that if I reduced the speed of my DC motor to the speed of the music, I wouldn't have enough torque to pull the card in…


After an afternoon of panicking, looking for an alternative motor, and even thinking of redesigning my whole belt system…

I opened up the current DC motor to see if I could modify it by changing the spur gears to a worm gear, which would increase torque and lower speed (after I did some research). But this would require me to rebuild the whole gearbox, as well as remodel and reprint the whole of the front and back belt system.

Then I found that I have a longer DC motor with metal gears built in, and I tried to figure out if I could incorporate this gearbox into my current system. That is also rather impossible, as the ratio of this gearbox is about 1:45, when I only need about 1:5 to 1:8; if I used it, the belt drive would run too slow. The same goes for another one, but that is 1:250… even slower.
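The mismatch is simple arithmetic: with a reduction of 1:N, the output speed is the motor speed divided by N. Assuming, purely for illustration, an unloaded motor speed of 9000 RPM (not a measured value):

```python
# Hypothetical numbers: assume an unloaded motor speed of 9000 RPM.
motor_rpm = 9000

def output_rpm(motor_rpm, ratio):
    """Output speed after a gearbox with reduction ratio 1:ratio."""
    return motor_rpm / ratio

for ratio in (5, 8, 45, 250):
    print(f"1:{ratio} -> {output_rpm(motor_rpm, ratio):.0f} RPM")
```

A 1:45 box gives roughly a ninth of the speed that a 1:5 box would, which is why the belt would crawl.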

So to solve this problem, I settled on a medium speed, which is faster than the song should be and still gets stuck about 30% of the time, and I removed the buttons (which detect the card when the user inserts it, triggering the motor to turn the belt), as they caused more friction. I also jump-start the motor by making it spin at full speed for half a second, to break the initial force required when the motor is starting.
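The jump-start trick can be sketched as follows; this is an illustration in Python with a stand-in `set_pwm` function rather than my actual Arduino code, and the PWM values are made up, while the half-second full-speed kick is the part taken from the text:

```python
import time

commands = []  # record of (pwm, note) pairs, standing in for real motor output

def set_pwm(value, note):
    """Stand-in for writing a PWM duty cycle to a motor driver."""
    commands.append((value, note))

def start_belt(run_pwm=140, kick_pwm=255, kick_seconds=0.5):
    """Kick the motor at full speed briefly to break static friction,
    then drop to the slower running speed that matches the music."""
    set_pwm(kick_pwm, "kick")   # full speed for the kick duration
    time.sleep(kick_seconds)
    set_pwm(run_pwm, "run")     # settle at the playing speed

start_belt()
print(commands)
```

The same shape in Arduino code would be two `analogWrite` calls separated by a `delay`.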

The messy configuration, components and wiring.

It took me some time to sort out this messy wiring and make sure that none of the wires interfere with the track the Music Card travels through.

After trying out a workable speed for the sound, and reducing jamming by removing the buttons.

And after this, I tried to put the majority of the code together.

I did not expect this to work so well, and I am really excited about it!

Towards the end of the project.

To make use of the original buttons on the phone, I figured out that the 12 buttons run on 2 different circuits, so I could simply solder them together and make all 12 buttons act as one button; no matter which button the user presses, it registers as the same button press.

Because I cut off the Redial button on the phone to make space for my belt-drive system, I epoxied the Redial button back onto the case, as there is no PCB supporting it.

Some may wonder how I made the Music Cards…

I copied a few from online, like Demons by Imagine Dragons, Harry Potter's Hedwig's Theme, and the Pokémon theme song. These were labeled on the cards; those that weren't labeled were ones I composed myself. Since I have no music background, I did it by trial and error to give it a tune.

This was screen recorded when I tried to compose my 4th tune for this project:

After this was completed, I took a screenshot and imported it into Illustrator to trace it into the card layout that I made.

and this was how the cards were made.

Laser-rastered and cut in school on 2mm acrylic.

AND how about the voice commands in 7 different accents?

Well, this is relatively simple: just type whatever I want into a web-based text-to-speech reader, have it read the text out in different accents, edit the clips in Premiere Pro to cut them to the exact same length (9 seconds), and put them onto the SD card in Obsoleting Instrument's MP3 decoder.

I really like the Japanese and Korean accents; they're really funny!

Why did I make it speak in different accents? It was to engage users and make them feel like there is really life in the system, as if they called or received a call from a real person. For example, if they discussed it with a friend who said they heard an Indian accent, while what they heard was the British accent, they might want to try Obsoleting Instrument a few more times. The accents are there to add variables to the system.
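Picking which accent plays is essentially a random draw from a fixed pool of pre-recorded 9-second clips; a trivial sketch (only the British, Indian, Japanese and Korean accents are named in the post, so the other three entries are hypothetical fillers):

```python
import random

ACCENTS = [
    "British", "Indian", "Japanese", "Korean",   # accents mentioned in the post
    "American", "Australian", "French",          # hypothetical fillers for the 7
]

def pick_accent(rng=random):
    """Pick one of the 7 pre-recorded clips at random for this call."""
    return rng.choice(ACCENTS)

print(pick_accent() in ACCENTS)  # True
```

The unpredictability of the draw is what makes repeat calls feel different each time.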


In Conclusion

Throughout this project, I learnt many things, like how to model objects in Tinkercad and take measurements properly. There were always failures in everything I modeled before it worked, and this is why 3D printing is a good prototyping process: I printed a part out and tested it to see if it worked; if it didn't, I shaved off some material to see if it fit, and if it did, I made new measurements for the edited model.

I am really glad that these many pieces worked well together, as this was the biggest challenge. Since there are so many components working together (electrical and mechanical), if even one of the parts had failed, it would not work as well as it does now. So I consider myself really lucky that the parts happened to work well, even with misalignments everywhere.

Also, starting with a telephone case and scaling everything to fit inside it was a real challenge, especially at the start, when I could not measure how big the internals were and could only make a guess and do some test prints to try it out.

In this project, I realized that for a project requiring multiple fields of knowledge, like mechanical and electrical, it was better that I did not know how hard it would be. If I had known that every part of the project would be something I didn't know, I would have been too afraid to jump in. I did something, realized it didn't work, found a solution to that single problem, proceeded, and faced another problem; solving and learning one problem at a time led me to the completion of the project.

Now that I have completed the project and am looking back, Obsoleting Instrument is really a complicated project as a whole. But thinking about it, I am just putting many small systems into one project: using one laser diode and a photoresistor as a switch, playing a tune when triggered, a physical button to sense if the phone was picked up, using a relay to control circuits of different voltages, running two DC motors at the same time, and so on… Obsoleting Instrument is just a collection of small systems, which I personally think is what made my journey through this project really interesting, because I explored the basics of these components and learnt a whole lot through it.
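For example, the laser-diode-plus-photoresistor switch boils down to a threshold check on a light reading; this generic Python sketch uses a made-up threshold and fabricated readings, whereas on the real Arduino it would be an analog read inside the loop:

```python
# Hypothetical illustration of the laser + photoresistor "switch":
# when something blocks the beam, the photoresistor reading drops.
THRESHOLD = 512  # made-up ADC midpoint; would be tuned on real hardware

def beam_blocked(reading):
    """True when the photoresistor reading falls below the threshold,
    i.e. something is blocking the laser."""
    return reading < THRESHOLD

readings = [900, 880, 300, 290, 860]  # fabricated sample sequence
print([beam_blocked(r) for r in readings])
```

Each False-to-True transition in such a stream is what would trigger a note.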

Robotics – Final Project Part 5 (Final) – The Curious & Timid Turtle.

Continuing from part 4 of this project's updates, this is the final part, which leads to completion. It is rather long, as many things have been done in the past two weeks: getting the power supply, modelling, testing, painting, assembling and coding.

First, the final video of The Curious & Timid Turtle.

And then, into the process….


At the end of the last post, I was waiting for the power supply to arrive, and it did during these two weeks, so I tested it out. The one I bought is an AC-DC converter rated at 6V 20A, when all I needed was 6V 8A. I decided to buy the higher-amperage one in case I need more current in the future, and I could also share the supply with my Interactive Devices project during the end-of-semester show. I did some wiring afterwards.

And after the wiring, I did a power test: it could easily run 8 servos simultaneously with nothing overheating, which is great news for me!



Since the last post, I have changed the system in the legs to save space and shorten the overall length, by stacking a smaller MG90S on the MG966R to act as a lever system that pulls and controls the legs.
After testing this system out and seeing that it worked, I merged it with the turtle leg that I modeled in Zbrush. I didn't know Zbrush before this project, and it took a long time just to model the shell, the legs and the feet.

I merged the leg with the previous "rod-like" test leg because I already had the dimensions there, so I just needed to scale the Zbrush-modeled leg accordingly to fit the "rod-like" leg.


Changing the entire base layout to reduce size and increase efficiency for the back legs.

At this point, I was wondering if I should change the MG966R (metal gears) to the SG-5010 (plastic gears) due to the weight issue I might face later after adding the shell and so on. I weighed the motors and decided to change the back legs to SG-5010 (but I changed them back to MG966R one day before submission because the internal gears of the SG-5010 got loose).

Major changes were made to the whole layout of the base for various reasons: the back legs move differently from the front, and since my project is basically a turtle with a round shell instead of a flat one, it made more sense to use a smaller but taller layout rather than a flatter, wider one to make use of all the space within the shell.

This was pretty much the final base before I added the mount for the small back-leg servo and the servo driver mount, which will be attached using screws.

Zbrush Modelling Nightmare Funfair

Since my Zbrush was cracked, it crashed rather often, and I would redo the same work over and over if I forgot to save frequently, so it took quite long to model anything, but well~~

As for the shell… it took so much time and so many crashes to get it right, because the shell cannot be too thin (it would be unprintable later) and can't be too thick (too heavy), and I couldn't find a function that would let me see the thickness (like reducing the opacity of the material to see through it). So every time I wanted to check the thickness, I had to export an STL and import it into Tinkercad to check.

Every time I made major changes to the shell, I had to import it into Tinkercad to check the thickness and shape.

The model appears:

Nothing is more satisfying than removing the support in one whole chunk. (I did a lot of cutting before this video so I could pluck it out in one piece.)

This is the linear slider and an aluminium rod for the head system.

Printing for the base shell:

Printing for the top shell:

And then the finishing (Part 1) of the model.


The Final mechanism of the turtle

Testing the head slider to mark the length I needed it to move, and cutting it.

The mechanism to slide the head that I will use, after cutting the rod to almost the size I need.

This is the final mechanism for the head, after I printed a small piece to prevent the wire from tilting too much when pushing the rod.

The head could be pushed and pulled out nicely even before adding the string to control the tilt of the head.

Metal rods were epoxied into the head to tie the elastic thread that controls the tilt of the head.

And an elastic thread was added to counter the tension created by the elastic thread that turns the head.

And then the finishing (Part 2) of the model.

The final test of the turtle before I finally started coding it. The head uses elastic thread because it goes forward and retracts; since I don't want anything loose that might interfere with the shoulder servo motor, I decided to use elastic thread so that it won't be loose when the head is retracted.

This is the almost-completed sequence of actions. The turtle's movement is quite restricted due to the shell: the back legs are unable to push the turtle forward because of the shell's restriction, as well as the weaker servo (MG90S) that is responsible for the forward and backward thrust. The MG966R is strong enough to lift the turtle up, so the turtle can move up and down but cannot walk.

After this, I added a few lines of code to substitute the button press with a sound sensor to make the turtle more interactive, and a few more actions that made it look more timid (the head peeking out to check the environment before the whole body comes out).

In conclusion:

Overall, I really liked this project, although during the process I kind of regretted trying to make a turtle, because the shell caused so many layout and mobility problems that I kept thinking how much easier it would be to make something without a shell. But the reason I wanted to make a turtle in the first place is that the shell naturally hides all the components that would break the user's perception that this is a robot, and a turtle is my spirit animal. Now that I have finished the project, I am really glad that I stuck to my initial idea of making a turtle and persevered through all the problems I faced (mainly hardware and mechanical problems, for which I changed my system and design many times).

This is what made it into my final turtle, with many components having been edited into their next versions.

With more time, I am sure I could code the turtle to be scared back into its shell by another sudden loud noise. I tried changing the delays in my code to millis() multiple times, but it did every action in the sequence twice, so I stuck with delay() for now, which prevented me from writing the activation code that uses sound as a trigger, since the program sits inside a delay loop. But it looks really nice for now, even with only one sequence of movement, and I am glad that people thought the turtle was cute when they saw it.


Telepathic Stroll Final Project with SuHwee, Makoto and Bao

I really like the outcome of our Final Project very much.

This is the link for our video wall, Video Wall on Third Space Network

This is a screen recording of the Video Wall on Third Space Network.


And this is what I did in Premiere Pro before the wall was made, to get a general feel of what it would look like and to get the exact timing to start each video in the video wall.


Lastly, this was my individual Facebook broadcast. Just to re-emphasise the point: what made our project really interesting was the video wall. Every individual broadcast doesn't seem like much, but when we are linked with the rest, we produce a piece that is much more than itself.

Posted by ZiFeng Ong on Friday, 10 November 2017

We are the third space and first space performers.

The main purpose of our project planning was to begin with the end in mind: to present the four broadcasts in a video wall that would be interesting for the viewer to watch. We would be the third space performers by having our broadcasts interact with each other in the third space. Our final presentation in class would be a live performance, and I am not talking about the fact that everyone would watch what WAS BROADCAST LIVE, nor that all the audience in class would be watching it in THE PRESENT. Rather, the timing to click start on each of the videos must be precise to the millisecond and cannot be replicated when anyone plays them again by themselves later, as Telepathic Stroll will look vastly different if the videos are started at slightly different times. So we are making a live performance by playing the four videos in the way that we think gives the optimum viewing sensation.

How did we perfect our timing when we could not even see each other?

This is the reveal of our broadcasting secret! GET READY! CLEAN YOUR EARS(EYES) AND BE PREPARED! We called our system the….*Insert drumroll in your mind*


So… what is the Master Controller?

Basically, it is a carefully crafted 23-minute soundtrack consisting of drum beats that work like a metronome for syncing our actions, plus careful instructions telling each performer what to do at each exact moment. Every performer has a personalized soundtrack because we are doing different tasks at the same time.

How was it made?
It was made in Premiere Pro and a web-based text-to-speech reader; I screen-recorded while it was reading to extract the sound into Premiere Pro. The whole process of creating the soundtracks took more than 18 hours, and I lost count of the time after that.


So how does it work exactly?
The basic idea is that the instructions prepare us and tell us what to do, like "Next will be half face on right screen. Half face on right screen in, 3, 2, 1, DING!", and we execute our action or change of camera view on the "DING!" so that all our actions synchronize.

It started when we were at the meeting point, where we started the soundtrack at the same time, counting down 7 minutes to walk to our respective starting points and wait for broadcasting to start. First I started broadcasting, then Makoto, then SuHwee, then Bao, at 5-second intervals. This was done to give us control when we play the four broadcasts in class to achieve maximum synchronization: if we had started the broadcasts at the exact same time, we could not have clicked play on all four videos in class at once, which would result in desynchronization. After starting the broadcast, we filmed the environment, which absorbed the 15 to 30 seconds of difference in our starting times, so that everything afterwards happens at the exact same time.

Afterwards, we had different actions at the same time. For example, when Bao and I entered the screen, Makoto and SuHwee would be pointing at us; this was done by giving different sets of instructions in the Master Command at the same time. Since each of us could only hear our own track, there was no confusion among individuals, and clear instructions were given, although I was confused countless times during the production of the Master Command because I had to make every command clear and the timing perfect, including countdowns for certain actions but not for repeated ones. The hardest part to produce was the Scissors-Paper-Stone part of the broadcast, as everyone was doing different actions at the same time.

At the end of the Scissors-Paper-Stone segment, we all synchronized to the same action of paper; Bao and I were counting on 5, so we were all showing our palms.

Towards the end of the broadcast, many of our soundtracks were specifically instructing us to pass the phone to a particular person. For example, Bao's would say "Pass the phone to ZiFeng. 3, 2, 1, DING!" and "Swap phones with Makoto. 3, 2, 1, DING!". This was done to avoid confusion during our broadcast, rather than saying "pass the phone to the left", which is quite ambiguous.

Overall, there was a lot of planning, and every detail had to be thought out carefully when I was making the tracks, because every small mistake would make our final presentation worse than it should be. I am lucky that I made only one mistake in the Master Command, which was the direction in which SuHwee and Makoto would play their Scissors-Paper-Stone, and we clarified it before our final broadcast.

On the actual day of our broadcast.

According to the weather forecast, it would rain the whole week, so we did our final broadcast in the rain. Luckily for us, the rain wasn't too heavy and we could film in the drizzle. We started our broadcast dry, and we ended it wet.

We explored the Botanical Gardens for a bit and decided the path each of us would walk, and we walked it three times before the actual broadcast: the first time when deciding where to go and what it would be like, the second when we walked back, and the third right before we started the broadcast, as we walked 7 minutes to our starting locations so that our first 7 minutes of the broadcast would be us walking back the same path with the same timing.

We did a latency test by broadcasting a timer for two minutes right before the broadcast; if there had been latency issues, we could have made minor timing changes by calibrating the individual Master Controllers to the latency beforehand. Luckily for us, none of us were lagging and we had the best connection possible, so there was no need to recalibrate the Master Controller. Also, just to mention: since Bao and I had already calibrated our connection during the previous Telematic Stroll (NOT Telepathic Stroll), he didn't have to calibrate with us again since I was doing it, so we filmed his phone's timer.

Some recap of Telepathic Stroll:


Our project inspirations.

Telepathic Stroll was highly influenced by BOLD3RRR and our lesson on Adobe Connect.

At first glance, one can see the similarity between the Adobe Connect lesson and Telepathic Stroll: we pointed at the other broadcasts, merged faces and (acted to) interact with each other in the video wall, just like the exercise we did during the Adobe Connect lesson.

This was the seed for our project: to live in the third space and interact with others, and to perform by doing things that are only possible in the third space, like joining body parts and pointing at things that seem to be in the same space when they are actually not in the first space.

In our discussions before the final idea, we had many different good ideas inspired by music videos like this:

In the end, we used only minimal passing of objects (our faces) in Telepathic Stroll, and we grew our idea from a magic-trick kind of performance into an artistic kind of performance.

It feels really weird to have so many of my projects this semester inspired by BOLD3RRR, not in the sense of style, appearance or even presentation, but in the sense of the preparation and extensive planning before the live broadcast. I have always liked good planning that leads to good execution, and BOLD3RRR really inspired me in this, especially in using pre-recorded tracks and incorporating them into a live broadcast so that they blend into the live aspect of it. This time, instead of using pre-recorded tracks and images over the broadcast as in our drunken piece (which was also highly influenced by BOLD3RRR), we evolved the idea into using a pre-recorded track in the background to sync all our movements even without being able to see or hear each other.

Across the multiple class assignments in which we went Live on Facebook, we figured out many limitations of broadcasting:

  1. If you go Live, you can't watch others' Lives unless you have another device, and if you can't watch others live while broadcasting, two-way communication is relatively impossible.
  2. Even with another device to watch on, there is a minimum of 7 seconds of delay.
  3. If you co-broadcast, you can see the others live, but if we discussed our performance through the co-broadcast, the viewers would see it too, and this is not the effect we want in our final project.
  4. Co-broadcasting drops the stream quality.
  5. Possible lag in the connection will cause the "!" during the live and skips in the recorded playback. This must be overcome by all means.

This is why doing individual broadcasts with careful planning and our Master Command system overcame all the limitations we faced, and with the calibration before the final broadcast, most of the problems were solved.

Our idea of Social Broadcasting

In Telepathic Stroll, we are trying to present the idea of social broadcasting in a way that mirrors real-world social interaction: the collective social.

In our individual broadcasts, we could not know what the others were doing or feeling, yet when we are placed together in a location (both the physical and the third space), we can interact with each other and do amazing things that no individual can do alone. In Telepathic Stroll, even when just doing our own tasks without knowing each other's status, by doing these individual tasks collaboratively we united as a group, forming a single unit. Every member is as important as the whole, and if any of us were removed, the whole structure would collapse into a Problematic Stroll, nothing more than an individual broadcast.

If that wasn’t Social, what is?

Team Telepathic Stroll: signing out.



Interactive Devices Final Project: Obsoleting Instruments Process 2.

After the last post, where I was almost done with the laser reader, there are still improvements to be made to it, but I will focus on the other components and come back to make it sound nicer if I still have time. For now, I need to slowly build towards the completion of Obsoleting Instruments. I need a way to feed the music sheet past the laser at a constant speed, so after some research and thinking, I decided I could use DC motors with gearboxes that drive the music sheet through a belt system. With that in mind, I purchased some special rubber bands and belt pulleys, as well as some gears, shafts and plastic DIY construction pieces to test it out.

First, since I've got the landline telephone, I need to clear the little plastic pieces inside to make room for all my components. I thought it would be a simple task, but MANNNNNN IT'S SOOOO INTENSIVE (because I don't have a proper tool to do it).

There's a Chinese saying, "a small knife cuts a huge tree": it's torture, and yet quite satisfying for me.

It's like a loading screen: I can see the end point while doing it, and the hole needs to go one full round. And my fingers hurt.


This thing tortured me, so I tortured it back. Fair game.

Slowly, I managed to cut all the protruding parts except the two longest ones, as I still need to close the top back up and secure it.

Then I put the belt and the belt wheels in to check the distance required for just the right tension: too tight and it will not spin well, too loose and it will come off easily.

I tried placing them together so they could sandwich the music sheet, but this is too near and it won't work well.

If I use a gear system instead, I could do more with it, but I still need to think about it.

After I got a usable distance, I proceeded to 3D-model the required parts.

The initial concept of the belt system. I know it's a bit overkill, but I don't know where it will fail, so I decided to give the music sheet full support. Also, this was measured according to the belt length, motor dimensions, phone dimensions, laser dimensions and music sheet dimensions to make sure they all fit. The complex base plate is to enable me to insert the belt after assembly.




I placed some holes to save material and reduce printing time.

Then I split the base into two parts so that I can work on one half and combine them later; I think it will be much easier for me this way (and I don't have to print for 6 hours just for the base).


It worked really well, but there is one problem: the top cover of the phone could not close, as it is slanted. So I have to shift the whole plate to the back, or place the top belt about 40% inwards, but I can't just shift both belt wheels because the laser reader will be there. So…

I will settle on my 6th version for now. Since this is only half of the system, I need to get my music sheet moving first to test the motor speed with this system; if half doesn't work, nothing can work.


Pew Pew PEW. look at dem laZEERRRRR


I suppose it will work once I attach the motors to it. I will upload another post once I've progressed more.