Documentation Part 1

Safety is the top concern in this project because the candy will be hanging overhead. I did some research on knotting methods, and Fabian also told me about the clove hitch, a knot that can self-tighten.

I did two trials with the weights, using two types of knots: the clove hitch and the figure of eight. The figure of eight is used by rock climbers to tie onto their harnesses, so I felt it would be helpful for securing my project.

After settling on the knots, one pulley was tied to the railing to create a simple pulley system. However, the test did not give the effect I expected: the lowering speed was really fast even when only a short length of rope was pulled, which would make the candy's dangling descent too short.

So for the second trial I moved on to a compound pulley.

This method made the lowering slower, but the candy was still reachable at my height, so I needed to add one more pulley to lengthen the travel. At first the second pulley was placed near the start of the rope, which made the pulley system really hard to move. It was later shifted nearer to the centre, and the added length helped slow down the whole process, since more rope has to be pulled to lower the weight. I also used a twin pulley instead of the separate pulleys 1 and 3, which helps reduce the friction between the ropes.
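A rough way to reason about this, as an idealised sketch that ignores friction: if n rope segments support the load,

    lowering speed of the candy = pulling speed of the rope / n
    length of rope pulled       = n x distance the candy drops

So adding pulleys (more supporting segments) slows the candy down, at the cost of having to pull more rope, which matches what happened between the two trials.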

In reality
In diagram
The final setup with the twin pulley

As for the frame for the candies, it was made up of four metal wire frames from Daiso. They were combined with cable ties, and wooden sticks were added to stop the bending. Strings were tied to the wire frame so the candies could be attached later on.

Final form with the candies!

Documentation Video:

Final Project: Process Documentation

I started building Bob by using cardboard to test its movement. The shape turned out pretty good and it worked well as an early prototype. At first one piece of cardboard was enough to make it move, but after a few runs the border of the cardboard hole wore out from the turning.

So additional support was added to both sides of the wheel to make the border of the hole stronger. The square piece of cardboard supports the motor while it is moving around.

Next I moved on to making the sensor detect something and turn on an LED. After that, I combined them so that the sensor triggers the motor to run!

However, I faced a power shortage when I added another DC motor, so I added a motor shield to drive the two motors.
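Just to illustrate the sensor-triggers-motor idea, here is a minimal sketch. It assumes an Adafruit Motor Shield v2 and a digital sensor on pin 2; the actual shield, pins and speeds I used are not necessarily these.

    #include <Wire.h>
    #include <Adafruit_MotorShield.h>

    Adafruit_MotorShield AFMS = Adafruit_MotorShield();
    Adafruit_DCMotor *driveMotor = AFMS.getMotor(1);   // main walking motor (assumed port)
    Adafruit_DCMotor *turnMotor  = AFMS.getMotor(2);   // turning motor (assumed port)

    const int sensorPin = 2;                           // assumed digital sensor pin

    void setup() {
      pinMode(sensorPin, INPUT);
      AFMS.begin();                                    // default I2C address and PWM frequency
    }

    void loop() {
      if (digitalRead(sensorPin) == HIGH) {            // sensor triggered
        driveMotor->setSpeed(150);                     // 0-255
        driveMotor->run(FORWARD);
      } else {
        driveMotor->run(RELEASE);                      // stop when nothing is sensed
      }
    }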

I also laser-cut acrylic to replace my previous prototype material. The box houses the second DC motor and supports the main walking DC motor!

In this video, the second DC motor acts as a turning motor for the whole device. There was a problem with the surface being too smooth for the acrylic to grip, so I added some glue on the wheels to increase friction. Next I continued to work on the code, where I faced several problems:

  • When the button was pressed, the turning motor was triggered but the main motor continued to turn. I solved this by putting the main motor under an “if” statement.
  • The delay() in my code caused the button not to detect anything.
    I replaced delay() with millis(), which counts elapsed time so I can do the timing with simple maths, and it no longer blocks the button detection (see the sketch after this list).
  • The sensor kept sensing even while the mouse was moving, so the mouse kept moving continuously. I used a delay to ignore the sensor for a while after it sensed something, so it turns off after a trigger.
  • Wiring for the mouse.
    Many wires were needed, and they were actually too heavy for the mouse to move in the final product. In future development, the wires could be changed to a thinner version and bundled together with a wire tube cover.
  • An actual mouse might be too small to house all my motors.
    Initially, I wanted to house them all inside a real mouse until I found out that the motors together were too big. So I downloaded a mouse model online, edited some areas and enlarged it, while still keeping it within hand size.
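Below is a minimal sketch of the millis()-based approach mentioned above, with placeholder pins and timings (the real mouse drives its motors through a motor shield and uses different values). The point is that nothing blocks: the button stays responsive, and the sensor is locked out for a while after each trigger instead of firing continuously.

    const int buttonPin = 2;                 // assumed pins
    const int sensorPin = 3;
    const int motorPin  = 9;                 // e.g. a PWM pin driving the motor

    bool running = false;
    unsigned long runStart  = 0;             // when the motor last started running
    unsigned long lastSense = 0;             // when the sensor last triggered
    const unsigned long runTime   = 2000;    // run this long after a trigger (ms)
    const unsigned long senseLock = 3000;    // ignore the sensor for this long afterwards (ms)

    void setup() {
      pinMode(buttonPin, INPUT_PULLUP);
      pinMode(sensorPin, INPUT);
      pinMode(motorPin, OUTPUT);
    }

    void loop() {
      unsigned long now = millis();

      // Sensor can only re-trigger after the lockout period.
      if (digitalRead(sensorPin) == HIGH && now - lastSense > senseLock) {
        lastSense = now;
        runStart  = now;
        running   = true;
      }

      // Stop after runTime has elapsed; no delay(), so loop() keeps cycling.
      if (running && now - runStart >= runTime) {
        running = false;
      }

      // Button is checked on every pass; here it simply stops the main motor.
      if (digitalRead(buttonPin) == LOW) {
        running = false;
      }

      analogWrite(motorPin, running ? 200 : 0);
    }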

In conclusion, there were many problems, which helped me learn a lot in this process. There is still room for improvement, but this project showed me that I need to pay attention to small details as well. For example, the button placed in front to trigger the wheel should not have been placed together with the wires; as a result, the wires partially blocked the button, which affected the overall structure.

Final Project: Process Documentation

In this project, I explored stepper motors using a motor shield. This motor shield is similar to the Adafruit v2 and can power 2 stepper motors or 4 servo motors.

Initially the 12V external power was connected through the Arduino's power port; however, the Arduino and motor shield got really hot after running for 5 minutes. Once it even became smoky, and I quickly cut the power.

After reading online, I found that a safer way was to power the motors directly from the motor shield board. Each of my motors needed 1.65 A at 12 V, so with two motors together I would need 3.3 A.

I bought this external power adaptor at Sim Lim! After a few rounds of trial and error my motors still did not work properly, and I realised that the motor shield only provides a maximum of 3 A for the motors.
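For context, driving steppers through this kind of shield looks roughly like the sketch below. This is a minimal example assuming an Adafruit Motor Shield v2-compatible board and its Adafruit_MotorShield library; the steps-per-revolution, ports, speeds and step counts are placeholders, not my actual settings. Note that step() is blocking, so in this simple form the two motors move one after another rather than together.

    #include <Wire.h>
    #include <Adafruit_MotorShield.h>

    Adafruit_MotorShield AFMS = Adafruit_MotorShield();
    Adafruit_StepperMotor *stepper1 = AFMS.getStepper(200, 1);  // 200 steps/rev motor on port 1
    Adafruit_StepperMotor *stepper2 = AFMS.getStepper(200, 2);  // second motor on port 2

    void setup() {
      AFMS.begin();              // default PWM frequency
      stepper1->setSpeed(10);    // rpm
      stepper2->setSpeed(10);
    }

    void loop() {
      stepper1->step(100, FORWARD, DOUBLE);   // DOUBLE = both coils: more torque, more current
      stepper2->step(100, BACKWARD, DOUBLE);
    }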

Despite the current limit, I still tried to wire up the installation in order to calibrate the movement and see whether the motor was able to pull the umbrella up.

The installation took longer than I expected, as there were many problems during this process, such as working out the length of the wiring and where to position the motors.

 

The motor created a loud vibrating sound when I did not put any sponge below it. Adding a sponge helped to reduce the vibration between the metal and the motor, but there was still some noise.

This video shows the problem with my motor, where there is a 1-second interval between movements. The motor moved very slowly and was always jerking. It got better after I changed the code, but it still moved very slowly. Unfortunately (and fortunately), I short-circuited and burnt my motor shield, so I had to go back to my first option, the servo motor.


Surprisingly, the servo motor worked well and was able to hold my umbrella! The sequence after setting it up and calibrating is shown in the video above. When calibrating the code, I did it step by step: I marked the servo's initial position on the floor and started coding from there. Every time I uploaded a new position, I would comment out the previous one and re-upload.

The delay tells the motor how long to run, and it keeps spinning for the duration the delay states. After the whole sequence, there is also a return step in the code so the umbrella goes back to its initial position.
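A minimal sketch of this kind of delay-based sequence, assuming a continuous-rotation servo on pin 9; the pin and timings here are placeholders, and my actual positions were calibrated step by step as described above.

    #include <Servo.h>

    Servo winch;                     // continuous-rotation servo winding the umbrella

    void setup() {
      winch.attach(9);               // assumed signal pin
      winch.write(180);              // spin one way to raise the umbrella
      delay(4000);                   // the delay sets how long it keeps spinning
      winch.write(90);               // ~90 = stop for a continuous-rotation servo
      delay(2000);                   // hold at the top
      winch.write(0);                // spin the other way to return
      delay(4000);
      winch.write(90);               // stop back at the initial position
    }

    void loop() {}                   // run the sequence once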

I also tried it with the detection, but the shadow of the umbrella would block what was meant to be detected, so I decided to go without it. Moreover, in this trial run the spotlight did not help the umbrella stand out; it blended into the background even more. After that I tried turning off the lights and putting a spotlight LED into the umbrella for a glowing effect, which I feel worked very well!

Overall, although what I did with the stepper motor was not used in the final construction, I learnt a lot from it. This project gave me a strong sense of satisfaction after overcoming many obstacles and burning 2 Arduinos. I am really glad that everything still went well! 🙂

Device of the Week #04 – Pinokio

 

Pinokio is a very interesting device: an animatronic that likes attention from humans. It uses 6 servo motors for its movements and a webcam to track people, with Arduino and Processing generating all the interactions and behaviours. One of the most interesting behaviours is that Pinokio doesn't like being turned off; when the user switches it off, it triggers the button again to turn itself back on! I feel that is a very interesting aspect.

The pro is that it has interesting behaviours and is intriguing to watch even though it is a simple device. However, after a while it can become boring to watch, as it only has a few reactions.

In the video they say it can recognise sounds and likes attention, so the device could perhaps add some more interesting reactions, such as dancing, to create more variation. Wheels could be added so it can follow humans around and overcome its immobility. It could even be given practical uses, such as voice commands or a clap to turn on the light, to put itself in a more useful role.

Internet Art & Culture final project – Telepathic Stroll

Project Summary

We wanted to explore the connections between each live stream and the fun possibilities we could create! Since our final project would be presented on the grid wall, we decided to build our whole concept around the idea of the grid wall and the potential of viewing all four live streams at the same time, to create a piece much greater than each individual stream. We explored the idea of the third space and of synchronising with one another. This project involved interaction, cross-streaming, planning, coordination and a lot of teamwork.

We started off at different locations and moved to meet each other at the same pace, while playing with the visual effects of filming. From synchronised games and face connections to various other streaming ideas, everything was thoughtfully planned, in the hope of exploring different ways of live streaming.

 

In this project, I would say our group was very game for all the experimental ideas! We worked very well together, and each of us took up a portion of the work to even out the workload. Every meeting was fruitful: we came together to settle our main direction and what we should do next. Ideas were also built upon each other, so I feel this project is a mixture and combination of everyone's ideas. Still, we each had key roles in the group. Zi Feng is very practical, so he set up the timeline; Makoto is very resourceful, so he was in charge of scouting and checking the weather forecast; Bao is a more hands-on person, so he was in charge of the flow of action. As for myself, I was in charge of coordination, such as taking notes on our flow of ideas so our group had a clearer picture of what was being discussed and which ideas could not work. I also helped to source the materials needed for the broadcast and prepared them before our live stream.

When we were building up our ideas for the final project, there was a point where they did not meet the requirement of live streaming together. We actually panicked for a while and went outside to discuss new ideas. In the midst of the discussion we all just threw out our ideas, and I had the idea of taking different modes of transport, such as the MRT, buses, cycling and walking, to meet at one point. After this was raised, more ideas began to flow in: Bao suggested streaming a panorama when we meet, Makoto suggested the combination of our faces, and Zi Feng suggested swapping devices. We wanted to explore what we could do with social broadcasting and the visual effects we could create.

Our final project was mainly inspired by two artists and one video. The first was our Adobe Connect experience, where we tried to make our hands connect with each other. The visual was amusing and fun to look at!

credit: Ong Zi Feng

Next is the public communication artwork “Hole in Space”. I really appreciate this live performance, which involved two-way interaction between different locations. Lastly, there was this music video from our research:

We were inspired by the transition effects and how they can link up the third space visually. Therefore, we were curious about how we could interact from different locations while still being connected in the third space!

From our past live broadcast experience, we also learnt that rehearsal is important for such a complex final project. My previous post talks about our rehearsal in ADM.

We also fixed a time to sit down and talk through the flow again, to make sure everybody was on the same page and knew what to do at the correct time. Our key to achieving the synchronised live broadcast was the rhythmic tempo that Zi Feng created: we would walk to the tempo, with instructions provided. However, we were afraid that differences in frames per second would result in a faster or slower video.

Beforehand, we tested the live streaming for about 2 minutes and played the recordings together at the same time to see if there were major differences. Bao and Zi Feng had already done this with their telematic stroll, and from this rehearsal we could conclude that there was no major difference.

We each started off at a different location in the Botanic Gardens and made our way towards the meeting point. Although we had plans, there were still unpredictable situations, which reminds me of what Alise Iborg from Second Front said:

Also, many times, it’s the surprises and unintended actions that make the work really come alive!

The weather forecast for the week was mostly rainy days. The morning of the broadcast day was sunny, then it suddenly began to rain in the afternoon. We decided to go ahead with the wet weather, and you can even hear the rain in the background audio, which I find very interesting since the rain is not clearly visible in the stream. The unpredictable situations made the live broadcast even more real and alive!

 

We also tried live scissors-paper-stone; Zi Feng and Bao kept count of the score.

Exploring the combination of our faces, while not knowing what our friends were broadcasting, was a fun and intriguing experience.

We also took advantage of the multiple live streams to play with perspective. Changing the point of view, as well as streaming reflections, created a broader view for the viewers.

Another interesting feature of our live broadcast was the panorama live streaming! We are probably the first ever to combine panoramas live in a stream!!! 🙂

In conclusion, this project turned out fruitful, as our teammates all did well in our individual roles! I learnt a lot during this process, including about the unpredictable things that can happen during an actual live stream, and that we should let nature take its course, which reflects the realness of live streaming. There are many approaches we can take to explore the concept of social broadcasting. Our project “Telematic Stroll” explored the idea of all of us starting to stream from different locations while synchronising together as a whole. This live streaming performance was meant to be seen as one piece, to enjoy the whole visual experience.

Cross Stream Broadcast – Actual

When I was the mobile streamer

ADM SAFARI

Internet Art and Culture Class

Posted by Nicholas Makoto on Thursday, 21 September 2017

Nicholas Makoto our safari theme live stream

Posted by Su Hwee Lim on Thursday, 21 September 2017

 

The cross-stream broadcast was a collaboration with my classmate Makoto. We wanted to explore the possibilities of Facebook Live, using filters to help enhance our ADM Safari idea, and combining it with OBS, such as overlaying animal GIFs and small photos to further emphasise the theme.

We also designed the interface in OBS to make it look like a nature documentary, showing unique information about each species, such as how the rabbits in ADM like to eat KFC coleslaw, or how ADM students love cup noodles. The information was planned to be nonsensical, but sometimes it was actually a fact of ADM student life.

On the day of the actual cross-stream, we were lucky that there were still people around in school to appear in our streams. I was quite overwhelmed and surprised by their responses; even though this was an unplanned stream, they all reacted very well. Some even started to talk naturally along with the safari theme, or wanted to find an English name through the broadcast! From their reactions to live broadcasting, we can see how deeply technology is involved in our lives. Moreover, it was the mutual trust between my friends and me that made them willing to stream with me; otherwise they might have thought I was a crazy person.

The filters actually created a lot of interest in them, which helped me start conversations. They were also interested in the other filters, and we all played along with them. It reminded me of how I always play with Snapchat filters that just look amusing and fascinating.

The added filters created funny or silly videos that can be made by anybody and related to by everybody. It has become part of youth culture to video daily life, which can sometimes brighten up each other's day.

Another experience I had was that some of my friends were watching me go live! Some commented on the live stream, and I interacted with them. It actually felt kind of weird because I was interacting with them but could not see them. It felt a bit uncomfortable, as there was no assurance or feedback that they could hear me; the only way to know was for them to keep commenting on Facebook.

 

When Makoto was the mobile streamer

Posted by Su Hwee Lim on Thursday, 21 September 2017

https://www.facebook.com/nicholas.makoto/videos/10155805791692698/

When it was my turn to control OBS, I was actually quite nervous. There were many things to take note of before going live, and we double-checked everything before we were ready! At first I was panicking with the layers and captions, as I needed to react quickly to whichever filter Makoto had chosen and show the layers accordingly.

After a while I also found out that toggling the layers on and off makes the words appear to flash, which helps inform viewers of what we are doing. That way, someone who has just opened our live stream has a higher chance of staying until they decide to move on.

The overall experience was quite stressful for me, as we were not sure what we would face during our journey around the ADM safari. Yet I really enjoyed the process, as I got to try out both screen broadcasting and mobile broadcasting. Mobile broadcasting was more interesting to me because I got to interact with people, which was once my biggest worry, as I feared they would refuse to be on the live stream. However, in the actual broadcast they became my biggest saviours, as they were really open to going live with me. Screen broadcasting was quite stressful for me, since some impromptu changes had to be made within seconds. I feel Makoto did a really great job: he types fast and was able to adapt to changes in my live stream! The cross-stream broadcasting not only deepened my understanding of what cross-streams can achieve, it was also interesting to see how comfortable we humans are in front of the camera.

The cross stream can be viewed at:
https://thirdspacenetwork.com/cross-stream-makoto-su-hwee/

Hyperessay #1: Concepts in Social Broadcasting

The past 6 weeks were my first experiences trying and learning about social broadcasting, something I would never have tried otherwise. I appreciate that what we were taught was put to use in our micro-projects, which really works for a hands-on person like me. Those projects helped me to understand the concepts better, as well as myself.

The first Real Time Aggregation project sparked my interest, as I found out how the camera can bring human relationships closer.

The language of media is everybody’s language.

I feel that whether you are watching someone's live stream or are with the person who is live streaming, it breaks down the barrier between us. At the very least, we gain more of an impression of that person, and that brings you a step closer to the person who is streaming. I also felt that the reason we become friendlier when we are live streaming is the idea, instilled in us while growing up, that you must smile in front of a camera. There is a natural reaction that we portray when we face the camera.

After learning about the Videofreex, I admired how they explored different ways to film, becoming pioneers of live broadcasting. They inspired me to explore ways to live stream my Video Double, just as they built the systems that made live broadcast possible.

So, in my Video Double I added handmade logos to make everything more personal and real, which I felt fit the theme. In the video they also mention:

It is important to go there where history are being made

This also encouraged me to live in the moment while I was broadcasting for my projects, because we are also making our own history! So just enjoy the process and everything will be fine.

Each live stream is also unique: things or incidents cannot be undone or repeated, which makes them more precious. Especially in live streams, there are things that are unpredictable and, in fact, out of our control.

I got stuck somewhere in between when I tried to do a perfect roll on the ground. The sound was even recorded. lol

Just like what happened during my Video Double and Desktop Mise-en-Scene: although there were flaws, they portray a more natural, real live moment instead of a staged sequence. The research on Jon Cates took this realisation further. He focuses on the raw imperfection of the glitch, which was considered rule-breaking, to create a new aesthetic.

…openUp possibilities, potentialities for ppl, as well as for myself

Indeed, his glitch form of art reflects on our current society, where we are obsessed with perfection. What is wrong with imperfection, right? We also become so occupied and serious that we forget to play and explore possibilities. Usually we tend to stay on the safe side and dare not do things that are out of the norm, perhaps because we are conscious of how people look at us, especially during live streaming. However, all the artists we came across during this module believe in themselves and their ideas, and create unconventional artworks. This motivated me to explore more and incorporate what interested me into the work I created. Likewise, it is more important to look into ourselves rather than at how others view us, and to enjoy the process of broadcasting.

Conveniently, we had the OBS project, where I could incorporate what I had learnt, and explore and play! In OBS there are a lot of interesting effects and functions that interest me, such as the overlay effect and the sliding movement I selected, which was my favourite.

I also like how Jon Cates created Bold3RRR with constantly changing images that kept me curious, so I added a similar element to my Desktop Mise-en-Scene to keep my viewers interested. I also found a way to slowly trust my ideas more: starting by exploring and giving myself many options and lots of exposure, then selecting and keeping certain things, which helps me narrow down and gives me a clearer direction of what I want to go for.

ADM SAFARI

Internet Art and Culture Class

Posted by Nicholas Makoto on Thursday, 21 September 2017

In the cross-stream broadcasting project, Makoto and I wanted to explore transition effects and filter effects combined with OBS layers. The videos were live streamed simultaneously, with live captions, just like a documentary series but a LIVE version.

The live captions.
https://thirdspacenetwork.com/cross-stream-makoto-su-hwee/

When both videos were played side by side at the same time, one visual felt like an unfinished artwork while the other was the polished version. I also noticed that my eyes would sway between the two videos. While broadcasting, interactions with my friends were also easier, as introducing them to the filters and the project helped me break the ice. Some people really surprised me: I don't really know them, yet they even said things about the safari to contribute to our theme!

In today's society, as technology has advanced, people are more open about being live in front of the camera, compared to how people took time to react in the project “Hole in Space”. Being on screen has become part of our lives, whether in the real, virtual or third space. Nevertheless, many people are familiar with the technology only as a normal camera, yet there is still room for exploration, such as filter effects and OBS.

There was also some interaction with my friends who commented on the videos, which made me feel that even though we don't keep in contact frequently, they can still know about and participate in my live stream.

In conclusion, social broadcasting is a platform for us to communicate to the world, or to a certain targeted audience, through different mediums. It creates more interaction between humans, can communicate emotions, and gives us access to friends whom we have not met for decades.

When we can no longer separate the real and the virtual (the post real), when the third space is just the way things are, well, that in sum is the current state of evolution.

We are connected through social media and even the third space, just like what we did in Adobe Connect. Collaborative streaming, such as my cross-stream with Makoto, can have effects added; for instance, the live captions helped to enhance the impact for the viewers. Furthermore, the biggest advantage of social broadcasting is that it allows two-way communication, where my friends can communicate with me using their keyboards. Thus, regardless of location, culture or time differences, we are connected through this third space.

The cross streaming can be viewed at:
https://thirdspacenetwork.com/cross-stream-makoto-su-hwee/

Cross Stream Broadcast Technical Test

Ideation

Due to personal reasons I did not attend the last lesson, which was the first trial of streaming. This made me more nervous about what I was supposed to look out for during this cross-stream, but luckily it was a technical test! After brainstorming with Makoto, we came up with a few ideas, such as attaching our phones at different angles and news reporting, so as to explore different viewing angles, for example attaching the phone to our toe, since it gives a totally different view from what we usually see. Another was a collective visual connecting our faces together. However, none of those had a concrete theme for us to play with and explore. We arrived at the final concept of a safari adventure while playing with the filters, as we wanted to do something that made use of the interesting filters.

Linking that up with the safari idea, we decided to explore ADM, using the filters on the people we met! It would be like searching for the “wild animals in ADM NTU”, similar to how documentaries film animals. We found rabbit, wolf and cat filters that helped us portray the idea. In OBS, we also added overlays such as the logo and titles to mimic what we would see on TV.

Process in streaming

The internet connection was really quite bad: some areas could not connect to the internet, and especially when I walked out of the IM room there was a short period when it was disconnected. I might just change to my mobile data for the actual streaming to prevent the break-off, as I feel the disconnections would interrupt the process and we might miss some moments.

We also faced sound difficulties, as there was feedback coming from Makoto's side. After we did a second technical test just for the sound, he managed to solve the problem, which turned out to come from his own live stream playing back. After muting the volume for that, the feedback did not appear anymore.

While going out to stream, my mind actually went quite blank again, even after a few previous streaming experiences. Still, I kept in mind the idea of finding the “wild animals” in school. I had planned which filter to use next when I saw people I knew, yet I still panicked a bit while switching between the filters. However, the unplanned situations made the streaming unexpected and impromptu. Maybe some overlay effects could be added in OBS to make the whole live stream more interesting, and some moving images and shifts of camera angle, like a dip to black when switching scenes, would give a smoother flow.

The first technical test:

This is a trial video

Posted by Su Hwee Lim on Monday, 18 September 2017

Second sound test:

https://www.facebook.com/suhwee/videos/10155082554878403/

The outcome after combining the streams:

https://www.facebook.com/nicholas.makoto/videos/10155799366832698/

The outcome of the sound test:

TEST RUN FOR OSS ASSIGNMENT

Posted by Nicholas Makoto on Monday, 18 September 2017

Desktop Mise-en-Scene – OBS Experience

Video: https://www.facebook.com/suhwee/videos/10155065521923403/

The OBS live stream is something that I really like playing with. There are filters and effects that, when combined, can create bizarre and interesting visuals. Before the streaming, I went online to learn more about the programme and came across this YouTube video on the green screen effect.

I think the effects he created were really interesting. He duplicated the image capture and used a chroma key set to green to blend himself in, so he looked exactly as if he was in the third space together with his screen. I applied the chroma key in my live stream as well, on the moving letters; the image was meant to give a more digitalised feeling. Display capture is another interesting tool that I found while exploring OBS: it allows you to capture anything on your screen, such as your menus, the time and date, and the mouse.

The mouse really interests me because its icon changes automatically while we are working or browsing. I added it because I felt it would give variation to the live stream and highlight its presence. The mouse icon is so automatic that we don't really pay attention to it.

One of my everyday tasks is to complete artwork for my clients. In my live stream, I was working on a brochure for a Vegetarian Festival in New Zealand. I thought it would be interesting to see the background process before the poster is completed. You also get to see my to-do list for school work.

During the live stream, at first I felt some pressure, as if a whole bunch of people were looking at my screen. It was a bit uncomfortable, and I even faced some difficulty with the sound: there was feedback from my speaker, and suddenly someone was cutting the grass below my block. The noise was so loud that it affected my sound projection, and it made me panic until I realised I could stop the feedback by turning off my speaker. Initially I planned to play songs from my playlist while doing my work, as I usually would, but it was too laggy to do anything else. About halfway through I got more comfortable by focusing on the work. Time flew, and I did not realise I had actually done a 25-minute live stream. Wow!

Video: https://www.facebook.com/suhwee/videos/10155047249408403/

 

In conclusion, the experience with OBS was really fun! I really enjoyed using the different filters to create the visuals. From the first OBS moving images of my desktop, to the changing mouse, to the chroma key effect, I just let my mind wander and did what I thought was interesting and was curious about. I really like how my Desktop Mise-en-Scene turned out, especially the mouse! Perhaps next time I should think about how the sound is projected too, as this time the sound was quite bad in the first few minutes.

 

RESPONSE: Jan Chipchase – You are what you carry

The reading is a more in-depth explanation of Jan Chipchase's TED talk, giving us a broader view. He talks about the fundamentals of carrying behaviour, which provide insight into users' activities, values, beliefs and fears. We also strategise about where to put our stuff to prevent us from forgetting it. I always categorise my things and place related items together; for example, my lip balm and ointment go in the same small pouch so I can find them more easily in my school bag. I also think about how frequently I will use something and place it in a different slot of my bag, so that I can just slip my hand in without taking my bag off.

Jan Chipchase also uses examples to describe the term “range of distribution”. The Shanghai lady gave me an insight into how high the risk of theft is there, from the way she reacted on her shopping trip. In comparison, Singaporeans are more relaxed: we simply place our phones on the coffee shop table where they are still visible to us. There are cases of phones being stolen here too; however, the crime rate is lower compared to Shanghai.

One part is where he mentions the point of reflection. I was nodding all the way, as it is something I do every time I leave a place: I run through a mental checklist, just like he said. This widespread behaviour could possibly be due to incidents where we have forgotten our items before, so just to make sure we don't make the same mistake, we take extra caution.

In future, there might be automated systems to predict our shopping habits. I feel it is really possible if most companies cooperate to share information among themselves. However, the approach he mentions Amazon could take, sending products to our house, might only last for a period of time. In the Singapore context, I imagine I would get many different products each week or day, which would be kind of irritating; the feeling is like hard selling, even though we can choose not to take up the item. This also reminds me of how the advertisements in our browsers are linked to our browsing history: if you have been searching for air tickets for the past few days, the advertisements on Facebook or other websites will cater to what you searched for.

In conclusion, we all have reasons for the things we carry, which are fundamentally tools for our survival. We also develop our own carrying behaviours, which can be influenced by our surroundings or personal reasons. Nonetheless, technological advancement will constantly change our tools and our idea of survival.