In the Tree Project, we use PIR/ultrasonic sensors that are embedded with code inside our project. The code contains instructions on what the sensor should do when infrared is detected as a person walks by. Because what the PIR picks up varies, the timing of its detections cannot be predicted. However, what the PIR captures and what follows from it is predictable: when movement is detected, the sensor channels energy to the servo motor, and the attached hardware (a tree branch) moves. The speed of the servo motor's movement is also determined by how the PIR captures the signal.
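The trigger-and-speed behaviour described above could be sketched roughly as follows. This is only an illustration of the logic, not the project's actual code: the function names and the linear speed mapping are assumptions.

```python
# Hypothetical model of the PIR-to-servo logic: a detection event triggers
# branch movement, and the strength of the infrared reading sets the speed.

def servo_speed(ir_level, max_speed=90):
    """Map the detected infrared level (0.0-1.0) to a servo speed
    in degrees per second (linear mapping is an assumption)."""
    ir_level = max(0.0, min(1.0, ir_level))  # clamp out-of-range readings
    return ir_level * max_speed

def on_motion(detected, ir_level):
    """Return the servo command for one sensor reading."""
    if detected:
        return ("move_branch", servo_speed(ir_level))
    return ("hold", 0)
```

In this sketch the unpredictable part (when and how strongly the PIR fires) is the input, while the response itself is fully determined, matching the predictable capture-and-follow-up described above.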
As written in the book, media elements such as images, sounds, shapes, and behaviours are represented as collections of discrete samples (pixels, polygons, characters). They are put together to form larger-scale objects, yet they do not lose their separate identity and function. In the Tree Project, the media elements include servo motors, which function as objects that sweep 180 degrees left and right. They work by extending the branch outwards when 'activated' (activated in the sense that the PIR sensor in front of them fires, causing the servo to move). In our project there will be more than five servo motors operating at different timings, and they play an important role in producing the interactive elements of our project. At the same time, they function independently; they depend on each other only in terms of whether to move or not. When one servo is removed, the others remain functional.
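The modularity point above can be shown with a small model: each servo is its own object, and removing one leaves the rest working. The class, names, and angles here are invented for illustration, not taken from the project's code.

```python
# Illustrative model of independent servo "modules": each branch servo
# decides on its own whether to move, so removing one does not affect the rest.

class BranchServo:
    def __init__(self, name):
        self.name = name
        self.angle = 90  # resting position of a 180-degree servo (assumed)

    def update(self, triggered):
        """Extend the branch outwards (180) when the PIR fires, else rest (90)."""
        self.angle = 180 if triggered else 90
        return self.angle

servos = {f"branch{i}": BranchServo(f"branch{i}") for i in range(5)}
servos["branch2"].update(True)    # only this branch extends
removed = servos.pop("branch4")   # removing one servo...
servos["branch0"].update(True)    # ...leaves the others fully functional
```

Each servo holds its own state and responds only to its own trigger, which is exactly the sense in which the elements keep their identity inside the larger object.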
There are two types of automation – one is normal (low-level) and one is high-level automation. The former creates a "media object" by following a simple set of algorithms to produce whatever the user wants.
Some examples of automated media creation include the photo camera, film camera, film recorder, and the one we use the most – our smartphones. Our phones contain recorders, film recorders and photo cameras, which are examples of the previous principle – new media forms that represent modularity, as above. Our smartphones are automated in such a way that items and categories are labelled in the interface in the language we understand and speak, and this allows users to access and retrieve information easily.
Also, Nadine, the robot from NTU, has high-level automation embedded inside. The computer presents itself to the user as an animated talking character.
Nadine's face was fitted with mechanisms that allow her to 'express' subtle facial expressions. Her facial expressions go hand in hand with her verbal replies as well. I think this dual function (verbal + non-verbal) would require high-level automation, because the stimuli she receives need to be 'understood' by the computer inside her.
Another simple example is the use of bots in customer-service lines of commercial stores, both online and offline. Even though there is a "Virtual Assistant" attending to users' needs, it can understand only to a limited extent. Unlike real-life retail assistants in stores, the computer may not be able to use algorithms to cater to demands that are complex and unpredictable (detecting and attending to the demands of an angry but important customer, or following up with regular customers). It can offer only simple and direct services.
As we bring this into the Tree Project, the automation is the use of PIR sensors, which take in infrared light from the movement of a person and channel this piece of information to the servo motor, so that the branches move accordingly. I think this PIR–servo motor relationship has been mentioned several times in my essay already, so I guess this shows how significant the mechanism is in the bigger picture.
At the same time, there is a similarity between Nadine and our PIR–servo motor mechanism: delayed responses. Nadine's responses to a question appear delayed, just as the servo motor's movement is delayed after it receives a stimulus from a moving person. I think how fast the response comes can be controlled, but whether we or the computers react faster (in the Tree) is debatable.
In point 4, the author mentions "branching type interactivity", which means programs in which all the possible objects the user can visit form a branching tree structure. Besides the phrase reminding us of the Tree Project we are working on, it shows the different choices given when a user interacts with the interface. This is similar to using a touchscreen phone: we go to the Settings page, and out comes a list of sub-branches where we can select and adjust brightness, sound, the ringtone volume for incoming calls, and so on. In relation to our Tree Project, there is a hidden circuit which interacts with a person's movement: at one particular spot where a person's foot is detected, a sound is produced, and at another spot, a different sound is produced. This lets users know that the choices they make can produce an interactive effect – in the form of playing music. We incorporated sounds into our Tree to help the user become conscious of how her feet walking near the tree can produce an effect.
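The spot-to-sound circuit above is itself a small branching structure: each detection spot is one branch, mapping to its own sound. A minimal sketch, with spot names and sound files invented purely for illustration:

```python
# Hypothetical mapping from detection spots on the hidden circuit to sounds.
# The spot names and filenames are placeholders, not the project's real assets.

SPOT_SOUNDS = {
    "spot_a": "chime.wav",
    "spot_b": "rustle.wav",
    "spot_c": "birdsong.wav",
}

def sound_for(spot):
    """Return the sound triggered at a given spot, or None off the circuit."""
    return SPOT_SOUNDS.get(spot)
```

Each key-value pair is one branch of the interaction tree: the user's choice of where to step selects which sound plays.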
Transcoding is what happens after we have computerized new media. As mentioned in the reading, 'to transcode' something is to translate it into another format. When we link transcoding to the Tree Project, it may be somewhat irrelevant, because transcoding is not present in the mechanisms or the hardware built into our tree.