Project Soli: Controlling devices using hand gestures
It is a gesture-sensing technology by Google ATAP (the Advanced Technology and Projects group) for human-computer interaction. It uses a miniature radar chip whose sensors track gestures made by human hands and fingers at high speed and with fine accuracy, allowing a rich variety of interactions. For example, sliding the thumb indicates scrolling, while tapping the thumb indicates selecting an object. Such interactions are relevant in areas such as computer gaming and the control of household appliances. The technology has the potential to move users away from physical controllers and let them interact with their technological environment using only hand and finger gestures.
Pros and Cons of the Device
Pros:
1. This technology lets people move away from controllers and other physical input devices, which opens up many possibilities. It could have a large impact on areas such as virtual reality (VR) and augmented reality (AR), giving users a more immersive experience since they are not holding onto any physical input device.
2. The physical footprint of the technology is tiny. (See attached photos for reference.) This makes it a good candidate for integration into wearables such as smartwatches, as well as smartphones. With its small size, it can also be embedded into non-wearable devices or even everyday household objects.
Cons:
1. Hand and finger gestures naturally vary between individuals. It will be a challenge for Soli to infer the intent behind the same gesture performed differently by different people. It should also take into account users with hand and/or finger disabilities.
2. Physical mediums such as controllers have been our means of interaction for a long time. The implementation of Soli would remove that physical medium from our interaction with the technological environment. As this is still a new, developing technology, its effect on the user's experience of interaction is unknown and can only be assessed in the long run.
Suggestion for alternate use and/or modification
Though Soli is still at an experimental stage of development, the small size and flexibility of the technology offer plenty of possibilities. I believe that as time and technology progress, it will be integrated into our technological environment thanks to its favorable interactive characteristics. With ATAP's plan to offer a development kit, it will be up to the collective effort of ATAP and developers to transform the idea into a finished product and bring it into our homes.
This is the video documentation of our final project presentation. We were glad that it evoked responses from the people watching the interaction between the installation and its participant: many took videos and photos of the participant's actions. This was the ideal scenario we wanted to achieve, with one layer of interaction between the installation and the participant, and another between the bystanders and the participant.
Memories of Sound is an interactive sound installation that explores the role of sound in triggering human memories and experiences. We hope to understand and explore the connection that humans have with sound.
The users wear colored bands on their hands, which serve as the trigger points of the interaction. The movements of the colored bands are detected by an external camera linked to the Max 7 patch, which prompts changes in the audio output supplied to the users through headphones. The changes in audio allow the users to experience different sound contexts, which we hope will trigger their personal interpretations of the sounds they hear.
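The patch itself is visual, but the logic it implements can be sketched in ordinary code. The Python sketch below is not our actual Max patch; the tiny synthetic frame, the tolerance value, and the zone names are illustrative assumptions. It shows the core idea: find the pixels matching a band's color, take their centroid, and map the centroid's horizontal position to a sound cue.

```python
# Sketch of the colour-tracking logic our Max 7 patch performs visually.
# The synthetic 4x4 frame and the zone names are illustrative only.

def track_band(frame, target, tolerance=30):
    """Return the (x, y) centroid of pixels close to `target` RGB, or None."""
    xs, ys = [], []
    tr, tg, tb = target
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - tr) <= tolerance and abs(g - tg) <= tolerance
                    and abs(b - tb) <= tolerance):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # band not visible in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def zone_for(centroid, frame_width):
    """Map the band's horizontal position to a sound zone."""
    if centroid is None:
        return "silence"
    x, _ = centroid
    return "sound_left" if x < frame_width / 2 else "sound_right"

# A 4x4 frame: black everywhere except a red band in the top-right corner.
K, R = (0, 0, 0), (255, 0, 0)
frame = [
    [K, K, R, R],
    [K, K, R, R],
    [K, K, K, K],
    [K, K, K, K],
]
c = track_band(frame, target=(255, 0, 0))
print(c)                           # (2.5, 0.5)
print(zone_for(c, frame_width=4))  # sound_right
```

In the real installation the camera supplies the frames and the zone decision drives which audio file plays, but the detect-locate-map chain is the same.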
We tried testing our interactive installation at the concrete wall. Initially we used light sticks as the light source: we took screenshots of the light sticks, sampled their color values in Photoshop, and entered those values into Max so it could detect the colors. However, once we were in the open area, the camera could not detect the light sticks because the ambient light affected the readings. In the end, we used colored paper instead.
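The problem we hit is a known weakness of matching raw RGB values: ambient light changes brightness, which shifts all three channels at once, while the hue of the colour stays nearly constant. A small Python sketch using the standard-library colorsys module makes this concrete (the specific colour values are illustrative, not the ones we sampled in Photoshop): halving a colour's brightness moves it a long way in RGB space, but leaves its hue unchanged.

```python
import colorsys

def hue(r, g, b):
    """Hue in [0, 1) of an RGB colour with 0-255 channels."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h

bright = (200, 40, 40)  # e.g. a red light stick indoors (illustrative values)
dim = (100, 20, 20)     # the same red at half brightness, washed out by daylight

# Euclidean distance in RGB space: large, so a fixed RGB tolerance match fails.
rgb_dist = sum((a - b) ** 2 for a, b in zip(bright, dim)) ** 0.5
print(round(rgb_dist, 1))  # 103.9

# Hue difference: zero, so a hue-based match would still succeed.
print(abs(hue(*bright) - hue(*dim)))  # 0.0
```

Matching on hue (or simply using a matte colored surface, as we ended up doing with the paper) is therefore much more robust outdoors than matching exact RGB values sampled indoors.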
It was fun testing out the installation ourselves. Through the testing we determined the optimum distance a person had to stand from the camera for the patch to work. We also tested different combinations of sounds to understand how they would affect the user trying the installation. We just hope it will not rain on the day of the final presentation.
The following are screenshots of the patch we built for our interactive installation.
The following is the video documentation of us trying out the Max patch. We used light sticks and several colored materials to test out the patch.
The main issue with our patch was the webcam failing to detect the colors at times, which was frustrating because we could not always tell whether the patch or the camera was at fault. To resolve this, we borrowed a more reliable external webcam, so the color detection is now more consistent.
For Lozano-Hemmer's Shadow Boxes patch, I played around with the values to create different effects in the visual output. I combined it with a cv.jit.centroids patch to add sound effects to the overall patch. The following is the documentation of the combined patch. Enjoy! 😀
For the face tracking patch, I learnt how to map Will Smith's face on top of mine in the visual output. I added a video velocity effect with cv.jit.HSflow to see how it would work with the face tracking patch, and it created some beautiful, interactive effects. The following is the documentation of the combined patch. Enjoy! 😀
For the photo booth tutorial, I learned how to generate voice commands that guide the user into position so the webcam can take a screenshot of their face. It was a complicated patch, and I had a hard time getting it to work. Once it worked, I substituted the sounds with others to create funny effects. I had fun building the patch and getting it to run. The following is the documentation of my patch. Enjoy! 😀
For our final project, we decided to move away from the idea of an interactive installation depicting the experience of walking through a cave in a dark room. We felt the cave experience was clichéd and did not add meaning to our project of using sound to create an experience.
Instead, we decided to experiment with using sounds to create a narrative experience for users. For example, the sound of wind can mean a storm is approaching, or it can mean a breeze from the sea; add another layer of sound to the context and the meaning changes entirely. We feel it will be interesting to see how people react to different sounds and how the sounds make them feel.
Therefore, we decided to experiment with different sounds to create different contexts for the users. Users experiencing our installation will be blindfolded so that the impact of the sound is enhanced, and they will wear headphones to block out ambient sound. This lets them experience only the sound and nothing else.
After looking around, we decided to use the space at the concrete wall in ADM for our installation. The uniformly colored wall will serve as an excellent background for the color tracking in our Max patch. The open space should also draw interest from passers-by while users are interacting with the installation. We also tested the headphone volume needed to mask the surrounding sound.