Because we are designed to see things from our own perspective, it can be difficult to perceive ourselves as nodes in a larger system. Our project aims to evoke an awareness of the interstice between the individual and the collective perspective, particularly in terms of sound.
This wearable thus allows the user to visualise the transforming soundscape around them (and, by extension, their impact on said soundscape). This is done through LEDs which change hue and brightness depending on the surrounding soundscape of both past and present, as picked up by a microphone. In doing so, the user becomes aware of the gap between their sound as their own, versus as part of an environment.
Incidentally, the form of the wearable is also meant to promote portability, such that users can freely roam to acquire different inputs.
More in-depth details can be found in our previous posts:
- Initial Sketches: W3 Ideation (EC) | W3 Ideation (Liz)
- Body-Storming Lo-fi: W4 Body Storming
- Reiteration of design: W5 Prototype
- Beta version interactive prototype: W7 Midterm
Here is a condensed summary:
The W5 prototype featured the actual shape of the project (as worn on the arm), while the W7 prototype featured the actual code of the project, as shown in the following videos:
Since then, we’ve made an array to store previous inputs, as per user feedback during the midterm presentation:
What are the changes you have made to your project since your initial sketches?
EC: I think the changes ultimately consist of message and, by extension, form. (I work on a form-follows-function basis.)
Message, in that it was originally tackling the interstice between the viewer’s individual conversation and their collective environment’s conversations. The term “conversation” in itself forces two necessary conditions in the project’s manifestation: multiple participants and speech. The form would then have to represent these two aspects such that the participant can perceive the gap.
The reason for the change mostly boils down to a lack of engagement. There was an assumption that the viewer’s individual conversation is something they experience directly, so we only needed to show the collective environment’s conversations. Consequently, the form was poorly suited to interactivity, since it only required a passive collective. The individual may be part of the collective, but this nevertheless undermines the level of engagement they might feel, lacking active participation.
Thus, the message changed, as mentioned above, to fit a form better suited for direct participation.
Liz: The shape our project takes has evolved many times: first a sort of board for text to appear on, then a hat, then a glove, though the main idea was still portability, to sample as many inputs as possible. The output has likewise changed: it was originally supposed to be sound to text, then sound to colour with a reaction round. However, we reduced it to sound to colour, and finally evolved it to ‘record’ the colour of various seconds/minutes in time to show the idea of interstices.
What is some user feedback you have incorporated into the reiteration of your project from the body-storming and mid-term user testing sessions?
EC: Previous changes made based on the body-storming session can be found here. (In essence, changing the form to be more effective, more welcoming, and more encouraging of shared than individual experiences.)
The most worrying and consistent feedback was on the clarity of the message. Everyone thus tried to solve this problem by suggesting various possible alternative messages and forms, like the individual versus their peers, an interactive space, or roles in conversations. While they were all interesting, we decided to go with the aforementioned message because it best suits what we currently have, as well as our desire for a portable wearable which encourages moving around to gather environmental input.
As users have mentioned, it would probably be more effective if there were more sensors and actuators, such that the effect can be more clearly visualised. We will definitely be working on that. (I also think the problem arises partially from the fact that tests have been conducted in relatively controlled environments, such that the “collective” isn’t as evident. That may be a consideration for future tests.)
Another interesting idea which we plan to incorporate is a timeline of changes, which will again contribute to a clearer visualisation of the individual effect on the collective. It’s hard to visualise without some point of temporal comparison.
Liz: The meaning behind it is still very hard to grasp, since we did not record the gaps between sounds in a very explicit manner, and hence did not show the ‘gaps’ between one person’s sound and others’/the background. But other than that, I think the idea of using the work is quite straightforward.
Where do you think your interactive project will fall on the continuum of interactivity? Why do you think this is the most appropriate mode of interaction for your participants or audience?
EC: Maybe 75%? There are two main modes of interaction: emitting sound and moving about. Sound directly allows for individual input, while movement allows for different environmental inputs. The third mode is time, where a timeline of changes creates a more diverse output set. This, combined with the fact that the code makes it difficult to replicate the exact same output, makes the piece pretty interactive. It loses marks in that it ignores sound quality and has only one kind of actuator (i.e. the LEDs), such that the possible outputs are somewhat limited.
Liz: On a scale of one to ten, I believe a 7, because the project depends on a person’s audio input to cause a change in the LEDs. The idea we are going for is to show the gaps between the sound in the environment and the person themselves, so in order to do that, one must interact to change it. The most appropriate mode of interaction in our case would thus be the use of sound.
Apart from responding to the user, does your interactive piece include elements where the content changes over the amount of time your user has been engaged?
EC: After the midterm presentation, yep! We decided that it’s necessary to provide points of comparison across time such that participants can properly visualise the impact of their sound. So, maybe something like: LED 1 = 3s ago, LED 2 = 2s ago, LED 3 = 1s ago, etc. It’s more of a constant correlation than a functionality that increases or decreases as time passes, though.
Liz: Yes, it does. The idea is to have a record of the previous inputs for the audience to see the change in the ‘environment’ sound before and after they have interacted with it. So there might be gaps where no one ‘actively’ interacts with the work, but there will still be an output in the form of environmental noise.
Based on the diagram above, which characteristics does your interactive project fall under? Explain why these characteristics can be used to describe your project.
EC: For classification, I think it falls under everything, but ultimately under the user being responsible for all events? It’s up to the user what they want to be. For example, they could go to a quiet area and become the singularly valued collective voice. Or they could go to a noisy area and stay silent, becoming unnecessary. Or go to a noisy area and speak, becoming one of many. (The limited role is mostly entrenched in the fact that the possible inputs and outputs have already been predetermined.)
For characteristics, I would say it’s… an intuitive selection? It’s natural to make sound, and natural to notice there’s some kind of change. But there’s also some element of the experience being ‘monitored and used virtually’, and of a ‘parallel world’, since there’s a simultaneous experience between the LED display and the real world?
I’d say the structure is open, since the user is free to run around and get different arrays of inputs, although there’s some element of feedback about location (similar soundscapes will give you similar results).
Liz: forgot this question existed and will do it later