Following last week’s flowchart, En Cui and I worked on creating a mock-up circuit that follows a segment of it.
Mock-up Model 1
We started with a basic microphone set up.
From here we tested whether we could read the input from the surrounding sounds through the serial monitor, and observe the changes when we spoke into the microphone.
The Serial Monitor showed that the input was either a ‘high’ at 1022/1023, or a ‘low’ at 0.
Conclusion for this segment:
We suspected our microphone was faulty.
Nonetheless we continued; since the microphone was still able to detect sound, we decided it would be good enough for now and we would solve this issue later.
Mock-up Model 2
Subsequently, we built on the first model to include the LED output.
From here the code was expanded to control an RGB LED and to read the frequency and volume of the surrounding environment. Initially, the mapping was fairly arbitrary: for every three-digit frequency, the digit in the hundreds place set the percentage of red, the tens digit the percentage of blue, and the ones digit the percentage of green, and together these made up the colour the LED would produce.
Watch Video at:
The colour of the light bulb was coming out a bit too randomly.
So from there we grouped ranges of frequencies and matched each range to a colour. Subsequently, we mapped the volume to the brightness of the LED.
After a short discussion, En Cui and I decided to combine the ideas of the talking door and the concept of gaps between multiple conversations to create an interactive hat. The hat would produce both a visual output (different coloured LEDs for different pitches and/or volumes) and an audio output whenever someone spoke. It would let out a different sound depending on the pitch and volume it sensed from its surroundings, meaning it would treat the environmental sound as a whole.
Watch the video here:
What did you learn from the process?
From this process we learnt that our concept is hard to connect with the audience, so we should make it more direct. That said, the idea of using the object is fairly simple, as it is what it is, which means a found object is strong enough to get the audience to interact with it without much instruction; the reactions can be learnt along the way. We should also make sure that whatever reaction happens stays within the view of the participant, as the lights are currently on top of the hat. Our project is also very context-driven, as it relies on a crowded, noisy area to link to our concept of the gaps in between conversations.
What surprised you while going through the process?
Shout out to our tester, who was especially cooperative :3c. There was a lot of confusion in trying to link the project to its concept; it was not immediately clear whether it was an individual or a group concept, but I guess that is what happens when you only have one tester and your project responds to them either way. The idea of the hat was portability, but since it reacts to its environment whether or not it is worn, we received comments that we might want to change the shape it takes. We are also worried about how to convince the audience that they can grab the object freely.
How can you apply what you have discovered to the design of your installation?
So we might consider changing the appearance of the artwork. We might also tweak the message a bit, and maybe have multiple small objects instead of one big one, to make it less intimidating. Also, Lei said we can use p5.js to do speech-to-text, so we are kind of bombarded with endless possibilities now lol.