Here is the re-filming: Click here
Updated as of 29 October 2016
Over the past 2 weeks, we have completed the code for the individual parts of ume. However, here are some concerns:
Light output is not bright enough to be seen under normal daylight, so we are considering changing to WS2812 LEDs. However, we will first test our ultra-bright yellow LEDs before switching to the WS2812. (Just purchased the yellow LEDs today.)
A short pause occurs when sensing continuous motion. We will be re-examining the code to resolve it; see the sketch below.
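We have not settled on the final board or wiring yet, so the following is only a rough Arduino-style sketch of the direction we are leaning towards: drive the WS2812 strip with the Adafruit_NeoPixel library at full brightness, and read the motion sensor without any delay() calls so that continuous motion never stalls the loop. The pin numbers, LED count, 2-second hold time and the PIR-style sensor are all assumptions for illustration, not our actual code.

```cpp
// Rough sketch only: the board, pins, LED count and hold time are assumptions,
// not our final ume code.
#include <Adafruit_NeoPixel.h>

const int MOTION_PIN = 2;    // assumed digital pin for a PIR-style motion sensor
const int LED_PIN    = 6;    // assumed data pin for the WS2812 strip
const int NUM_LEDS   = 12;   // assumed number of LEDs on the strip

Adafruit_NeoPixel strip(NUM_LEDS, LED_PIN, NEO_GRB + NEO_KHZ800);

unsigned long lastMotionMs = 0;   // time of the last detected motion

void setup() {
  pinMode(MOTION_PIN, INPUT);
  strip.begin();
  strip.setBrightness(255);       // full brightness, for visibility in daylight
  strip.show();
}

void loop() {
  // No delay() anywhere, so continuous motion never causes a pause in sensing.
  if (digitalRead(MOTION_PIN) == HIGH) {
    lastMotionMs = millis();
  }

  // Hold the strip on (yellow) for 2 seconds after the last motion, then turn it off.
  bool active = (millis() - lastMotionMs) < 2000UL;
  uint32_t colour = active ? strip.Color(255, 200, 0) : strip.Color(0, 0, 0);
  for (int i = 0; i < NUM_LEDS; i++) {
    strip.setPixelColor(i, colour);
  }
  strip.show();
}
```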
The following week will see us:
1. integrating the different pieces of code into one program
2. continuing to build the ball
3. getting each ume to work properly on its own
4. attempting to connect 2 umes together
We are in the midst of building the physical components of ume, and have decided on using a pre-made hamster ball.
We chose it to house our electronics because it has slots for wires to extend out, and the sizing is optimal. Hamster balls come in a variety of sizes, ranging from 10-14cm. We will be buying one second-hand off Carousell, getting one with an 11cm diameter for a start.
Other than the hamster ball, we also considered getting empty, clear Christmas baubles from Spotlight. However, we opted for the hamster ball as the Christmas baubles were a little too small to contain our breadboard, even though they looked aesthetically more pleasing than a hamster ball.
Final Project by Yi Xian and Tania
Project Brief: A Collection of Consciousness utilises the method of question and answer to seek out one’s subconscious – psychological and emotional state alike. It then shows the participant a visual and auditory representation of his subconscious.
The set-up: a total of 2 screens – one huge projector screen, and a smaller computer screen to look at for the questions. There is also a 3-step ‘dance pad’ for participants to select their answers.
An example of a computer graphic shown on screen:
The ‘dance pad’ was situated right in the centre – enjoying a vantage point of the screen. However, participants had to continually turn their heads sideways to look at the questions.
Initial idea:
Initially, I planned on having the laptop right in front of the screen, but decided against it as it would block the visuals. I also planned on using motion sensing to trigger the selection of choices, but decided against that too – past experience has taught me that different lighting would affect the sensing, and that motion sensing as a whole might not be very accurate. Hence, I decided to use a touch sensor to trigger the feedback instead.
Questions
I also planned to have a total of 20 questions at the start. Later, I scaled it down to 7 questions as the patch was becoming too large. I based the style of my questions on the ISFG Personality test. I also researched the different determinants of one’s consciousness – or, to be more accurate, one’s psychological state – and came up with the following:
Thus, I decided to base my questions on these 3 factors:
1. Mental Wellness
2. Emotional Wellness
3. Intellectual Wellness
I do understand that the limited number of questions might not yield an accurate determination of one’s psychological state, but at present it manages to cover all 3 factors as a whole.
Here are my initially planned 10 ideas:
I planned for the user to finish playing the entire round of questions, with each answer adding its own unique layer onto the graphic – creating a different shape and colour for each player at the end of the game (a rough sketch of this layering idea follows below).
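The layering itself happens inside the Max patch, but the underlying idea can be sketched in ordinary code: each answer nudges a small set of parameters, so every player ends the round with their own shape and colour. The parameter names, ranges and step sizes below are purely illustrative, not the values used in the patch.

```cpp
#include <array>
#include <cstdio>

// Illustrative only: the real layering is done with messages to attrui objects
// inside the Max patch. This just shows the accumulation idea.
struct GraphicState {
    int   shapeIndex = 0;     // e.g. 0 = sphere, 1 = torus, 2 = plane (assumed)
    float hue        = 0.0f;  // 0..1
    float vibration  = 0.1f;  // how strongly the form pulses with the sound
    float speed      = 0.5f;  // how quickly the form moves
};

// Each of the three answer choices adds its own small "layer" of change.
void applyAnswer(GraphicState& g, int answer /* 0, 1 or 2 */) {
    static const std::array<float, 3> hueStep  = {0.05f, 0.12f, 0.20f};
    static const std::array<float, 3> vibeStep = {0.02f, 0.05f, 0.10f};
    g.shapeIndex = (g.shapeIndex + answer) % 3;
    g.hue       += hueStep[answer];
    g.vibration += vibeStep[answer];
    g.speed     += 0.05f * (answer + 1);
}

int main() {
    GraphicState g;
    const int answers[7] = {0, 2, 1, 1, 0, 2, 2};  // one player's run of 7 questions
    for (int a : answers) applyAnswer(g, a);
    std::printf("shape=%d hue=%.2f vibration=%.2f speed=%.2f\n",
                g.shapeIndex, g.hue, g.vibration, g.speed);
    return 0;
}
```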
Technically, I used jit.gl.text and banged the questions (in messages) at intervals.
Here is a small portion of my subpatch for the questions, which were to be shown on the computer screen later during the actual artwork.
Graphics
The changes in shape were made by sending messages (e.g. prepend shape torus) to each individual attrui. In brief, these were altered: the shapes, the colours, how fast or slowly the graphics ‘vibrate’, the range they expand to, and how quickly or slowly they move. The graphics were also rendered and altered in real time.
To create the 3D graphics, jit.gen, jit.gl.gridshape, jit.gl.render, jit.gl.mesh and jit.gl.anim were used. I initially planned to use jit.gl.mesh and plot out my own points to be animated (creating a more abstract/different/random shape), but after spending a total of 5 days trying to figure it out I failed to, so the final project turned out different from what I expected. However, I am still pleased with the final outcome.
Here is my patch for reference.
The graphics were the most challenging portion of my entire project, making or breaking it. Perhaps it was not the most brilliant idea to foray into the untouched lands of OpenGL, but as I have always been interested in 3D graphics as a whole, it became an interesting experience for me.
Sensors/Teabox
3 dance pads were used, allowing participants to send their chosen responses to the questions into Max/MSP. Only 3 options were allowed, as that seemed more intuitive for the average user.
During the actual tryout on Friday, 22 April, I realised that the trigger was far too sensitive, and the questions were run through far too fast. Hence, that afternoon, I added the objects below to slow down the sensing:
The counter value can be changed, depending on how fast or slow you prefer the trigger to be.
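For reference, the throttling idea behind those objects can be written out in plain code: only every Nth raw hit from the pad is passed on to the question logic, so an over-sensitive pad no longer races through the questions. The threshold of 5 below is an illustrative assumption and plays the same role as the counter value mentioned above.

```cpp
#include <cstdio>

// Sketch of the counter idea: only every Nth raw trigger from the pad is let
// through. The threshold plays the same role as the counter value in the patch.
struct CountedTrigger {
    int threshold;   // how many raw hits make one accepted hit
    int count = 0;

    bool accept() {
        if (++count >= threshold) {
            count = 0;
            return true;   // pass this hit on to the question logic
        }
        return false;      // swallow it
    }
};

int main() {
    CountedTrigger trigger{5};   // illustrative: 1 accepted hit per 5 raw hits
    for (int hit = 1; hit <= 12; hit++) {
        if (trigger.accept()) {
            std::printf("advance to next question at raw hit %d\n", hit);
        }
    }
    return 0;
}
```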
Sounds
As the graphic rendering and the sounds were tied to each other, I decided to alter the sounds firstly to give the sense of being enveloped in the whole experience, and also to further strengthen the connection between sound and graphics. As mentioned earlier, the graphics vibrate in accordance with how loud the sound is, in terms of decibels.
A total of 3 sounds were used. The longer the participants play, the ‘richer’ the sound becomes: it grows more shrill (higher pitch) and the speed of the soundtrack decreases, just to name a few changes. The longer you play, the more inorganic and synthetic the sounds become. This ties in with the visuals, which start off as a red, beating object (reminiscent of a heart) but gradually take on inorganic, abstract forms. This is symbolic of the birth of a lifeform – you start off as a beating heart, but life’s experiences gradually shape you into a unique being.
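The exact mapping lives inside the Max patch, but the gist can be sketched as a simple function of elapsed play time and of the measured level. All of the numbers below (the 3-minute ramp, the octave of pitch shift, the -60 dB floor) are illustrative assumptions rather than the values actually used.

```cpp
#include <algorithm>

// Illustrative mapping only: the real mapping lives inside the Max patch.
// The longer the piece has been played, the higher the pitch shift and the
// slower the playback; the graphics vibrate with the measured loudness.
struct SoundParams {
    float pitchShiftSemitones;  // rises over time
    float playbackRate;         // 1.0 = normal speed, falls over time
    float vibration;            // graphics vibration amount, follows loudness
};

SoundParams mapParams(float elapsedSeconds, float levelDb) {
    // Normalise elapsed time over an assumed 3-minute session.
    float t = std::min(elapsedSeconds / 180.0f, 1.0f);
    SoundParams p;
    p.pitchShiftSemitones = 12.0f * t;          // up to one octave higher
    p.playbackRate        = 1.0f - 0.5f * t;    // down to half speed
    // Map an assumed -60..0 dB level onto a 0..1 vibration amount.
    p.vibration = std::clamp((levelDb + 60.0f) / 60.0f, 0.0f, 1.0f);
    return p;
}
```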
Here are the 3 sounds used:
See the final artwork in action:
Just putting it here for recording purposes! Will work further on the project idea.
Idea:
Labyrinth
My idea comprises a physical maze, onto which an image of the user running will be projected. The aim of the game is to catch the doll in question, which will be physically present in the labyrinth and mobile via motors, using sensor-automated responses. Upon reaching the end goal of catching the doll, the doll will stop moving.
Projection mapping will be conducted on the doll in addition to the user’s running figure, to give the doll a living feel (as we project her face and her various emotions) in an attempt to increase the feedback given to the user. The longer the user takes to catch the doll, the more the doll’s face will morph until it is unrecognisable.
The doll’s facial features will be inspired by the Japanese wooden doll.
Projection mapping of the user will comprise stored footage of a human stick figure (the avatar). To trigger movement of the avatar, the user has to step on 4 square boxes, similar to the dance sectors of arcade dance games.
Feedback:
– Find a reason why the doll moves
– Reason why you want to mix the physical & projection together
– Maybe buy a remote-control car and put the doll on it (easier to do than programming it)
– Controlling the maze will be an issue
– Maybe the maze will be projected
– A Sphero ball to replace the doll (SPRK edition); figure out how to access the code via the SDK
– Feedback when touching walls, etc.