RM 4 – Opto Remake

Setting up Raspberry Pi

As it was my first time using a Raspberry Pi and servos, it took some time for me to get the camera working with cv2 in PyCharm. I enabled the camera in the command prompt and could get it working via the command prompt and the picamera module; however, errors (especially ones relating to GStreamer) popped up when using cv2.

It turned out to be an index error in cv2.VideoCapture; I just had to change the index to -1. I also decided to make use of Wekinator to control the output, as we would most likely be using it for our final project as well. Wekinator's output range is 0-1 while the servo angle runs from 0 to 180 degrees, so the value is rescaled with a scaleBetween function.
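
A minimal sketch of that mapping, assuming a plain linear rescale (the parameter names are my own; the original scaleBetween may differ):

```python
import cv2

# Index -1 lets OpenCV pick the first available camera device,
# which resolved the VideoCapture index error on the Pi.
cap = cv2.VideoCapture(-1)

def scaleBetween(value, min_allowed, max_allowed, vmin, vmax):
    # Linearly rescale value from [vmin, vmax] to [min_allowed, max_allowed].
    return (max_allowed - min_allowed) * (value - vmin) / (vmax - vmin) + min_allowed

# Wekinator outputs 0-1; the servo expects an angle of 0-180 degrees.
angle = scaleBetween(0.5, 0, 180, 0, 1)  # -> 90.0
```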

With reference to RM3, I changed the script sending to Wekinator to detect the face, and changed the filter_handle function receiving from Wekinator to allow float output. Initially I sent only point 29/30 (which sits in the middle of the face) to Wekinator, but I ended up going back to sending all the points, as that worked better.
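
Sending all the points looks roughly like this — a sketch assuming the python-osc library, dlib's 68-point predictor, and Wekinator's default input address (/wek/inputs on port 6448); send_landmarks is a hypothetical helper, not the script's actual function:

```python
from pythonosc import udp_client

# Wekinator listens for inputs on port 6448 at /wek/inputs by default.
client = udp_client.SimpleUDPClient("127.0.0.1", 6448)

def send_landmarks(shape):
    # Flatten all 68 dlib landmarks into one list of floats,
    # instead of sending just the middle point (29/30).
    inputs = []
    for i in range(68):
        pt = shape.part(i)
        inputs.extend([float(pt.x), float(pt.y)])
    client.send_message("/wek/inputs", inputs)
```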

Because Wekinator was unstable, the motor ended up jittering quite a bit in place. This reminded me of a rabbit sniffing around for food, so I created a small bunny mascot with moldable plastic.

Video Demo with Wekinator

RM3 – Cheese Remake

I initially had trouble installing dlib, the package needed for Python. Nothing worked for me: reinstalling Python and PyCharm, downloading the package online and compiling it with the provided setup.py from the command prompt, even installing it through a .whl file. What finally worked was installing it from Visual Studio's Developer Command Prompt.

As communication between Wekinator and Python was new to me, I played around with Wekinator and its different models. I combined both of the given Python scripts into one and felt that the all-continuous model worked well in displaying and giving the output.
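
Receiving the continuous outputs back in Python can look something like this — a sketch assuming python-osc and Wekinator's default output address (/wek/outputs on port 12000):

```python
from pythonosc import dispatcher, osc_server

def handle_output(address, *args):
    # Each continuous output arrives as a float, typically in the 0-1 range.
    print(address, args)

disp = dispatcher.Dispatcher()
disp.map("/wek/outputs", handle_output)

server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 12000), disp)
server.serve_forever()
```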

Video Demo

I then decided to test out other gestures (head turning) in Wekinator, as well as feeding the output into Processing. Depending on the state (1 or 2), a small ball moves in either direction along the x-axis. Wekinator's classifier model worked well for this.
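
The ball logic itself is tiny; here is a Python stand-in for the Processing sketch (which class maps to which direction is my guess):

```python
from pythonosc import dispatcher, osc_server

x = 0.0  # ball position along the x-axis

def on_class(address, *args):
    global x
    state = int(args[0])          # Wekinator classifier output: 1 or 2
    x += 5 if state == 1 else -5  # move one way or the other per state
    print("x =", x)

disp = dispatcher.Dispatcher()
disp.map("/wek/outputs", on_class)
osc_server.ThreadingOSCUDPServer(("127.0.0.1", 12000), disp).serve_forever()
```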

Video Demo with Processing

RM2 – Graffiti Remake

Attempt with Python

I experimented with the simple blob detection, especially its characteristics. At first I assumed the characteristics were responsible for creating the shapes on the canvas; I soon learned they instead filter which light sources, or blobs, are kept. After filtering, the blobs are drawn over with circles (sized in response to the blobs) using drawKeypoints.

I ended up playing around with the threshold values and with the size and colour of the circles drawn.
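
For reference, this is roughly how those characteristics are set with OpenCV's SimpleBlobDetector (the parameter values here are illustrative, not the ones from the given script):

```python
import cv2
import numpy as np

params = cv2.SimpleBlobDetector_Params()
# The "characteristics": they filter which blobs are kept;
# they do not define the shapes drawn on the canvas.
params.minThreshold = 10
params.maxThreshold = 200
params.filterByArea = True
params.minArea = 100
params.filterByColor = True
params.blobColor = 255  # keep bright blobs, i.e. the light source

detector = cv2.SimpleBlobDetector_create(params)

frame = cv2.imread("frame.png")  # placeholder input image
keypoints = detector.detect(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

# DRAW_RICH_KEYPOINTS scales each circle with the size of its blob.
canvas = cv2.drawKeypoints(frame, keypoints, np.array([]), (0, 0, 255),
                           cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
```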

Video Demo

Further implementations

Audrey and I decided to make further improvements by experimenting with background subtraction (frame differencing, sketched below) alongside the blob tracking, and by implementing a clear function to wipe the canvas when needed.
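
A minimal sketch of the frame-difference step, assuming a plain cv2.absdiff against a captured background (the threshold value is illustrative):

```python
import cv2

cap = cv2.VideoCapture(0)
_, background = cap.read()  # first frame as the reference background
bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Anything that differs from the background becomes white after
    # thresholding: the "white difference blobs" the detector then filters.
    diff = cv2.absdiff(bg_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("difference", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```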

Initially, we thought of using time (a countdown, or clearing after a certain period) to clear the canvas, but this proved not to be user friendly. We instead assigned the right mouse click to set the addWeighted alpha to 0, clearing the canvas and then recapturing the webcam image as the canvas background.
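
How the clear might be wired up — a sketch assuming the blend in question is cv2.addWeighted and using an OpenCV mouse callback:

```python
import cv2

cap = cv2.VideoCapture(0)
_, canvas = cap.read()  # the canvas starts as a webcam snapshot

def on_mouse(event, x, y, flags, param):
    global canvas
    if event == cv2.EVENT_RBUTTONDOWN:
        # Blending the old canvas in at alpha 0 keeps only the fresh
        # webcam frame, wiping everything drawn so far.
        _, fresh = cap.read()
        canvas = cv2.addWeighted(canvas, 0.0, fresh, 1.0, 0)

cv2.namedWindow("graffiti")
cv2.setMouseCallback("graffiti", on_mouse)
```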

Finally, we adjusted the blob detection characteristics, mainly the threshold and area, to filter the white difference blobs and draw keypoints over them.

Video Demo 2

RM1 – Mirror Remake

Attempt with Python

With the Python scripts, the emulation's brightness was not working well with my camera resolution, so I used a small block of code to print out what the camera was actually capturing.
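
Something along these lines prints the capture resolution (a sketch assuming cv2's capture properties, not necessarily the exact block I used):

```python
import cv2

cap = cv2.VideoCapture(0)
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
print(width, height)  # e.g. 640.0 480.0, depending on the webcam
```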

I experimented with the width and height of the largest face, which affect the ratio that influences the brightness.
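
A hypothetical version of that ratio-to-brightness mapping (the detection call and the gain factor are assumptions, not the given script's exact code):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def brightness_from_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return 0.0
    # Width and height of the largest face, relative to the frame size.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    ratio = (w * h) / (frame.shape[0] * frame.shape[1])
    # Larger face (closer viewer) -> brighter; 5.0 is an arbitrary gain.
    return min(ratio * 5.0, 1.0)
```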

Video Demo

Attempt with Touch Designer

I attempted to create an emulation with Touch Designer. It KIND OF WORKS? But instead of relating the effect to distance or to the size of the faces, I tried using the difference created by movement on screen. A simple wave of the hand from a distance creates a smaller area of difference compared to doing it up close.

By experimenting with the threshold and a slight delay between images, it can capture a more accurate area of difference. When the area of difference is large enough, it affects the opacity of the colour on screen. Sadly, this only works if the subject is moving.

Video Demo

Attempt with Max MSP

I'm having trouble getting the files to work because I can't access and use the webcam; I will continue working on it and try to solve it. 🙁