For this assignment, we had to turn the emulation patch into a DJ-like interactive detection system, where different sound tracks are played whenever something of high contrast is detected inside one of the specified detection boxes.
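The trigger logic can be sketched in plain Python. This is a hypothetical stand-in for what the Max patch does, not the patch itself: the contrast measure (max minus min brightness), the box layout, and the threshold value are all my own illustrative choices.

```python
def box_contrast(frame, box):
    """frame: 2D list of brightness values 0-255; box: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    region = [frame[y][x] for y in range(y1, y2) for x in range(x1, x2)]
    return max(region) - min(region)

def triggered_tracks(frame, boxes, threshold=128):
    """Return the indices of boxes whose contrast exceeds the threshold;
    each index would select one sound track."""
    return [i for i, box in enumerate(boxes)
            if box_contrast(frame, box) > threshold]

# A flat grey frame with one bright spot inside the first box:
frame = [[20] * 8 for _ in range(8)]
frame[1][1] = 250
boxes = [(0, 0, 4, 4), (4, 4, 8, 8)]
print(triggered_tracks(frame, boxes))  # → [0]
```

Only the first box "fires", because only it contains a high-contrast region; in the patch, that firing is what routes a bang to the matching sound track.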

 

This is my edited draft patch, with eight red detection boxes. The connections for the sound tracks are shown in the lower half of the picture, where specific steps are linked to each output value and a specific sound track is allocated to each box.

This is my edited grab sequence, where the coordinates of each red box are defined (X1 Y1 X2 Y2) together with the pathway for each box.
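A rough picture of what that coordinate table holds, as a Python sketch: each entry pairs one box's (X1, Y1, X2, Y2) corners with the pathway it feeds. The coordinate values and pathway numbers here are made up for illustration, not read from the patch.

```python
# Illustrative stand-in for the grab sequence's box table.
boxes = [
    {"coords": (0,  0,  80, 60), "pathway": 1},
    {"coords": (80, 0, 160, 60), "pathway": 2},
]

for b in boxes:
    x1, y1, x2, y2 = b["coords"]
    print(f"box -> pathway {b['pathway']}, size {x2 - x1}x{y2 - y1}")
```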

This is my PlaySound patch, which holds the playlist and maps each sound track to one box.
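Conceptually the PlaySound mapping is just a lookup from box index to sound file, which can be sketched like this (the file names are placeholders, not the actual tracks in the patch):

```python
# Hypothetical box-to-track mapping, one sound file per detection box.
playlist = {
    0: "track0.wav", 1: "track1.wav", 2: "track2.wav", 3: "track3.wav",
    4: "track4.wav", 5: "track5.wav", 6: "track6.wav", 7: "track7.wav",
}

def track_for_box(box_index):
    """Return the sound file mapped to a detection box, or None."""
    return playlist.get(box_index)

print(track_for_box(3))  # → track3.wav
```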

The completed VEJEE and the video!

Hello! I am back with another assignment. This time we had to do facial detection with a face-swapping filter! Instead of Brad Pitt, I chose Ashley Greene!

This is the picture of her I have chosen.

For step 1, I have removed Brad Pitt and uploaded her image into the patch.

However, because I did not link to the original patch properly, the face filled up the entire window instead of following the detected matrix. After meddling for a while and changing things around, I arrived at the patch below, which handles the facial-detection coordinates correctly. The left side of the picture shows opening the camera and tracking the face position; the right side shows importing her image and placing it on the detected face.

 

The attributes ‘@dstdimstart’ and ‘@dstdimend’ tell the matrix the start and end positions of the destination region it should write into. These attributes can therefore be used to match the location of the image to the x and y coordinates of the detected face.
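A rough Python analogue of what those attributes do: the source image is written only into the destination rectangle whose corners they define, scaled to fit it, so the rectangle can simply be set from the detected face coordinates each frame. This is a minimal sketch of the idea, not how jit.matrix is implemented.

```python
def paste_region(dest, src, dstdimstart, dstdimend):
    """Copy src into dest between dstdimstart (x, y) and dstdimend (x, y),
    with nearest-neighbour scaling so src fills the rectangle."""
    x1, y1 = dstdimstart
    x2, y2 = dstdimend
    for y in range(y1, y2):
        for x in range(x1, x2):
            # Map destination coordinates back into source coordinates,
            # so the image stretches to the size of the face box.
            sy = (y - y1) * len(src) // (y2 - y1)
            sx = (x - x1) * len(src[0]) // (x2 - x1)
            dest[y][x] = src[sy][sx]
    return dest

dest = [[0] * 6 for _ in range(6)]   # stand-in for the output frame
src = [[1, 2], [3, 4]]               # stand-in for the face image
paste_region(dest, src, (2, 2), (4, 4))
print(dest[2][2], dest[3][3])  # → 1 4
```

Everything outside the (2, 2)–(4, 4) rectangle stays untouched, which is exactly why the face image no longer fills the whole window once the destination dimensions are set correctly.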

Step 2:

Step 3:

We can blend the image and the detected face together by using ‘jit.alphablend @mode 1’. To soften the edges, we can use ‘jit.fastblur @mode 4 @range 3’.
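The blend step amounts to weighting each pixel of the face image against the camera frame by a per-pixel alpha value, roughly out = alpha·face + (1 − alpha)·camera. Here is a tiny greyscale sketch of that idea (the frames and alpha values are made up; the real patch works on full video matrices):

```python
def alphablend(camera, face, alpha):
    """Per-pixel blend of two equal-sized greyscale frames by alpha."""
    return [[round(a * f + (1 - a) * c)
             for c, f, a in zip(crow, frow, arow)]
            for crow, frow, arow in zip(camera, face, alpha)]

camera = [[100, 100], [100, 100]]
face   = [[200, 200], [200, 200]]
# Full opacity towards the centre of the face, fading out at the edge,
# which is the kind of soft mask the blur step effectively produces.
alpha  = [[1.0, 0.5], [0.5, 0.0]]
print(alphablend(camera, face, alpha))  # → [[200, 150], [150, 100]]
```

Blurring the alpha mask before blending is what softens the seam: instead of a hard jump from face to camera, the weights ramp down gradually across a few pixels.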

This is the final video!