An image on the screen follows you whenever you move left or right, responding to your position as you gaze at it.
How It Works
Step 1: Face Tracking
Firstly, face tracking is applied to capture the position of the user's face. jit.grab gets the image from the camera, and cv.jit.faces detects the face and draws a bounding box around it.
Since the central point of the bounding box is needed to identify the position of the face, the box's width is divided by 2. The face tracking runs at a fixed resolution of 320 by 240. If the coordinates are normalized instead, they range from 0 to 1 (where 0.5 is the middle of the frame).
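As a rough sketch of this step outside Max, the center-and-normalize arithmetic looks like the following. The function names are my own, not Jitter objects, and I am assuming the bounding box arrives as a top-left corner plus width and height:

```python
# Sketch of the bounding-box-to-position step, assuming a 320x240 camera
# frame like the one the patch uses. Not Max code; plain Python for clarity.

def face_center(x: int, y: int, w: int, h: int) -> tuple[int, int]:
    """Center of a bounding box given its top-left corner and size."""
    return (x + w // 2, y + h // 2)

def normalize(cx: int, cy: int, frame_w: int = 320, frame_h: int = 240) -> tuple[float, float]:
    """Map pixel coordinates into the 0..1 range (0.5 is the middle)."""
    return (cx / frame_w, cy / frame_h)

# A face box sitting in the middle of the frame normalizes to (0.5, 0.5).
cx, cy = face_center(120, 80, 80, 80)
print(normalize(cx, cy))  # -> (0.5, 0.5)
```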
Step 2: Make the Video/Asset
Then, a video is created as an asset using a playback object. There are various ways to do this, such as playlist, jit.qt.movie, and jit.movie.
Step 3: Convert (x, y) Coordinates to the Frame
The main concept of this assignment: one coordinate gives you a frame. A scale object is therefore used to map the input range of float values to the output range of frame numbers. A slider is placed as well to visualize the values better.
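The mapping Max's scale object performs here can be sketched in a few lines, assuming normalized x values in 0..1 (the names below are illustrative, not from the patch):

```python
# Minimal sketch of the "one coordinate gives you a frame" idea: a linear
# range mapping like Max's [scale] object, clamped to the valid frame range.

def scale(value: float, in_lo: float, in_hi: float,
          out_lo: float, out_hi: float) -> float:
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi]."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def x_to_frame(x: float, total_frames: int) -> int:
    """Normalized x position (0..1) -> frame index (0..total_frames - 1)."""
    frame = scale(x, 0.0, 1.0, 0, total_frames - 1)
    return max(0, min(total_frames - 1, round(frame)))

print(x_to_frame(0.0, 100))  # leftmost position -> first frame
print(x_to_frame(1.0, 100))  # rightmost position -> last frame
```

This also shows why a longer asset feels smoother: with more frames, each small change in x lands on a distinct frame instead of snapping between a few coarse ones.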
The steps for this assignment were quite straightforward: make the asset, apply playback, track the face, and integrate the two (going from (x, y) to a frame). However, the tricky part was converting the coordinate values to a frame number. I realized that the longer the asset is (in terms of frame count), the smoother the result gets.
The only problem I had that I could not solve was making the video stop when (x, y) is (0, 0), which is what the tracker outputs when no face is detected. Otherwise, the frame jumps abruptly back to the first.
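One possible workaround, sketched below, is to treat (0, 0) as a "no face" sentinel and hold the last valid frame instead of letting it reach the mapping. This is my own suggestion, not part of the original patch; in Max, the equivalent would be gating out (0, 0) before it reaches the scale object:

```python
# Sketch of a fix for the (0, 0) jump: hold the last valid frame whenever
# the tracker reports (0, 0), i.e. no face detected. Hypothetical helper,
# not from the original patch.

class FrameHold:
    def __init__(self) -> None:
        self.last_frame = 0  # frame to freeze on while no face is visible

    def update(self, x: float, y: float, total_frames: int) -> int:
        if (x, y) == (0.0, 0.0):
            return self.last_frame  # no face: keep showing the same frame
        self.last_frame = max(0, min(total_frames - 1,
                                     round(x * (total_frames - 1))))
        return self.last_frame
```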
*No worries, no pet was abused in the process!