At the start of the lesson, Professor Biju gave a preamble to help us understand how a camera and a projector work. A camera has a “film back”, an image sensor that captures the image, while a projector is an inverse camera (with its own internal image size) that throws an image out. Keeping the projector's position constant, if we move the screen away from it, the image grows in size because the screen captures a larger cone of projection.
So to keep an image still on a screen regardless of how the “screen” moves, we need a camera and a projector at the exact same position with the same internal image/film-back size, cancelling each other out. So within the software we have a digital camera that sits at the same position as the projector in the physical world.
Once this is done, we need to let the digital camera know the position of the screen in real time. At the motion capture lab this is done through infrared cameras and reflectors. For this to work we need at least 4 cameras (to establish a 3D space) and also at least 3 reflectors. If only two reflectors were used, things would not work, as we wouldn't be able to capture rotation about the y axis. Either that, or we need enough reflectors to form a plane, since a plane lets you capture rotation about all axes.
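The three-reflector rule can be sketched in a few lines: two markers only define a line, and spinning the screen about that line leaves both markers in place, so a third, non-collinear marker is needed to pin down the full plane. A minimal sketch (the function name and coordinates are my own illustration, not the lab's software):

```python
import numpy as np

def plane_from_markers(p1, p2, p3):
    """Return the unit normal of the plane through three marker positions.

    With only two markers, rotating the screen about the line joining
    them leaves both markers in place, so the orientation is ambiguous.
    A third, non-collinear marker determines a unique plane.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # perpendicular to both in-plane edges
    return n / np.linalg.norm(n)

# Three markers lying in the x-y plane give a normal along z.
normal = plane_from_markers([0, 0, 0], [1, 0, 0], [0, 1, 0])
```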
Now once the camera knows the position of the plane, it can calculate and readjust its focal length to match the distance of the screen from the projector, and thus the image stays the same size.
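Under a simple pinhole model, the projected image grows linearly with the screen's distance, so the compensation is just an inverse scale on the digital image. A rough sketch of that idea (function names and the reference-distance setup are my own assumptions, not how any particular software does it):

```python
def apparent_width(source_width, ref_distance, screen_distance):
    """Projected image width grows linearly with distance from the lens
    (simple pinhole / projection-cone model)."""
    return source_width * (screen_distance / ref_distance)

def compensation_scale(ref_distance, screen_distance):
    """Scale to apply to the digital image so the projection keeps the
    same physical size as the screen moves."""
    return ref_distance / screen_distance

# Moving the screen twice as far doubles the raw projected image...
doubled = apparent_width(1.0, 2.0, 4.0)
# ...so shrinking the source by the compensation factor keeps it constant.
constant = apparent_width(1.0 * compensation_scale(2.0, 4.0), 2.0, 4.0)
```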
Now if we were to make public art and we wanted to project something onto people, we couldn't use the same setup we have in the motion capture room, since it is not possible to stick reflectors on every passer-by, and this is where the Kinect comes into play. The Kinect senses depth instead, by projecting a pattern and reading distortions in that pattern to sense where the plane is. It does not require separate reflectors and is thus less intrusive in terms of design.
So how does this relate back to projection mapping? Well, learning about motion capture, you immediately realise that the logic is very similar to projection mapping. Simply by having more points at different depths on the surface, and feeding them to the digital camera so that it can adjust its focal length depending on those points, we will be able to have a projection that is not distorted. Though then the trouble to figure out is how we are going to register the many different points (this is the slow process of adding and clicking points in Green Hippo). I think motion capture is something interesting that we as interactive media students could definitely play with. Off the top of my head, I think it would be super interesting to combine motion capture with dance, but currently, with the little knowledge that I have about motion capture, that is still a far, faaar off dream.
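For a flat surface, that point-registration step boils down to finding a homography: once you've clicked four corresponding corner points, a 3x3 matrix maps the digital image onto the physical surface. A minimal sketch of the standard direct linear transform (DLT); I'm not claiming this is what Green Hippo does internally, and the point coordinates are made up:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    (4+ correspondences) via the standard DLT linear system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = flattened H
    return H / H[2, 2]

def warp_point(H, x, y):
    """Apply the homography to a point (homogeneous divide)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Map the unit square onto a skewed quad, as you would when dragging
# the four corner points of a projection surface into place.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0.2), (2.2, 1.8), (0.1, 2.0)]
H = homography(src, dst)
```

With exactly four point pairs the system has an exact solution; with more clicked points, the same SVD gives a least-squares fit, which is handy when the clicks are a little noisy.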