At UNC we are pursuing real-time Imperceptible Structured Light (patent pending) scene acquisition techniques for dynamic environments. Although structured light techniques have been used effectively for decades to extract depth from scenes, they have not until now been practical for scenes containing humans, because the changing patterns are too visually disruptive. The new technique minimizes these visual artifacts by projecting, in very rapid succession, the pattern of interest followed by its complement; integrated over even a short interval (say, 10 ms), the two frames appear to humans as a flat field of light. We project these dynamic light patterns briefly into the scene and, with the aid of synchronized cameras, capture and analyze scene images to determine range (depth). We combine the range information with properly registered color images to obtain dense 3D reconstructions, and then merge these reconstructions into our existing shared virtual environment so that they can be displayed at remote sites.
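The pattern-plus-complement idea can be illustrated with a toy numerical sketch (the array sizes and names below are illustrative assumptions, not the UNC implementation): summing a binary pattern frame and its complement yields a uniform field, while differencing the two captured frames recovers the pattern.

```python
import numpy as np

# Toy illustration of the imperceptible structured light principle.
# All sizes and names here are illustrative, not UNC's actual system.
rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=(8, 8))  # binary projected pattern frame
complement = 1 - pattern                   # its complement, projected next

# Integrated over the two frames, every pixel receives the same total
# light, so a human observer perceives a flat field.
integrated = pattern + complement
print(np.all(integrated == 1))  # True: uniform field everywhere

# A camera synchronized to the projector can recover the pattern by
# differencing the two frames: +1 where the pattern was on, -1 where off.
decoded = pattern - complement
print(np.array_equal(decoded == 1, pattern == 1))  # True
```

In a real system the integration happens in the human visual system over a short interval, while the synchronized camera exposes separately for each sub-frame.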
At Caltech we are using similar structured light techniques to obtain range images of a single static object from multiple viewpoints and are working on robust automatic methods to combine that data into a unified surface description. These results will then be applied to the real-time dynamic scene acquisition work at UNC.
The Center has been collaborating with Professor Ruzena Bajcsy at the University of Pennsylvania to develop completely passive image-based methods that do not rely on controlled light or inherent scene textures. Currently we can use UPenn's automated depth-extraction methods to perform non-real-time image-based reconstructions of relatively small scenes or objects. As with the structured light reconstructions, we then merge these reconstructions into our existing shared virtual environment.
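For context, passive two-camera depth extraction ultimately rests on the standard rectified-stereo relation: a feature matched at disparity d between cameras with focal length f (in pixels) and baseline B lies at depth Z = fB/d. The sketch below uses made-up numbers and is not the UPenn system's code:

```python
import numpy as np

# Standard pinhole stereo depth-from-disparity relation, Z = f * B / d.
# The parameter values are illustrative assumptions only.
f = 800.0      # focal length in pixels
B = 0.12       # camera baseline in meters
disparity = np.array([40.0, 20.0, 10.0])  # matched-feature disparities (px)

depth = f * B / disparity  # depth in meters; larger disparity = closer
print(depth)               # [2.4 4.8 9.6]
```

Note the inverse relationship: halving the disparity doubles the recovered depth, which is why depth resolution degrades for distant surfaces.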