Director's Pool
Unifying Image Synthesis and Analysis on a More Solid Scientific Basis

Donald Greenberg, Cornell University, Henry Fuchs, University of North Carolina at Chapel Hill, Ruzena Bajcsy, University of Pennsylvania


We propose a continuing collaboration between the research groups of Don Greenberg, Ruzena Bajcsy, and Henry Fuchs as part of the Center's overall collaborative telepresence project. The purpose of the collaboration is twofold: to unify efforts in 3D image synthesis and analysis, and to place the overall work on a more solid scientific footing.

We propose building a conceptual image synthesis and analysis framework around Cornell's Light Measurement Laboratory and physically-based rendering environment. A cornerstone of this framework is Cornell's existing capability to generate synthetic images that precisely match photographs of the same controlled environment. The key idea is that such a framework can unify image synthesis and analysis on a more solid scientific basis. Given accurate geometry, lighting, and BRDF information, one can synthesize realistic images of the scene. Conversely, given realistic images along with the lighting and BRDF data, one should be able to accurately extract the scene geometry. Furthermore, the extracted geometry could be combined with the lighting and BRDF information to reconstruct images of the scene using, for example, image-based rendering techniques.
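The forward (synthesis) direction can be illustrated with a minimal sketch: given a surface normal (geometry), a light direction and radiance (lighting), and a diffuse albedo (a one-parameter BRDF), one term of the reflected radiance follows directly. The Lambertian model and the function names below are illustrative assumptions, not Cornell's actual rendering code.

```python
import numpy as np

def lambertian_brdf(albedo):
    """Constant BRDF of an ideal diffuse surface: rho / pi."""
    return albedo / np.pi

def reflected_radiance(normal, light_dir, light_radiance, albedo):
    """Radiance leaving a surface point under a single distant light:
    L_o = f_r * L_i * max(0, n . l) -- one term of the rendering equation."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    cos_theta = max(0.0, float(n @ l))
    return lambertian_brdf(albedo) * light_radiance * cos_theta

# A horizontal surface lit from directly above (cos_theta = 1):
L = reflected_radiance(np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 1.0]),
                       light_radiance=1.0, albedo=0.5)
```

With richer BRDF models and global illumination, the same inputs (geometry, lighting, reflectance) drive the full physically-based renderer; the analysis direction inverts this relationship.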

With this framework we can quantitatively evaluate a variety of image synthesis, analysis, and reconstruction approaches, selectively replacing or adjusting them and then reevaluating the results by measuring how far the computed images deviate from the known "gold-standard" photographs. For example, we can evaluate geometry-capture approaches because we would know the actual scene geometry. We can quantify subsequent scene reconstruction approaches because we would have access to the gold-standard photographs, or at least to near-gold-standard images (realistic synthetic images rather than actual photographs). We believe that a wide variety of researchers would benefit both from this approach and from the gold and near-gold data.
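As one concrete distance measure (an assumed choice for illustration, not one prescribed by the proposal), a root-mean-square pixel error between a computed image and its gold-standard photograph could serve as the evaluation score:

```python
import numpy as np

def rms_error(computed, gold):
    """Root-mean-square pixel difference between a computed image and
    its gold-standard photograph (same resolution, same radiometric units)."""
    computed = np.asarray(computed, dtype=float)
    gold = np.asarray(gold, dtype=float)
    return float(np.sqrt(np.mean((computed - gold) ** 2)))

# A rendering that is uniformly 0.1 units brighter than the photo:
err = rms_error(np.full((4, 4), 0.1), np.zeros((4, 4)))  # ~ 0.1
```

Perceptually weighted metrics could be swapped in behind the same interface without changing the surrounding evaluation loop.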


We could evaluate the effectiveness of image-based rendering for scene reconstruction in terms of a user's sense of immersion, as opposed to through-the-window realism. We can vary the number of cameras, real and/or simulated, as well as the number of correspondence points used.

In addition, with an eye toward real-time scene reconstruction, we will experiment with incremental reconstruction techniques. For example, the geometry extraction at each step could be based on the results from the previous step.
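A minimal sketch of that incremental idea, with a hypothetical `extract_geometry` callable that accepts the previous step's result as a prior (the interface is assumed for illustration):

```python
def incremental_reconstruction(frames, extract_geometry, initial=None):
    """Seed each frame's geometry extraction with the previous result,
    so per-step work can shrink toward real-time rates.

    extract_geometry is a hypothetical callable taking (frame, prior=...);
    with prior=None it performs a full from-scratch extraction."""
    geometry = initial
    history = []
    for frame in frames:
        geometry = extract_geometry(frame, prior=geometry)
        history.append(geometry)
    return history
```

Any of the geometry-extraction techniques under evaluation could be plugged in as `extract_geometry`, which lets the framework compare incremental against from-scratch variants of the same method.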

We propose continuing our collaborative effort to:

  1. synthesize a collection of Cornell cube images and compare them with photographs (this is partially completed),
  2. extract the scene geometry using the (known) lighting, BRDF, and possibly previous scene geometry information,
  3. reconstruct new images from different viewpoints using image warping, and then
  4. compare these reconstructions with the corresponding near-gold Cornell synthesized images.
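The four steps above can be sketched as a single closed-loop trial. The four callables are hypothetical stand-ins for each lab's tools, and the `scene` dictionary layout is an assumption made for this sketch:

```python
def closed_loop_trial(scene, views, synthesize, extract_geometry, warp, compare):
    """One pass of the proposed four-step loop.

    synthesize:       step 1 (Cornell, physically-based rendering)
    extract_geometry: step 2 (Penn, scene analysis)
    warp:             step 3 (UNC, image-based rendering)
    compare:          step 4 (e.g. a pixel-error metric)

    `scene` is assumed to carry "lighting" and "brdf" entries."""
    gold = {v: synthesize(scene, v) for v in views}                      # step 1
    geometry = extract_geometry(gold, scene["lighting"], scene["brdf"])  # step 2
    errors = {}
    for v in views:
        others = {u: img for u, img in gold.items() if u != v}           # hold out v
        reconstruction = warp(others, geometry, v)                       # step 3
        errors[v] = compare(reconstruction, gold[v])                     # step 4
    return errors
```

Holding out each view in turn means every reconstruction is scored against a near-gold image that the warping step never saw.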

Frameworks and Data

To date, we have extended our three labs' experimental frameworks and prepared data for this collaboration in the following areas:




Individual Site Tasks and Benefits

Cornell will make available the Cornell cube data, photographs, geometry, BRDF, lighting, and view-dependent image synthesis data, and will assist with the comparison of original and reconstructed images. We believe that the results may provide evidence of the need for accurate scene interpretation, as well as feedback, from a reconstruction perspective, on the fidelity of those interpretations.

We propose that the University of Pennsylvania extract the necessary geometry from real and/or synthesized Cornell images, making use of a reasonable set of partial scene information, e.g., the geometry of the walls and/or the lighting and material characteristics of the environment. Our collaboration will provide Bajcsy et al. with a measure of their results against ground truth (for scenes where the geometry is known).

Finally, we propose that North Carolina synthesize the necessary images, coordinate with Bajcsy's lab to obtain the necessary scene geometry, reconstruct the original images using image-based rendering techniques, and then (with the assistance of the Cornell lab) compare the reconstructed images with the originals. This should provide both Cornell and UNC with an opportunity to evaluate various image-based rendering tradeoffs as they affect fidelity and telepresence.
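One way the warping step might look, assuming a pinhole camera model and per-pixel depth. This is a sketch of generic 3D reprojection (one pixel at a time), not UNC's actual pipeline; the pose convention below is an assumption:

```python
import numpy as np

def reproject(u, v, depth, K, pose_src, pose_dst):
    """Forward-warp one pixel: lift (u, v, depth) into the source
    camera's 3D frame, move it into the target camera's frame, and
    project it back to pixel coordinates.

    K is a 3x3 pinhole intrinsic matrix; poses are 4x4
    world-from-camera matrices (assumed convention)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_src = ray * depth                          # point in source camera frame
    p_world = pose_src @ np.append(p_src, 1.0)   # into world coordinates
    p_dst = np.linalg.inv(pose_dst) @ p_world    # into target camera frame
    uvw = K @ p_dst[:3]
    return uvw[:2] / uvw[2]                      # pixel in the target image

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# With identical cameras the pixel maps to itself:
uv = reproject(100, 50, 2.0, K, np.eye(4), np.eye(4))  # -> [100., 50.]
```

Within the framework, the depth comes from the extracted geometry, so errors in the analysis step show up directly as warping error against the gold-standard images.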

Strong Points

We believe that continued collaboration between the research groups of Don Greenberg, Ruzena Bajcsy, and Henry Fuchs can substantially advance the state of the art in scene reconstruction from captured images. Individual strengths of the labs include:

Taken together, the strengths of the individual groups can be leveraged in developing a unique environment for the long-term investigation of image synthesis and image analysis problems. In particular we envision working together to create a closed-loop experimental environment that facilitates controlled quantitative comparisons of a variety of synthesis and analysis techniques.

If the driving problem in graphics for the last 25 years has been generating images as realistic as photographs of scenes, the driving problem for the next 25 may well be giving users the experience of telepresence. This research framework may help make that dream a reality.
