Rendering Research

[Images: The Actual Cornell Box; The Simulated Cornell Box]

- Rendering Overview
- Physically-Based Rendering
  - Cornell Light Measurement Laboratory
  - Cornell Box
  - Light Reflection Models
  - Light Transport
  - Perception: Tone Reproduction Operator
- Scene Acquisition
  - 3D Scene Acquisition
- Image-Based Rendering
  - Structured Light
  - Model-Based Recognition
  - Plenoptic Modeling
  - Post-Rendering Warp

Rendering Overview

Our primary focus during the past six years has been to develop physically-based lighting models and perceptually based rendering procedures for computer graphics to produce synthetic images that are visually and measurably indistinguishable from real-world images. Physical simulation fidelity is of primary concern.

For several decades now, computer graphics simulations have been used for a wide range of tasks such as pilot training, automotive design, and architectural walkthroughs. The entertainment industry has developed techniques for creating startling special effects and realistic simulations. Even virtual reality games use convincing imagery with great success. But are these images correct? Would they accurately represent the scene if the environment actually existed? In general, the answer is no, although the effects are appealing because the images are believable.

If we can generate simulations that are guaranteed to be correct, they can then be used in a predictive manner. This major paradigm shift will make it possible to use computer graphics algorithms for testing and developing printing technologies, photographic image capture, the design of display devices, and algorithmic development in image processing, robotics and machine vision.

However, simulations used for prediction must be provably correct. Fidelity is the key. This difficult task requires a major multidisciplinary effort among physicists, computer scientists, and perception psychologists. Unfortunately, very little work has been done to date in correlating the results of computer graphics simulations with real scenes. Now, with more accurate image acquisition and measurement devices available, and with increased computer processing power, these comparisons can at last be made.

Global Illumination Research

Over the past two years the Center has articulated and refined a framework for global illumination research that will be presented in a special SIGGRAPH session this August. This framework subdivides the rendering system into three parts: the local light reflection model, the energy-transport simulation, and the visual display algorithms. The first two parts are physically based and the last is perceptually based.
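
The three-part pipeline above can be sketched in miniature. The following Python sketch uses hypothetical names and deliberately toy stand-ins (a constant Lambertian BRDF, a one-patch radiance fixed point, and a simple L/(1+L) tone curve); it illustrates how the physically based stages feed the perceptually based display stage, not the Center's actual implementation.

```python
import math

def lambertian_brdf(albedo=0.75):
    """Local light reflection model: a constant Lambertian BRDF
    (albedo / pi), standing in for a measured reflection model."""
    return albedo / math.pi

def transport(emitted, reflectance, bounces=3):
    """Energy-transport simulation: iterate the one-patch radiance
    balance L = E + R * L, a toy stand-in for global illumination."""
    radiance = emitted
    for _ in range(bounces):
        radiance = emitted + reflectance * radiance
    return radiance

def tone_map(luminance, white=1.0):
    """Visual display algorithm: a simple global tone-reproduction
    operator, L / (1 + L), mapping scene radiance into [0, 1)."""
    scaled = luminance / white
    return scaled / (1.0 + scaled)

# Chain the three stages for a single diffuse patch.
reflectance = lambertian_brdf() * math.pi   # directional-hemispherical albedo
radiance = transport(emitted=1.0, reflectance=reflectance)
display_value = tone_map(radiance)
print(round(display_value, 3))              # -> 0.732
```

The key structural point is the clean separation: the reflection model and transport step work in physical radiometric units, and only the final display stage introduces perceptual considerations.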

Inverse Rendering and Image-Based Rendering

While the "direct" problem of image synthesis continues to dominate computer graphics, it has become increasingly clear that "inverse" problems are also vitally important. In particular, the need to acquire a detailed 3D model of an existing object arises naturally in several contexts, such as augmented reality, in which synthetic imagery is to be fused seamlessly with images of actual scenes. Doing this requires knowledge of scene geometry, materials, and possibly illumination, any of which may need to be inferred from the images themselves. This problem, called "inverting the rendering equation" within graphics, corresponds exactly to the class of problems central to computer vision.
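
For reference, the rendering equation being "inverted" here is commonly written (following Kajiya's 1986 formulation) as:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\, d\omega_i
```

where L_o is the outgoing radiance at surface point x in direction omega_o, L_e is emitted radiance, f_r is the BRDF, and the integral gathers incident radiance L_i over the hemisphere Omega about the surface normal n. The direct problem evaluates L_o given geometry, materials, and lights; the inverse problems above recover the BRDF f_r (materials), the geometry (x, n), or the illumination terms from observed images of L_o.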

The recovery of shape and material information from images is extremely challenging and is far from being solved in complete generality. However, we are actively pursuing a number of approaches that promise to be highly useful, albeit not completely general.

Rendering Bibliography
