Director's Pool
Single Imaging Point, Multiple Imager, Wide Field of View, Camera System

Henry Fuchs, PI, UNC Chapel Hill, Richard F. Riesenfeld, PI, University of Utah

We are committed to long-term collaboration and to building better tools for more effective communication and presence over distance. Our experience with commercial and experimental systems has shown us what is lacking in current teleconferencing systems. We believe that better presence will make these systems more useful and, therefore, more widely used.

To increase presence, a greater sense of realism is needed. There are three main avenues for creating better presence, on both the sending and receiving sides, in 2D visual teleconferencing systems:

  1. Wide Field of View (WFOV)
  2. Very High Resolution (VHR)
  3. Real Time (RT)

Many systems have combined two of the three qualities above: Omnicam offers WFOV and RT; QuickTime VR offers WFOV and VHR; and multichannel teleconferencing offers VHR and RT. Several systems can supply the receiving end with all of these qualities (Panaram display systems, for example), but no sending end can currently supply the information with all three.

One of the bigger challenges we believe we have solved in a multi-camera WFOV system is achieving a "common center of projection." This makes overlapping images align without the near-far warping problems of systems that lack a common center of projection. We also plan to implement a system in which, instead of the sender having full control over what the team at the other end sees, the receiving end controls its own pan and zoom across the entire seamless image, attending to what is naturally of interest to it rather than what the sender dictates.


The Wide Field of View (WFOV) camera uses several cameras in a single cluster to reconstruct a virtual camera with a field of view of 150 degrees or more. All of the cameras share the same imaging point, made possible by the use of angled mirrors. This allows for a seamless and optically correct image and, with a novel aperture-clipping device, will be correct across the seams. The WFOV virtual camera forms a high-resolution image that is updated at real-time rates.

Several other designs for multiple-camera, wide-field-of-view systems have been attempted, including our own earlier work using multiple cameras with spherical mirrors, but none has the true "common center of projection" that this design provides. This feature greatly enhances applications in videoconferencing, plenoptic modeling, and 3D interactive graphics.


Merging video streams into a single view requires careful engineering and calibration. Simply placing images side by side yields distracting seams. Instead, the views must be carefully unwarped to remove distortions introduced by the machined parts and camera lenses. The cameras must also be aligned to share a common center of projection. If the cameras have different centers of projection, the result cannot be unwarped without depth information. Since 2D image processing is much simpler than depth recovery, the cameras are locked into set positions. When highly concave mirrors are used, the centers of projection of the cameras are close together (offset by roughly half the diameter of the sphere, depending on the off-axis angle) but far enough apart to make 3D measurements and alignments impossible. A variation that forces a common center of projection holds only for a single point in each image plane, due to the lens effect of a sphere and the off-axis mounting required.
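The depth dependence of the seam error can be made concrete with a small calculation. The sketch below uses an assumed 800-pixel focal length and a 17 mm center offset purely for illustration; it shows why misaligned centers of projection cannot be corrected by a single 2D warp, since the required image shift varies with scene depth.

```python
def parallax_shift_px(baseline_mm, depth_mm, focal_px):
    """Image-plane shift (in pixels) of a point at depth_mm when seen from
    two centers of projection separated by baseline_mm, with a focal
    length of focal_px pixels. With a true common center of projection
    (baseline = 0) the shift is zero at every depth, so seams can be
    removed by pure 2D warping with no depth recovery."""
    return focal_px * baseline_mm / depth_mm

# Illustrative numbers only; the focal length is an assumption.
f_px = 800.0
for depth in (500.0, 2000.0, 10000.0):
    print(f"depth {depth / 1000:4.1f} m -> shift "
          f"{parallax_shift_px(17.0, depth, f_px):5.2f} px")
```

A nearby object shifts by tens of pixels while a distant one shifts by one or two, so no single warp can align both at once.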

By using flat primary mirrors, no lens distortion occurs, and the image paths can be folded to form an apparent common center of projection. This is the basis of our design: a system in which six cameras, stacked two high, create a wide-angle viewing system.
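How flat mirrors fold the optical paths can be sketched with basic geometry: reflecting each camera's real center of projection across its mirror plane yields a virtual center, and the mirrors are angled so that all virtual centers coincide. The points and normal below are arbitrary illustration values, not the actual design geometry.

```python
def reflect_point(p, mirror_point, mirror_normal):
    """Reflect point p across the plane through mirror_point with unit
    normal mirror_normal. A flat mirror maps a camera's real center of
    projection to a virtual one without introducing lens-like distortion."""
    d = sum((pi - mi) * ni for pi, mi, ni in zip(p, mirror_point, mirror_normal))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, mirror_normal))

# A camera 2 units above a horizontal mirror has its virtual center 2 units below:
print(reflect_point((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```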

An earlier experiment at UNC tested a system with curved mirrors and a common-center-of-projection error (variation) of 17 mm or less. Initial results with two cameras and a spherical lens confirmed that our approach is workable.

We created the image by manually merging two camera images. The images were loaded into an image editing package and combined in a "lighten-only" mode with a little creative cropping. The two individual photos were not modified except for color correction. The artifacts of combining the images are seen in the lighter band in the middle of the photograph.


After the initial proof of concept, we must decide many system issues for the next iteration of testing. We expect to move from curved mirrors to faceted mirrors. The next version will be approximately 230 mm long, 450 mm high, and 450 mm wide. Cameras will be mounted 60 degrees apart, and the upper and lower tiers of cameras will differ by 10 degrees in their viewing direction. This layout will permit reconstruction from a common center of projection.
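As a sanity check on the layout, cameras spaced 60 degrees apart tile a full circle only if each contributes at least 60 degrees of horizontal field. The per-camera FOV below is an assumed illustrative value, not a measured property of the design.

```python
import math

def cameras_needed(total_fov_deg, per_camera_fov_deg, overlap_deg=0.0):
    """Minimum number of cameras to tile total_fov_deg horizontally when
    each camera images per_camera_fov_deg and adjacent views must share
    overlap_deg of field for blending."""
    usable = per_camera_fov_deg - overlap_deg
    return math.ceil(total_fov_deg / usable)

print(cameras_needed(360, 60))  # six cameras at 60-degree spacing close the circle
print(cameras_needed(150, 60))  # the 150-degree target needs three per tier
```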

The short-term challenges are in hardware. The bandwidth problems these cameras pose will eventually be eliminated by more powerful computers; at present, however, some tricks for reducing bandwidth are required. The acquisition platform is still being finalized. Because six video streams require enormous bandwidth, we would like to do some initial processing on the images before sending the information onto a system bus. We may reduce our bandwidth requirements by lowering the color resolution from 24 bits to 16 or 8 bits, and we may also use lower frame rates for the cameras. Intelligently juggling these options, and removing oversampling after image combining, should reduce the bandwidth to an acceptable level. We believe we can interleave the video streams using software-controlled multiplexing, which would let us handle all six streams at the expense of longer latency. These tradeoffs should allow us to balance image quality against update speed. Our initial research suggests we have many options for handling six video streams simultaneously, including Silicon Graphics hardware, PCI frame grabbers with on-board warping modules, and multiple frame grabbers.
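The potential savings can be estimated with raw-rate arithmetic. The NTSC-like 640x480 resolution used below is an assumption, since the proposal does not fix a sensor format.

```python
def stream_mbits(width, height, fps, bits_per_pixel):
    """Raw bandwidth of one uncompressed video stream, in megabits/second."""
    return width * height * fps * bits_per_pixel / 1e6

full = 6 * stream_mbits(640, 480, 30, 24)    # six cameras, 24-bit color, 30 fps
trimmed = 6 * stream_mbits(640, 480, 15, 8)  # halved frame rate, 8-bit color
print(f"full: {full:.0f} Mbit/s, trimmed: {trimmed:.0f} Mbit/s")
```

Halving the frame rate and dropping to 8-bit color cuts the raw rate by a factor of six, which illustrates why juggling these options matters before the streams reach the bus.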

Long-term challenges are in algorithm design. After the initial blending and clipping software modules are completed, effort must go into an easy-to-use teleconferencing interface, developed and tried out among the STC groups. This work will extend over a much longer duration.

Research Challenges in Distributed Design Collaboration

The UNC and Utah collaboration on designing the Wide Field of View camera system will also become an important and challenging test case in distributed collaboration for multi-disciplinary design, as it also produces a high-performance camera system design and prototype. Distributed collaboration has been a rich area of research within the STC in its own right, and also serves as a challenging problem domain that requires advances in visualization and graphical presentation techniques, telepresence, and network-based computing. A previous collaboration in this same spirit between UNC and Utah designed and prototyped a functional Video See-Through Head-Mounted Display research instrument.

The research issues to be addressed by Utah are in several areas:

Multiple Layers of Cameras

As the sketches show, placing a circular array of cameras viewing angled mirrors yields a continuous wide-angle vision system, covering a full 360 degrees horizontally if desired. It is more difficult to extend coverage in the vertical direction while maintaining a single point of symmetry: as the field of view grows, the lenses either begin to overlap each other or intrude into the image plane of a neighboring lens. One remedy is to duplicate the system upside down; however, this can destroy the single point of symmetry.

Mullions Between Mirrors

Between each 60-degree primary mirror and the smaller secondary mirror angles, there is a mullion. This edge can approach the sharpness of a knife but will most likely have a radius between 0.5 and 1 mm due to manufacturing and handling realities. In a pinhole-lens analysis of this optical design, the mullion causes a blind band: a small field of view is lost between adjacent imaging mirrors. However, when a full analysis of the system is conducted using parallel ray tracing that includes the aperture, the footprint of the aperture projecting down onto the mirror allows for a mullion up to half the size of the aperture throat. The degradation is a loss in brightness, but the image wraps completely from one imager to the next.
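The aperture argument reduces to a simple width comparison; the 4 mm aperture throat below is an assumed value for illustration, not a design parameter from the proposal.

```python
def seam_is_continuous(mullion_width_mm, aperture_throat_mm):
    """Per the full-aperture (parallel ray tracing) analysis: if the
    mullion is no wider than half the aperture throat, rays from across
    the aperture still reach both adjacent mirrors, so the seam costs
    only brightness, not coverage."""
    return mullion_width_mm <= aperture_throat_mm / 2.0

print(seam_is_continuous(2.0, 4.0))  # a 1 mm-radius (2 mm wide) mullion passes
print(seam_is_continuous(3.0, 4.0))  # a wider mullion would leave a blind band
```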

One problem that does arise from the aperture model is that the ray bundle also spills over at the outer edge of the aperture path (a ray can be thought of as having the thickness of a pencil, matching the aperture size). This spillover images itself on the next mirror, or possibly past it, blurring the image by focusing more than one image at the same place on the imager.

A solution to this problem has recently been found. A short barrier, blocking half the height of the aperture and mounted at the median (half) angle between any two mirrors, including the two end ones, relieves the aperture spillover problem. See Figure 4.

This aperture barrier also has the advantage of reducing the machining cost of maintaining a sharp edge between mirrors. A cut will have to be made so that the short, optically black aperture walls can be slid into place. These blocks would be mounted between all mirrors and should stand slightly taller than one half the maximum aperture opening.

