OCT.1
SCAAT: Incremental Tracking with Incomplete Information

Greg Welch

Abstract: I will present a new Kalman filter-based method for tracking a user's position and orientation in virtual environments. The method uses a sequence of single sensor observations (constraints) as opposed to groups of observations. I refer to this new approach as single-constraint-at-a-time (SCAAT) tracking.

As compared to current multiple-constraint approaches, the SCAAT method improves tracking accuracy, timing, and flexibility. It improves accuracy by properly assimilating sequential device observations, by filtering sensor measurements, and by autocalibrating system parameters concurrently while tracking. It improves estimate rates and latencies by producing a new estimate with each new individual device observation. It offers flexibility by facilitating user motion prediction and multisensor data fusion.

We are applying the SCAAT approach to 3D tracking for virtual environments. However I believe that this work may prove to be of interest to the larger scientific and engineering community in addressing a more general class of tracking and estimation problems.
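The core of the SCAAT idea can be sketched in a few lines: rather than waiting for a complete set of observations, each individual sensor reading drives a full Kalman time update and measurement update on its own. The following illustrative sketch is not from the paper; the two single-axis sensors, the noise values, and the static process model are assumptions chosen for the example. It estimates a 2D position from alternating scalar observations:

```python
import random

def scaat_update(state, P, z, H, R, Q):
    """One SCAAT step: a time update followed by a single scalar
    measurement update (one constraint, not a full observation set)."""
    # Time update: static process model; covariance grows by Q.
    P = [[P[0][0] + Q, P[0][1]],
         [P[1][0], P[1][1] + Q]]
    # Scalar innovation and its variance.
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1]]
    y = z - (H[0] * state[0] + H[1] * state[1])
    S = H[0] * PHt[0] + H[1] * PHt[1] + R
    K = [PHt[0] / S, PHt[1] / S]            # Kalman gain
    state = [state[0] + K[0] * y, state[1] + K[1] * y]
    # Covariance update: P = (I - K H) P.
    M = [[K[0] * H[0], K[0] * H[1]],
         [K[1] * H[0], K[1] * H[1]]]
    P = [[(1 - M[0][0]) * P[0][0] - M[0][1] * P[1][0],
          (1 - M[0][0]) * P[0][1] - M[0][1] * P[1][1]],
         [-M[1][0] * P[0][0] + (1 - M[1][1]) * P[1][0],
          -M[1][0] * P[0][1] + (1 - M[1][1]) * P[1][1]]]
    return state, P

random.seed(1)
true_pos = (3.0, -2.0)
state, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
for i in range(200):
    if i % 2 == 0:                  # sensor A observes x only
        H, z = [1.0, 0.0], true_pos[0] + random.gauss(0.0, 0.1)
    else:                           # sensor B observes y only
        H, z = [0.0, 1.0], true_pos[1] + random.gauss(0.0, 0.1)
    state, P = scaat_update(state, P, z, H, R=0.01, Q=1e-4)
print(state)                        # converges near (3.0, -2.0)
```

Because each measurement is a scalar, the innovation variance S is a scalar and no matrix inversion is needed, which is part of what makes the per-observation update cheap.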

See also http://www.cs.unc.edu/~welch/scaat.html

OCT.3
Psychophysical Methods in Perception Research

Jim Ferwerda

Cornell University

Psychophysics is the quantitative study of the relationship between the physical properties of the world and their perceptual appearances. Physical objects and events have measurable properties like area, weight, and intensity, and when we perceive these objects we say they have a certain size, or are so heavy, or so bright. But while we can directly measure the physical properties, we can only infer their appearances from observers' subjective responses. So psychophysics offers a set of unbiased experimental procedures that allow researchers to indirectly measure the percepts caused by different physical stimuli. The results of psychophysical experiments can be used to establish predictive mathematical models of perception that can be used in graphics, image processing, and computer vision. I will give a broad introduction to the field of psychophysics, starting with classical methods for measuring perceptual thresholds, then describing advances in signal detection theory, and ending with a discussion of psychophysical and psychometric scaling methods.
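As a concrete illustration of the classical threshold methods mentioned above, the following sketch simulates a method-of-constant-stimuli experiment for a two-alternative forced-choice task and estimates a threshold from the observed proportions. The logistic form and the 75%-correct criterion are common conventions; the specific function and numbers are hypothetical, not taken from the talk:

```python
import math, random

def proportion_correct(intensity, threshold=1.0, slope=4.0):
    """Hypothetical logistic psychometric function for a two-alternative
    forced-choice task: 50% (chance) at low intensity, approaching 100%."""
    p = 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))
    return 0.5 + 0.5 * p

# Method of constant stimuli: present fixed intensity levels many times
# and record the proportion of correct responses at each level.
random.seed(0)
levels = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
observed = []
for x in levels:
    correct = sum(random.random() < proportion_correct(x) for _ in range(400))
    observed.append(correct / 400)

def threshold_75(levels, probs):
    """Estimate the 75%-correct threshold by linear interpolation
    between the two levels that bracket it."""
    for x0, p0, x1, p1 in zip(levels, probs, levels[1:], probs[1:]):
        if p0 <= 0.75 <= p1:
            return x0 + (0.75 - p0) * (x1 - x0) / (p1 - p0)
    return None

est = threshold_75(levels, observed)
print(est)   # near the true threshold of 1.0
```

Real experiments would fit the full psychometric function (e.g. by maximum likelihood) rather than interpolate, but the structure is the same: physical stimulus levels in, subjective responses out, and a threshold inferred indirectly.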

OCT.8
A Survey of Solid Freeform Fabrication (SFF) Technologies
Rich Riesenfeld, Sam Drake, and Lee Weiss

ABSTRACT: SFF is a promising new field that has been largely promoted for use in so-called "rapid prototyping." In addition to the now widely familiar "stereolithography" method, various competing technologies are either commercially available, or soon will be. The approaches span a wide variety of physical, chemical, and material processes, but all achieve the goal of building up an artifact in a layer-by-layer manner. This lecture will cover most of the approaches, and make a comparative evaluation. Each method has its particular advantages and drawbacks.

OCT.10
Gestural User Interfaces for 3D Modeling

Bob Zeleznik


This talk will present ongoing research in gestural user interfaces for 3D modeling. Included will be a discussion of the Sketch system for rapidly creating approximate 3D models. Also covered will be up-to-date research in two-handed interfaces, as well as non-realistic rendering.

Also see: the SIGGRAPH '96 proceedings or Bob Zeleznik's Home Page

OCT.22
A Particle-Based Approach to Cloth Modeling
David E. Breen

The first half of this talk will describe the particle-based model for simulating the draping behavior of woven cloth first proposed by Breen et al. It will present a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, data are obtained and used to tune the model's energy functions so that the model reproduces the draping behavior of the original material. Photographs comparing the drape of actual cloth with visualizations of simulation results show that I am able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.

The last half of the talk will summarize two recent efforts to extend the original model from one that can only predict the final draping configuration of a single piece of woven cloth to one that may be used to produce dynamic simulations of cloth. The first extension has been developed by Eberhardt et al. at the University of Tuebingen, Germany. This work attempts to perform a dynamic simulation using a modified version of the Breen model. The second effort is being conducted at Texas A & M University by House and DeVaul. They are focusing on developing fast dynamic constraint techniques for rectilinear grids of particles.
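To make the particle idea concrete, here is a deliberately minimal sketch in the spirit of, but far simpler than, the Breen model: it uses only stretch springs and gravity with plain gradient descent, whereas the real model uses energy functions for stretch, bend, and trellis shear tuned from Kawabata measurements. All constants below are arbitrary illustration values:

```python
import math

N, rest, k, g = 6, 1.0, 50.0, 0.1   # grid size, rest length, stiffness, gravity
# Particle positions: start as a flat horizontal sheet in the y=0 plane.
pos = [[[i * rest, 0.0, j * rest] for j in range(N)] for i in range(N)]
pinned = {(0, 0), (0, N - 1)}       # two fixed corners of one edge

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < N and 0 <= j + dj < N:
            yield i + di, j + dj

for step in range(4000):            # gradient descent on stretch + gravity energy
    force = [[[0.0, 0.0, 0.0] for _ in range(N)] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            p, f = pos[i][j], force[i][j]
            f[1] -= g                                  # gravity pulls along -y
            for a, b in neighbors(i, j):
                q = pos[a][b]
                d = [q[0] - p[0], q[1] - p[1], q[2] - p[2]]
                L = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
                s = k * (L - rest) / L                 # spring toward rest length
                f[0] += s * d[0]; f[1] += s * d[1]; f[2] += s * d[2]
    for i in range(N):
        for j in range(N):
            if (i, j) not in pinned:
                for c in range(3):
                    pos[i][j][c] += 0.002 * force[i][j][c]

# The free corner sags below the pinned edge.
print(pos[N - 1][0][1])
```

The dynamic extensions summarized above replace this kind of static energy minimization with time integration of the particle equations of motion.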


 Special Issues on Cloth Modeling
 
 IEEE Computer Graphics and Applications,
 Vol. 16, No. 5, September 1996.
 
 International Journal of Clothing Science and Technology,
 Vol. 8, No. 3, 1996.
 
 Cloth Modeling Summary Article
 
 H. Ng and R. Grimsdale,
 "Computer Graphics Techniques for Modeling Cloth,"
 IEEE Computer Graphics and Applications,
 Vol. 16, No. 5, pp. 28-41, September 1996.
 
 Particle-Based Model of Cloth Articles
 
 D.E. Breen, "A Particle-Based Model for Simulating the Draping Behavior
 of Woven Cloth," Ph.D. Thesis, Department of Electrical, Computer and
 Systems Engineering, Rensselaer Polytechnic Institute, 1993.
 
 D.E. Breen, D.H. House and M.J. Wozny,
 "Predicting the Drape of Woven Cloth Using Interacting Particles,"
 SIGGRAPH '94 Conference Proceedings
 (Orlando, FL, July 1994) pp. 365-372.
 ftp://ftp.ecrc.de/pub/ECRC_tech_reports/reports/ECRC-94-16.ps.Z
 
 D.E. Breen, D.H. House and M.J. Wozny,
 "A Particle-Based Model for Simulating the Draping Behavior of
 Woven Cloth," Textile Research Journal, Vol. 64, No. 11,
 pp. 663-685, November 1994.
 ftp://ftp.ecrc.de/pub/ECRC_tech_reports/reports/ECRC-94-19.ps.Z
 
 D.H. House, R.W. DeVaul and D.E. Breen,
 "Towards Simulating Cloth Dynamics Using Interacting Particles,"
 International Journal of Clothing Science and Technology,
 Vol. 8, No. 3, pp. 75-94, 1996.
 ftp://ftp.gg.caltech.edu/pub/david/ijcst96.ps.Z
 
 B. Eberhardt, A. Weber and W. Strasser,
 "A Fast Flexible Particle-System Model for Cloth Draping,"
 IEEE Computer Graphics and Applications,
 Vol. 16, No. 5, pp. 52-59, September 1996.
Animations associated with the lecture:
http://www.gris.informatik.uni-tuebingen.de/gris/proj/hc.html

 
 CAD for Broadcloth Composites Articles
 
 M. Aono, D.E. Breen and M.J. Wozny,
 "A Computer-Aided Broadcloth Composite Layout Design System,"
 Geometric Modeling for Product Realization (IFIP Conference on
 Geometric Modeling Proceedings), (North-Holland, Amsterdam,
 September 1992) pp. 223-250.
 
 M. Aono, D.E. Breen and M.J. Wozny,
 "Fitting a Woven Cloth Model to a Curved Surface: Mapping Algorithms,"
 Computer-Aided Design, Vol. 26, No. 4, pp. 278-292, April 1994.
 
 M. Aono,
 "Computer-Aided Geometric Design for Forming Woven Cloth Composites,"
 Ph.D. Thesis, Department of Computer Science,
 Rensselaer Polytechnic Institute, 1994.
 
 M. Aono, P. Denti, D.E. Breen and M. Wozny,
 "Fitting a Woven Cloth Model to a Curved Surface: Dart Insertion,"
 IEEE Computer Graphics and Applications,
 Vol. 16, No. 5, pp. 60-70, September 1996.
 ftp://ftp.gg.caltech.edu/pub/david/darts.ps.Z

Computer Science Department, Stanford University

A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Apple's QuickTime VR is one example. In this talk, I will describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images.

This is joint work with Pat Hanrahan. For those who saw Pat's Siggraph '96 presentation, this talk will be a longer version. In particular, I will focus in more depth on the following issues:

  • Alternative ways of parameterizing light fields
  • Line space interpretations, ray coverage, and sampling uniformity
  • Technical challenges related to the design of camera gantries
  • Implementation and performance of our VQ compression method

Finally, I will summarize our plans to construct two practical "light field cameras", one that employs a single camera and a 4-DOF motion system, and another based on a planar array of CCDs with compression chips interspersed.
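The basic lookup such a system performs can be sketched with a two-plane parameterization: every ray is indexed by its intersections (u,v) and (s,t) with two parallel planes, and a new view is generated by intersecting each desired ray with the planes and reading the nearest stored sample. The synthetic radiance function, plane placement, and nearest-neighbor sampling below are illustrative assumptions, not details of the Stanford system:

```python
# Hypothetical two-plane light field: discrete samples L[u][v][s][t],
# with (u,v) on the plane z=0 and (s,t) on the plane z=1.
U = V = S = T = 8

def radiance(u, v, s, t):
    # Synthetic scene stand-in: "color" depends only on ray direction.
    return (s - u) + 2.0 * (t - v)

L = [[[[radiance(u, v, s, t) for t in range(T)] for s in range(S)]
      for v in range(V)] for u in range(U)]

def sample_ray(ox, oy, oz, dx, dy, dz):
    """Render one ray of a new view by nearest-neighbor lookup:
    intersect the ray with the planes z=0 and z=1, then read the
    closest stored sample. No depth or feature matching is needed."""
    tu = -oz / dz                   # ray parameter at plane z=0
    tf = (1.0 - oz) / dz            # ray parameter at plane z=1
    u, v = ox + tu * dx, oy + tu * dy
    s, t = ox + tf * dx, oy + tf * dy
    ui = min(max(int(round(u)), 0), U - 1)
    vi = min(max(int(round(v)), 0), V - 1)
    si = min(max(int(round(s)), 0), S - 1)
    ti = min(max(int(round(t)), 0), T - 1)
    return L[ui][vi][si][ti]

# A ray from (2, 3, -1) tilted slightly in x:
print(sample_ray(2.0, 3.0, -1.0, 0.6, 0.0, 1.0))
```

A production system would interpolate among the sixteen nearest samples in 4D rather than snap to the nearest one, and would store the array in compressed (e.g. vector-quantized) form.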

OCT.29
Fidelity and Rendering
Don Greenberg

Our goal is to develop physically based lighting models and perceptually based rendering procedures for computer graphics that will allow synthetic images to be generated that are visually and measurably indistinguishable from real-world images. The three principal components of our rendering, light reflection models, light transport and perception, each include methods for verification to improve both fidelity and algorithmic efficiency.

NOV. 7
Current Research in Light Transport Simulation
Peter Shirley

ABSTRACT: I will review efforts such as radiosity, ray tracing, and image based rendering, and then will focus on the current barriers to generating images that are realistic in a scientific sense, and the strategies being used to cross these barriers.

OCT. 29
Theoretical Foundations of Rendering
James Arvo

The rendering equation, introduced by Kajiya in 1986, is an integral equation that provides a theoretical foundation for essentially all rendering algorithms. This talk will focus primarily on the physical and mathematical assumptions underlying the rendering equation, and summarize the most popular solution methods. In particular, we will discuss the role of thermodynamics, radiometric concepts such as radiance, irradiance, and bidirectional reflectance, and explore the use of finite element and Monte Carlo methods. Some historical perspective will also be provided by viewing the equation in relation to previous formulations from radiative heat transfer and illumination engineering, as well as the equation of transfer, which dates back to the turn of the century. Finally, we shall see that the equation can be concisely formulated in terms of linear operators, which allows different solution techniques to be easily formulated and contrasted.
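In modern notation (the symbols here are the standard ones, not necessarily those used in the lecture), the directional form of the rendering equation and its operator formulation are:

```latex
% Directional form: outgoing radiance = emitted + reflected radiance.
L_o(x,\omega_o) \;=\; L_e(x,\omega_o)
  \;+\; \int_{\Omega} f_r(x,\omega_i,\omega_o)\,
        L_i(x,\omega_i)\,(\omega_i \cdot n_x)\; d\omega_i

% Operator form: with T the light transport operator,
L \;=\; L_e + T L
  \qquad\Longrightarrow\qquad
L \;=\; (I - T)^{-1} L_e \;=\; \sum_{k=0}^{\infty} T^k L_e
```

Expanding the Neumann series term by term corresponds to emission, direct illumination, one-bounce indirect illumination, and so on; finite element (radiosity) and Monte Carlo methods can both be viewed as strategies for approximating this sum.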

 
 Here is a list of both recommended and optional reading for the
 STC lecture "Theoretical Foundations of Rendering" by Jim Arvo,
 on Tuesday, November 7.
 
   1) James T. Kajiya, "The Rendering Equation,"
      Proceedings of SIGGRAPH '86, pages 143-150, August 1986.
 
   2) James Arvo, Kenneth Torrance, and Brian Smits,
      "A Framework for the Analysis of Error in Global
      Illumination Algorithms," Proceedings of SIGGRAPH '94,
      pages 75-84, July, 1994.
 
   3) James T. Kajiya, "Radiometry and Photometry for Computer
      Graphics," In Advanced Topics in Ray Tracing, SIGGRAPH '90
      Course Notes, volume 24, Aug, 1990.
 
   4) James Arvo, "Transfer Equations in Global Illumination,"
      In Global Illumination, SIGGRAPH '93 Course Notes, volume 42,
      August 1993. (http://www.cs.caltech.edu/~arvo/papers.html)
 
   5) Andrew Glassner, "Principles of Digital Image Synthesis,"
      Morgan Kaufmann, New York, 1995.
 
 References 1) and 2) are recommended reading for the lecture.
 References 3) and 4) are suggested as optional reading.  Finally, the
 book by Glassner (reference 5) is an excellent reference that covers
 all of the relevant material in great depth.

NOV.14
A Case Study in Multi-Disciplinary Design
Rich Riesenfeld, Russ Fish, Curtis Keller

ABSTRACT: As part of the STC collaborative efforts, UNC and Utah folks participated in a collaborative, remote, rapid design and manufacturing project. The effort was aimed at designing and building the housing for a "See Through Video Head Mounted Display." The principles of the optical design were already developed. Within a three week period, an intense design dialogue and manufacturing program ensued. The design dialogue involved vigorous trade-offs among the representative areas, namely, optics, electrical engineering, mechanical engineering, and manufacturing. The resulting housing required a hierarchical manufacturing process that involved modeling and anticipating design issues that might arise within a three tiered fabrication process. This lecture is intended to review some of the exciting and challenging problems in remote, collaborative design, and show a research environment that was created to support it.

NOV.21
Lifting, Wiring Diagrams, and the Construction of Subdivision Schemes
Peter Schröder

In this lecture I will give an introduction to the construction of curves through subdivision, i.e., through a sequence of successive refinements. The resulting algorithms are simple to implement, efficient, and connect intimately with wavelet transforms, making them very useful as a basic tool for a wealth of applications.

Classically, subdivision methods are built using Fourier tools. However these techniques do not generalize well to the irregular settings often encountered in practice. Instead I will show how subdivision algorithms, and their associated wavelet transforms, can be built using wiring diagrams. These diagrams are useful in the direct algorithmic description of a whole class of transforms and immediately translate into code or even hardware implementations. They are the graphical equivalent of an algebraic factoring technique, known as lifting.

In my lecture I will attempt to give meaning to the above and illustrate these ideas through deriving some neat ways to generate variationally optimal curves.

This is joint work with Leif Kobbelt of the University of Erlangen, Germany.

NOV.26
Simultaneous Local and Global Interactive Scientific Visualization
Chris Johnson

ABSTRACT: Currently, most graphical techniques emphasize either a global or local perspective when visualizing vector or scalar field data, yet ideally one wishes simultaneous access to both perspectives. The global perspective is required for navigation and development of an overall gestalt, while a local perspective is required for detailed information extraction. In this seminar, I will discuss new methods to augment global visual display techniques with local visualization methods. I will discuss these methods in the context of large-scale 3D computational field problems on structured as well as unstructured grids.

DEC.3
DIS & Telecollaboration

Mike Macedonia

No abstract is currently available.

DEC. 5
Information Visualization and the WWW

James Foley, MERL

Information Visualization involves displaying data and data relationships that are primarily non-numeric and non-geometric, in contrast to its cousin Scientific Data Visualization, in which most or all of the data is numeric and already has geometric position.

Information Visualization can be used with the World-Wide Web and similar information spaces in two ways: in navigating through the information space to understand how different pieces of information relate one to another, and in presenting the results of queries returned by search engines.

But, navigating the Web (cruising the infobahn) currently bears almost no resemblance to navigating the real world. The Web is primarily a linguistic world; the real world is visual. When I drive from city to city, I have a visual tool, the road map, to help me find my way, aided by linguistic aids (road signs) along the way. We describe research progress toward automatically creating visual road maps for the WWW.

Similarly, search results are generally presented as a textual list, ordered by relevance. The ordering does little to help the user understand why each document has been retrieved. We describe a visualization technique that may help users understand document relevance.

In closing, we discuss potential enhancements to the WWW infrastructure which would increase its ability to create and present visualizations.

Jan 23
Visualization of Multi-valued Volume Data

David Laidlaw

Multi-valued volume data has become more prevalent as data acquisition and computers have become faster and as limitations of scalar-valued measurements are reached. I'll begin with a video of some recently acquired multi-valued data from biology, medicine, and fluid mechanics, and talk about three specific problems that we've been addressing in that context.

The first problem is understanding developing mouse embryos through MR imaging. The MR images produced are vector-valued, and extracting maximal information from the data is difficult. I'll present a goal-based technique for choosing linear combinations of the scalar elements making up the vector-valued data. The goals that we have implemented look for combinations that are useful for volume rendering; the technique can be easily extended to other goals.

The second problem is tissue classification of vector-valued data. This problem is a refinement of the first, and I will describe our algorithm, which is geared towards making geometric models of biological samples from MRI measurements.

The third problem is understanding the progression of a mouse analogue of multiple sclerosis. I will describe a new MR imaging protocol that we have been developing that measures the diffusion tensor field over a sample. This second-order tensor field shows significant geometric structure, some of which correlates well with histological sections of mice spinal cords, and the technique is much less invasive than sectioning. I'll try to give some geometric intuition about second-order diffusion tensors, and touch on the display of the data. The new imaging protocol also has more general applications in understanding musculature and central nervous system structures.
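For readers unfamiliar with second-order diffusion tensors, a small illustrative computation shows the kind of geometric structure involved: the dominant eigenvector of the symmetric 3x3 tensor gives the direction of fastest diffusion, e.g. along a fiber tract. The tensor values below are made up for the example; real protocols estimate six independent components per voxel:

```python
import math

def principal_direction(D, iters=200):
    """Dominant eigenvector of a symmetric 3x3 diffusion tensor via
    power iteration, plus its eigenvalue (Rayleigh quotient)."""
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(D[i][j] * v[j] for j in range(3)) for i in range(3)]
        n = math.sqrt(sum(c * c for c in w))
        v = [c / n for c in w]
    lam = sum(v[i] * sum(D[i][j] * v[j] for j in range(3)) for i in range(3))
    return v, lam

# An idealized fiber: diffusion is much faster along x than across it.
D = [[3.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 0.5]]
v, lam = principal_direction(D)
mean_diffusivity = (D[0][0] + D[1][1] + D[2][2]) / 3.0
print([round(c, 3) for c in v], lam, mean_diffusivity)
```

Visualization techniques for such fields typically map the eigenvector to a direction (e.g. a glyph axis or color) and the eigenvalue spread to an anisotropy measure.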

Jan 30
Inverse Image Warping

Gary Bishop

I will describe the preliminary work Leonard McMillan and I have been doing on an inverse implementation of McMillan's image warping equation. Previous implementations have been "forward" in that they have transformed pixels from the reference (input) image to the desired (output) image and then attempted to solve the reconstruction problem in the output image. Our inverse approach, like standard texture mapping, transforms the coordinates of output image pixels into the reference image and handles the sampling issues there. The inverse approach may allow us to think more clearly about the sampling and reconstruction issues in image-based rendering. It also makes it simple to eliminate some artifacts of image-based rendering by merging multiple reference images to form a single output image.

I will demonstrate an interactive Java-based inverse warper.
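The forward-versus-inverse distinction is the same one familiar from texture mapping. The sketch below is not McMillan's warping equation; it substitutes a plain projective (homography) mapping, which is valid only for a planar scene or a rotating camera, but it shows the inverse-mapping pattern: every output pixel is assigned by sampling the reference image, so holes cannot occur:

```python
def inverse_warp(ref, H_inv, out_w, out_h):
    """Inverse mapping: for every output pixel, transform its
    coordinates into the reference image with the 3x3 matrix H_inv
    and sample it (nearest neighbor). Unlike forward warping, no
    output pixel is ever left unassigned."""
    rh, rw = len(ref), len(ref[0])
    out = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            sx = H_inv[0][0] * x + H_inv[0][1] * y + H_inv[0][2]
            sy = H_inv[1][0] * x + H_inv[1][1] * y + H_inv[1][2]
            sw = H_inv[2][0] * x + H_inv[2][1] * y + H_inv[2][2]
            u, v = int(round(sx / sw)), int(round(sy / sw))
            if 0 <= u < rw and 0 <= v < rh:
                out[y][x] = ref[v][u]
    return out

# Reference image: a 4x4 gradient. The output is the same image shifted
# right by one pixel, so the inverse transform shifts coordinates left.
ref = [[x + 10 * y for x in range(4)] for y in range(4)]
shift_left = [[1, 0, -1], [0, 1, 0], [0, 0, 1]]
warped = inverse_warp(ref, shift_left, 4, 4)
print(warped)
```

The McMillan formulation replaces the homography with a warp that also depends on per-pixel generalized disparity, which is what makes the sampling analysis in the reference image interesting.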

February 18
The Topology of Implicit Surfaces

John C. Hart

Implicit surfaces offer a powerful geometric representation that simplifies the smooth blending of arbitrary shapes, but they also have historically been difficult to model interactively, render efficiently, and incorporate into production graphics systems. Converting an implicit surface into a polygonal mesh overcomes many of these problems, but polygonization sometimes does not accurately represent the structure of the implicit surface.

Treating an implicit surface as a gradient system allows theorems from catastrophe theory and Morse theory to accurately determine its topology by examining the critical values of the system. Techniques based on interval analysis find the critical points of the system, and dynamic constraints can be used to track these critical points as the implicit surface changes. Techniques for modifying the polygonization to accommodate changes in the implicit surface topology are given. These techniques are robust enough to guarantee the topology of an implicit surface polygonization, and are efficient enough to maintain this guarantee during interactive modeling. The impact of this work is a topologically-guaranteed polygonization technique, and the ability to directly and accurately manipulate polygonized implicit surfaces in real time.

This research is the topic of the Ph.D. dissertation of Bart Stander, a WSU grad student currently employed at Strata.

February 20
Simplifying Polygonal Models

Jon Cohen, Dinesh Manocha

UNC's Walkthrough research group has recently focused on fast ways to render complex environments, such as a submarine or tanker vessel. We shall discuss what role geometric simplification plays in the rendering of such scenes and also mention some other important techniques.

This talk will describe some current methods for simplifying polygonal models. It will focus on the Simplification Envelopes technique, which provides strict error bounds on the deviation of the simplified surface from the original surface. Such bounds allow the automatic selection of switching distances that determine when it is appropriate to view a simplified model.

The talk will also venture into the realm of some more recent work. This work deals with computing successive mappings as the surface is simplified. It uses techniques from computational geometry, such as linear programming, to solve some of the sub-problems that can arise for many simplification techniques. We are hopeful that such techniques may help us provide strict error bounds in other domains (e.g. color, normal, texture) as well.
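One way to turn a Simplification Envelopes error bound into a switching distance is to find the distance at which the bound projects to less than a chosen screen-space tolerance. The formula below assumes a standard symmetric perspective projection; the specific numbers are only an example, not values from the paper:

```python
import math

def switching_distance(epsilon, fov_y_deg, screen_height_px, tol_px=1.0):
    """Distance beyond which a simplification with surface-deviation
    bound epsilon (world units) projects to at most tol_px pixels."""
    # World-space size of one pixel at unit distance from the eye.
    world_per_px = 2.0 * math.tan(math.radians(fov_y_deg) / 2.0) / screen_height_px
    # Projected size of epsilon at distance d is epsilon / (d * world_per_px);
    # setting that equal to tol_px and solving for d gives:
    return epsilon / (tol_px * world_per_px)

# A 0.01-unit error bound, 60-degree vertical FOV, 1024-pixel viewport:
d = switching_distance(0.01, 60.0, 1024)
print(round(d, 2))
```

Beyond distance d the viewer switches to the simplified model; tightening tol_px below one pixel trades performance for a stronger guarantee.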

Suggested reading:

Cohen, Jonathan, Amitabh Varshney, Dinesh Manocha, Greg Turk, Hans Weber, Pankaj Agarwal, Frederick Brooks, William Wright. "Simplification Envelopes". Proceedings of SIGGRAPH 96 (New Orleans, LA, August 4-9, 1996). In Computer Graphics Proceedings, Annual Conference Series, 1996, ACM SIGGRAPH, pp. 119-128.

ftp://ftp.cs.unc.edu/pub/users/cohenj/Simplification/SimpEnv.SIG96.8bit_100dpi_images.ps.gz

Other useful sources of information:

Simplification Survey by Carl Erikson
ftp://ftp.cs.unc.edu/pub/publications/techreports/96-016.ps.Z
Multi-resolution WWW page
http://www.cs.cmu.edu/afs/cs/user/garland/www/multires.html

February 25
Trends in PC Graphics

Turner Whitted

This talk describes the evolution of 3D graphics technology as it migrates to personal computers and describes the changes to the personal computer as it absorbs 3D graphics. The discussion is limited to personal computers which use the Intel CPU family and concentrates on hardware rather than the crazy quilt world of competing software standards. However, I intend to tackle at least one or two controversial questions about both hardware and software.

March 6
Hybrid Tracking

Greg Welch, Gary Bishop, Andrei State

Position and orientation tracking for virtual environments can be accomplished using a variety of media or modalities, e.g., electromagnetic waves, sound, or radio waves. However, when used alone each of these modalities suffers from inherent drawbacks. To address these drawbacks some researchers have sought to develop hybrid systems that leverage the strengths of distinct modalities to maintain more consistent performance throughout a working environment, across the frequency spectrum, and over a wide range of dynamics.

During this talk Welch and Bishop will explore the strengths and weaknesses of several common tracking modalities, present and discuss some past and present hybrid systems, and offer a glimpse into the direction of work under a newly funded joint effort between UNC, USC, and Hughes Research Labs. Andrei State will complete the talk with an overview of the UNC magnetic-optical hybrid system developed by our Ultrasound group.

April 3
Mark Mine

Working in a Virtual World

In this talk I will present ongoing research in immersive virtual-environment (IVE) interaction techniques. Included will be a discussion of several issues of working in a virtual world, examples of effective IVE interaction techniques and an overview of some principles of working in a virtual world. Examples will be given from the Chapel Hill Immersive Modeling Program, CHIMP.
For more information visit Mark Mine's home page:

http://www.cs.unc.edu/~mine

Also, a list of references on 3D interaction and immersive environments can be found at:
http://www.cs.unc.edu/~welch/comp239/pdf/mine_refs.pdf

This is the reference list from the course notes for Course 31, SIGGRAPH 96, "Practical 3D User Interface Design", taught by Daniel C. Robbins (Microsoft Corporation), Kevin Matthews (Artifice, Inc.), Roman Ormandy (Caligari Corporation), Narendra Varma (Microsoft Corporation), and Mark Mine (University of North Carolina).

April 8
Greg Welch, Gary Bishop

The Kalman Filter

The Gaussian concept of estimation by least squares, originally stimulated by astronomical studies, has provided the basis for a number of estimation theories and techniques during the ensuing 170 years; probably none as useful in terms of today's requirements as the Kalman filter.

H. W. Sorenson
University of California, San Diego
IEEE Spectrum, vol. 7, July 1970

In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem [Kalman60]. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation.

The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) solution of the least-squares method. The filter is very powerful in several aspects: it supports estimations of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown.

For this STC lecture our goal is to provide a practical one-hour introduction to the Kalman filter. This includes the following:

  • some intuition about the filter operation,
  • the introduction of the basic discrete Kalman filter equations,
  • a relatively simple (tangible) example with real numbers & results,
  • highlights of some applications, and
  • a Kalman filter handout and pointers to more detailed reading.

A very "friendly" introduction to the general idea of the Kalman filter can be found in Chapter 1 of [Maybeck79], while a more complete introductory discussion can be found in [Sorenson70], which also contains some interesting historical narrative. More extensive references include [Gelb74], [Maybeck79], [Lewis86], [Brown92], and [Jacobs93].
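The "relatively simple (tangible) example" promised above can be sketched as follows: estimating a scalar constant from noisy measurements with the discrete Kalman filter. The constant, noise levels, and initial conditions here are arbitrary choices for illustration:

```python
import random

def kalman_1d(measurements, Q=1e-5, R=0.01):
    """Discrete Kalman filter for estimating a scalar constant
    (process model x_k = x_{k-1}) from noisy measurements z_k."""
    xhat, P = 0.0, 1.0           # initial estimate and error covariance
    estimates = []
    for z in measurements:
        # Time update (predict): constant model; covariance grows by Q.
        P = P + Q
        # Measurement update (correct).
        K = P / (P + R)          # Kalman gain
        xhat = xhat + K * (z - xhat)
        P = (1 - K) * P
        estimates.append(xhat)
    return estimates

random.seed(42)
truth = -0.377
zs = [truth + random.gauss(0.0, 0.1) for _ in range(100)]
est = kalman_1d(zs)
print(est[-1])   # close to the true value -0.377
```

Note the two-phase structure: the predict step propagates the state and inflates the uncertainty, and the correct step blends in each measurement with a gain that reflects the relative trust in model versus sensor.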

References

Kalman60:
Kalman, R. E. 1960. "A New Approach to Linear Filtering and Prediction Problems," Transaction of the ASME-Journal of Basic Engineering, pp. 35-45 (March 1960).
Maybeck79:
Maybeck, Peter S. 1979. Stochastic Models, Estimation, and Control, Volume 1, Academic Press, Inc.
Sorenson70:
Sorenson, H. W. 1970. "Least-Squares estimation: from Gauss to Kalman," IEEE Spectrum, vol.7, pp. 63-68, July 1970.
Gelb74:
Gelb, A. 1974. Applied Optimal Estimation, MIT Press, Cambridge, MA.
Lewis86:
Lewis, Richard. 1986. Optimal Estimation with an Introduction to Stochastic Control Theory, John Wiley & Sons, Inc.
Brown92:
Brown, R. G. and P. Y. C. Hwang. 1992. Introduction to Random Signals and Applied Kalman Filtering, Second Edition, John Wiley & Sons, Inc.

April 10
Elaine Cohen and Rich Riesenfeld

Bases, Representations, and Design Methods in Mathematical Modeling

Abstract to Be Announced

April 15
Andries van Dam and Sascha Becker

Virtual Reality Modeling Language in Context

The purpose of this talk is to introduce the history, technology, and applications of VRML, the Virtual Reality Modeling Language. VRML arose as an Internet standard for 3D graphics in 1995; over the last two years it has spawned a fledgling industry, complete with a proposed ISO standard and a consortium to promote it. VRML 2.0, the current version, is a file format for 3D objects and scenes with behaviors. A VRML scene is described by a hierarchical scene graph, and a simple event model drives behaviors in the scene. Simple interactions are supported directly in VRML, and more complex interactions or behaviors may be built with Java or JavaScript. We will compare VRML to several modern graphics APIs, including OpenGL and Java3D. We will then survey some of the tools available for browsing and authoring VRML worlds or developing VRML applications. Finally we will present resources for more information on VRML.

April 17
Density Estimation Techniques for Global Illumination

Bruce Walter

This talk will describe the density estimation framework developed at Cornell to produce view-independent global illumination solutions that are suitable for interactive walkthroughs. The method consists of three pieces: a particle tracing phase probabilistically simulates the flow of light, a density estimation phase reconstructs the lighting function from the particle data, and a decimation phase compacts the results for faster display and reduced storage.

The first part of the talk will describe some of the advantages of the framework including: scalability, natural parallelism, and robustness, and some of its current drawbacks. The second part will concentrate on current and future research on the density estimation phase. Density estimation is a standard statistical problem and some results from the statistics literature will be discussed along with ideas for extending it to handle wider classes of models and for extracting more information from the particle data.
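The density estimation phase is, at its core, standard kernel density estimation. The one-dimensional sketch below uses synthetic "photon hits" drawn from a Gaussian; the Cornell system works on surfaces and uses more careful kernel and boundary handling, so this is only a structural illustration:

```python
import math, random

def kde(samples, x, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

# Stand-in for particle-tracing output: hit points along one surface
# axis, clustered around a bright spot at 0.3.
random.seed(7)
hits = [min(max(random.gauss(0.3, 0.05), 0.0), 1.0) for _ in range(5000)]

# The reconstructed "illumination" is high at the cluster center and
# near zero far from it.
print(kde(hits, 0.3, 0.02), kde(hits, 0.9, 0.02))
```

The bandwidth h controls the usual bias-variance trade-off: too small and the reconstruction is noisy, too large and sharp lighting features (e.g. shadow boundaries) are blurred, which is precisely why the phase is a subject of ongoing research.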

Publications:

Peter Shirley, Bretton Wade, David Zareski, Philip Hubbard, Bruce Walter and Donald Greenberg,
"Global illumination via density estimation", In Proceedings of the Sixth Eurographics Workshop on Rendering, June 1995
Bruce Walter, Philip Hubbard, Peter Shirley, and Donald Greenberg,
"Global illumination using local linear density estimation", In ACM Transactions on Graphics, to appear, Summer 1997