Many modeling operators in today's literature have been presented on simplified surfaces, for the most part surfaces that are initially flat and uniformly parametrized. Applying such an operator later in the design process, say after a sequence of modeling operations, may produce quite unintuitive and unintended effects. We are investigating methods, including reparametrization and nonlinear optimization linked with surface analysis and modeling intent, to overcome these difficulties.
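As a concrete illustration of the parametrization issue, a chord-length parametrization assigns parameter values proportional to the distances between successive samples, so an operator sees comparable spacing even on unevenly sampled data. The sketch below is an illustrative helper in Python, not the project's actual reparametrization method:

```python
import math

def chord_length_params(points):
    """Chord-length parameter values for a 2D point sequence: a common
    alternative to uniform parametrization when the samples are unevenly
    spaced. Returns parameters normalized to [0, 1]."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1]
    return [t / total for t in d]
```

A uniform parametrization would assign the same points the values 0, 1/2, 1 regardless of their spacing; the chord-length version reflects the actual geometry.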
We expect to drive further research by modeling complex objects such as the HMD, and to continue research into design and model analysis for manufacturing within the context of this project. Depending on the particular modeling projects (which would also require manufacturing), we will select from among model analyses for new processes and for multistage processes. An example might be process planning for HMD molds, which involve tight tolerances and multiple stages.
Theoretical Analysis of C^k-Smoothness of Subdivision on Arbitrary Meshes
Volume Modeling and 3D Morphing
We plan to combine the mathematics of the two systems in a structured model that preserves the generality of the dynamic constraint method while allowing the improved efficiency of generalized coordinates when applicable. Other methods can be incorporated in the future if they display a comparative advantage. The automation of the model allows specification of constraints in terms of the desired behavior, independent of the underlying mechanism. The complexity of switching between multiple models can then be hidden from the user, which will yield a conceptually simpler interface.
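The switching idea can be sketched as a thin dispatcher (all names here are hypothetical, not the system's actual interface): the user states constraints in terms of desired behavior, and the system selects the faster generalized-coordinate solver when every constraint admits one, falling back to the general dynamic-constraint method otherwise.

```python
class HybridSolver:
    """Illustrative sketch: hide the choice between a fast
    generalized-coordinate solver and the fully general
    dynamic-constraint solver behind one interface."""

    def __init__(self, reduced_solver, constraint_solver):
        self.reduced = reduced_solver      # generalized coordinates
        self.general = constraint_solver   # dynamic constraints

    def step(self, constraints, state, dt):
        # Use generalized coordinates only when every constraint
        # admits a closed-form parametrization; otherwise fall back.
        if all(c.has_closed_form_parametrization for c in constraints):
            return self.reduced(constraints, state, dt)
        return self.general(constraints, state, dt)
```

The caller never names a mechanism, which is exactly the conceptual simplification described above.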
State Machine for Piecewise Modeling
One major goal of this long-term research is ultimately to reduce the computational expense of global illumination algorithms. An inherent cause of the slowness of these algorithms is that too much time is spent computing scene features that are measurably unimportant and perceptually below the visible threshold of the average human observer. Algorithms can be substantially accelerated or computed progressively if we can develop perceptually based error metrics that correctly predict the visibility of scene features. The establishment of these techniques will not only allow proper tone mappings, but provide the feedback loop for modifying the physical computations.
We believe that by separating the physically based computations from the perceptually based image creation, and by experimentally comparing results at each phase of the process, we can ultimately produce images that are visually and measurably indistinguishable from real-world images.
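A perceptually based error metric of the kind described above can be as simple as a Weber-law visibility test: skip a refinement pass when the luminance change it would produce falls below the observer's contrast threshold. This is a minimal sketch; the 2% Weber fraction is an illustrative constant, not a value measured by this project.

```python
def weber_threshold(adapt_lum, weber_fraction=0.02):
    """Smallest luminance difference visible against a given adaptation
    luminance, per Weber's law (the 2% fraction is illustrative)."""
    return weber_fraction * adapt_lum

def refinement_is_visible(prev_lum, next_lum, adapt_lum):
    """Would another global-illumination pass change this pixel by more
    than the observer can see? If not, the pass can be skipped."""
    return abs(next_lum - prev_lum) > weber_threshold(adapt_lum)
```

For example, a pixel whose radiance changes by 0.5 cd/m^2 against a 100 cd/m^2 surround falls below the ~2 cd/m^2 threshold, so refinement can stop there.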
In the area of traditional depth-extraction techniques, we will use uncertainty measures when displaying depth or geometry information. In general, we believe that confidence measures can be used to produce depth data with smooth transitions, as opposed to data with abrupt jumps arising from discrete differences between neighboring samples. For example, depth data could be rendered with varying degrees of transparency corresponding to the confidence of each sample. Such experiments may offer insight into the use of sparse depth data as an aid to modern image-based rendering approaches. We anticipate that the PixelFlow image-generation system will be online by the summer of 1997. PixelFlow should provide a uniquely powerful platform for experimenting with various reconstruction-related rendering techniques.
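The confidence-to-transparency mapping could be as direct as the sketch below (illustrative only; the `min_alpha` floor, which keeps even very uncertain samples faintly visible, is an assumed design choice rather than part of the proposal):

```python
def alpha_from_confidence(conf, min_alpha=0.1):
    """Map a depth-confidence value in [0, 1] to an opacity, so that
    low-confidence depth samples render nearly transparent."""
    conf = max(0.0, min(1.0, conf))          # clamp out-of-range input
    return min_alpha + (1.0 - min_alpha) * conf

# (depth, confidence) pairs become (depth, opacity) pairs for rendering
samples = [(2.1, 0.95), (2.3, 0.40), (2.8, 0.05)]
renderable = [(d, alpha_from_confidence(c)) for d, c in samples]
```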
2.3.1 Extending the Sketch System
Our research in desktop interaction will focus primarily on extending the Sketch system. Alias/Wavefront and Autodesk, as well as a number of other makers of 3D modeling software, have expressed strong interest in incorporating notions of the Sketch system into their products. In particular, Alias and Autodesk are starting collaborations with the Center and are providing hardware, users, and 3D modeling frameworks to more easily incorporate our techniques in their future products. These relationships will help ground our research in practical problems and provide us with a base of industrial users for usability testing.
2.3.2 Haptic Feedback
We want to explore metaphors for haptic user interfaces; in particular, we are not interested in literal simulation of physical environments, as in most haptic demos, or in merely mirroring the physical world. Rather, we are interested primarily in how to present and manipulate features that do not have a unique, intuitive, natural mapping into a haptic form. Simple examples include guiding the user's motion, as in the physical snap-to-grid work done recently in collaboration between Brown and UNC, and gravity relief to alleviate the strain of keeping one's hand in the air for a long time. We believe that the guidance idea in particular can be extended into a very general and useful tool.
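The snap-to-grid guidance mentioned above amounts to rendering a restoring force toward the nearest grid point; the sketch below shows the idea as a simple spring law (constants and names are illustrative, not taken from the Brown/UNC system):

```python
def snap_force(pos, spacing=1.0, stiffness=50.0):
    """Spring force pulling a haptic stylus at position `pos` (a list
    of coordinates) toward the nearest point of a regular grid."""
    target = [round(p / spacing) * spacing for p in pos]
    return [stiffness * (t - p) for t, p in zip(target, pos)]
```

A gravity-relief force would be analogous: a constant upward term added to whatever forces the interface is already displaying.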
2.3.3 Interaction for Direct Manipulation of Formal Systems (The Smartboard Project) (Brown-Caltech)
Future efforts will include interactive methods for constructing proofs and performing experiments in other branches of mathematics, such as analysis, topology, differential geometry, and combinatorics, and in areas of computer science such as automata theory. This will entail the combined use of hand gestures and speech, as well as more traditional input mechanisms such as pointing devices.
2.4.1 Image Display Technologies
We will continue to develop the best possible head-mounted and fixed display systems. We see HMD work progressing as a collaboration between HMD and optics researchers at UNC and design and modeling researchers at Utah. We also anticipate making use of modern optical techniques, possibly via contract (as in the past) with an optical engineering firm. We are also planning to work on high-resolution, wide-field-of-view, immersive fixed displays that make use of minimal infrastructure. One of our goals is to make these fixed displays as convenient and high-resolution as looking through regular eyeglasses.
2.4.2 Time-Critical Frameworks
We will adapt a degradable terrain-rendering algorithm and a varying-level-of-detail generation algorithm to fit into the framework. While it is possible to use conventional performance prediction (e.g., based on feedback loops), algorithm-specific predictors are more accurate and at the same time straightforward to devise. Similarly, these algorithms benefit from schedulers that take advantage of algorithm-specific features. Another approach we will explore is authoring the entire virtual environment as a single object with a procedurally generated, multiresolution representation. The environment will contain author-supplied information on the application-defined importance of its components. When a user interacts with a scene, it will be simulated and rendered as a function of the user's viewpoint, using lower resolution for more distant or less important components. Using a procedural representation of the scene components will let the environment contain an arbitrary amount of detail that is generated only when called for by the scheduler.
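A time-critical scheduler of the kind described above can be sketched as a greedy allocator (illustrative only; the benefit measure `importance / distance` and the per-level cost predictions are assumptions for the example, standing in for the algorithm-specific predictors discussed in the text):

```python
def schedule_lods(objects, budget_ms):
    """Greedy time-critical scheduler: start every object at its
    coarsest level, then spend the remaining frame budget raising
    detail where the benefit per millisecond is highest.

    Each object is (name, importance, distance, costs_ms), where
    costs_ms[i] is the predicted render cost at LOD level i."""
    levels = {name: 0 for name, _, _, _ in objects}
    spent = sum(costs[0] for _, _, _, costs in objects)
    while True:
        best = None
        for name, imp, dist, costs in objects:
            lvl = levels[name]
            if lvl + 1 < len(costs):
                extra = costs[lvl + 1] - costs[lvl]
                if spent + extra <= budget_ms:
                    # Nearer, more important objects win detail first.
                    benefit = imp / dist / extra
                    if best is None or benefit > best[0]:
                        best = (benefit, name, extra)
        if best is None:
            return levels
        _, name, extra = best
        levels[name] += 1
        spent += extra
```

With a 5 ms budget, a nearby high-importance avatar is refined before a distant terrain tile, exactly the viewpoint- and importance-dependent degradation described above.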
2.4.3 Hardware Architectures
First Light with Analog VLSI
We will test our first fully analog VLSI architecture for making models and displaying images.
2.5 Scientific Visualization
We are developing new scientific visualization techniques for investigating vector-valued and tensor-valued data. This includes acquisition and extraction of diffusion-tensor-valued MR images, vector-valued flow MR images, and scalar- and vector-valued laser images of turbulent flow. We are also continuing our work on new types of partial-volume tissue classification (see Figure 4).
2.5.1 Wavelet Methods for ECG/EEG Visualization and Computational Modeling (Caltech-Utah)
We will develop the wavelet methods needed to solve inverse EEG and ECG problems. For the EEG, these methods will be used to solve Poisson's equation of electrical conduction for the primary current sources in the cerebrum (specifically in the temporal lobe), and Laplace's equation for the voltage field on the surface of the heart (epicardium). These methods include variational subdivision schemes, spherical wavelet processing of space physics data, construction of multiresolution meshes directly from volume densities, and construction of subdivision surface wavelets.
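In standard form, the forward problems underlying these inverse problems can be written as follows (a sketch in conventional notation; here Φ is the electric potential, σ the conductivity, I_v the volume source-current density, and n the outward surface normal):

```latex
% EEG forward problem: Poisson's equation in the head volume \Omega
\nabla \cdot (\sigma \nabla \Phi) = I_v \quad \text{in } \Omega
% ECG forward problem: Laplace's equation in the source-free torso
% region between the epicardium and the body surface
\nabla^2 \Phi = 0 \quad \text{in } \Omega_{\mathrm{torso}}
% insulating (no-flux) boundary condition on the outer surface
\sigma \, \nabla \Phi \cdot \mathbf{n} = 0 \quad \text{on } \partial\Omega
```

The inverse problems then recover the cerebral sources (EEG) or the epicardial potentials (ECG) from surface measurements, which is where the multiresolution wavelet machinery enters.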
We will continue to pursue our goal of giving distance collaborators a compelling sense of common presence in a virtual space, in the limit "making it as real as being there." We are also working to leverage aspects of virtual environments that go beyond real-world simulation, providing added value unique to the virtual environment. For example, we may violate the laws of gravity and let annotations hang in space near a model, or we may share another user's viewpoint, in effect seeing through another user's eyes. Such techniques not only make telecollaboration a more powerful tool but can improve local computer-supported collaborative work as well.