Fundamental problems in electrocardiography (ECG) and electroencephalography (EEG) can be characterized as an inverse problem [1, 2, 3]: given a subset of electrostatic voltages measured on the surface of the torso (scalp) and the geometry and conductivity properties within the body (head), calculate the electric current vectors and voltage fields within the heart (cerebrum). Mathematically, the generalized ECG and EEG problems can be stated as solving Poisson's equation of electrical conduction for the primary current sources, or Laplace's equation for the voltage field on the surface of the heart (cerebrum). The resulting problem is mathematically ill-posed: the solution does not depend continuously on the data, so that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, cardiologists and neurologists would gain non-invasive access to patient-specific heart and cortical activity.
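Concretely, the quasi-static forward model underlying both problems can be written (our notation here, not taken from the cited references) as the generalized Poisson equation for the potential:

```latex
\nabla \cdot (\sigma \nabla \Phi) = -I_V \quad \text{in } \Omega, \qquad
\sigma \nabla \Phi \cdot \mathbf{n} = 0 \quad \text{on } \Gamma_T ,
```

where Φ is the electric potential, σ the conductivity tensor, I_V the volume current source density, Ω the torso (head) volume, and Γ_T its outer surface. The inverse problem then asks for I_V, or for Φ on the epicardial (cortical) surface, given Φ measured on part of Γ_T.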
We proposed to examine the feasibility of applying advanced multiresolution geometric modeling, numerical simulation, and visualization approaches to advance the state of the art of algorithms in this area. To this end we have had three face-to-face meetings in Utah since the second half of 1996, devoted to building an understanding of each other's capabilities and the tools already in place. We have been able to bring Dr. Michael Holst from the Caltech Applied Math Department into this project. Steps have been taken to exchange software infrastructure, bringing the SCIRun computational steering environment from Utah to Caltech and integrating Dr. Holst's multigrid solver technology into the SCIRun environment.
In the following we detail the activities at the two sites relevant to our common research agenda.
As part of our overall effort to build a multiresolution mesh technology base we have constructed libmesh, which manages the discrete and continuous aspects of multiresolution meshes. It supports the entire class of 1-ring subdivision schemes and a fast coarsification strategy. In concert with this effort we are currently building a multiresolution mesh constructor, which uses volumetric data as input and directly generates multiresolution meshes. This avoids the usual remeshing step required when techniques such as marching cubes are used to generate boundary representations of the relevant regions (e.g., lung tissue, the heart, etc.). The resulting meshes have a topological structure which makes them immediately useful to multigrid numerical solvers, in addition to the other benefits of level-of-detail (LOD) rendering for fast visualization and compression opportunities for transfer between remote collaboratories.
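To illustrate what a 1-ring scheme looks like in practice, the following is a minimal sketch in Python (our own toy code and names, not the libmesh API) of the Loop vertex-smoothing mask, which repositions each existing vertex using only its 1-ring of neighbors; the edge-insertion rule and boundary handling are omitted.

```python
import numpy as np

def one_rings(faces, n_verts):
    """Collect each vertex's 1-ring neighbor set from a triangle list."""
    rings = [set() for _ in range(n_verts)]
    for a, b, c in faces:
        rings[a].update((b, c))
        rings[b].update((a, c))
        rings[c].update((a, b))
    return rings

def loop_vertex_mask(verts, faces):
    """Apply the Loop subdivision rule for existing (even) interior vertices:
    each vertex is replaced by a weighted average over itself and its 1-ring."""
    verts = np.asarray(verts, dtype=float)
    rings = one_rings(faces, len(verts))
    new = np.empty_like(verts)
    for i, ring in enumerate(rings):
        n = len(ring)
        beta = (5.0 / 8.0 - (3.0 / 8.0 + 0.25 * np.cos(2.0 * np.pi / n)) ** 2) / n
        new[i] = (1.0 - n * beta) * verts[i] + beta * sum(verts[j] for j in ring)
    return new
```

Because every rule touches only a vertex and its 1-ring, the same traversal machinery can serve refinement, coarsification, and the local transfer operators a hierarchical solver needs, which is one reason such hierarchies fit well with multigrid.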
We are currently investigating possible approaches to carry the subdivision approach over into the volumetric domain of hierarchical tetrahedral meshes, but do not yet have any results in this direction.
The above work has been, and is being, performed by Caltech undergraduate student Emil Praun, graduate student Denis Zorin, and postdoc Michael Holst. Libmesh is described in, and forms the basis for, a paper submitted to Siggraph 97 ("Interactive Multiresolution Mesh Editing"). I anticipate that the new multiresolution mesh construction algorithm based on volumetric input data will lead to a publishable paper this summer.
Over the coming year I anticipate the following activities relevant to our project. This summer we are building the first Caltech Responsive Workbench, which will provide an excellent visualization environment. I am in the process of hiring four undergraduates to help bring up the workbench itself and our multiresolution mesh infrastructure. If my graduate student recruiting efforts are successful, I expect to have a graduate student working on our project starting this fall.
Recently, adaptive methods for both solving PDEs and performing large-scale scientific visualization have become an important part of scientific computing research. Adaptive methods seek to reduce the error due to geometric discretization in a numerical PDE solution by allowing one to control the level of discretization near regions of geometric complexity and/or regions of rapid change in the physical parameters. Adaptive finite difference, finite element, boundary element, and multigrid methods now exist in the literature. However, finding implementable, efficient, and provably accurate adaptive methods is still an open problem. While theoretical work has shown that multiresolution methods lead to asymptotically better algorithms for a wide class of operators, no practical techniques are currently known that take advantage of these results in the "messy" settings of real-world applications. We are currently incorporating initial results from Caltech's efforts in multiresolution into our computational steering environment [4, 6, 5]. Furthermore, we are extending these techniques to apply to large-scale, three-dimensional problems on unstructured grids.
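To make the multigrid ingredient concrete, here is a minimal sketch (our own toy code, not Dr. Holst's solver or anything in SCIRun) of a multigrid V-cycle for the 1D model problem -u'' = f with homogeneous Dirichlet boundary conditions; the real codes operate on 3D unstructured grids, but the smooth/restrict/correct/prolong structure is the same.

```python
import numpy as np

def smooth(u, f, h, iters=3, omega=2.0 / 3.0):
    """Weighted-Jacobi relaxation for -u'' = f on a uniform grid with spacing h."""
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    """Residual r = f - A u of the standard three-point discretization of -u''."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction onto the coarse grid (every other point)."""
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(ec, n_fine):
    """Linear-interpolation prolongation of a coarse correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    """One V-cycle: pre-smooth, recurse on the coarse-grid residual equation,
    correct, post-smooth. Grids are assumed to have 2^k + 1 points, the two
    outermost being Dirichlet boundary points."""
    u = smooth(u, f, h)
    if len(u) <= 3:
        return u
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    return smooth(u + prolong(ec, len(u)), f, h)

# Usage: a few V-cycles drive the residual of -u'' = 1 on [0, 1] toward zero.
n, h = 129, 1.0 / 128
u, f = np.zeros(n), np.ones(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.abs(residual(u, f, h)).max())   # small after a handful of cycles
```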
As mentioned above, the ECG and EEG problems are ill-posed. Using equivalent integral equation formulations (via Green's functions) for the associated PDEs yields operator equations which are generally better conditioned; the fact that they are dense rather than sparse, however, has prevented their widespread use. Using multiresolution methods, these systems can be made sparse again, making the associated iterative solver methods competitive. Many questions remain open. In particular, how does one implement the necessary regularization techniques within a multiresolution framework so as to produce a better-conditioned system? We will investigate the conditioning of these systems and the accuracy of the solutions which can be achieved in practice with these methods.
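As a point of reference for the regularization question, the following toy sketch (our own illustration; the matrix below is a generic smoothing kernel, not an actual torso or scalp transfer matrix) shows standard Tikhonov regularization, which replaces the unstable direct inversion by a penalized least-squares solve:

```python
import numpy as np

# Toy ill-conditioned forward operator: the data b are a smoothed version of the
# source x, standing in for a dense (e.g., boundary-integral) transfer matrix.
n = 200
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2.0 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.exp(-((s - 0.4) ** 2) / 0.01) - 0.5 * np.exp(-((s - 0.7) ** 2) / 0.005)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)  # noisy data

# Naive inversion: noise is amplified through the tiny singular values of A.
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]

# Tikhonov regularization: minimize ||A x - b||^2 + lam^2 ||x||^2 by solving the
# damped normal equations; small singular values no longer blow up the noise.
lam = 1e-2
x_tik = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

print(np.linalg.norm(x_naive - x_true))   # huge
print(np.linalg.norm(x_tik - x_true))     # modest
```

The open question noted above is how to express such regularization directly in the multiresolution basis in which the dense operator has been sparsified.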
Currently, many scientific visualization methods rely on the interpolation properties of the computational grid to compute the resulting visualization, such as vector streamlines or isosurfaces. Depending on the underlying computational mesh, the necessary mathematical support to accurately compute the corresponding visual quantity may not be present. At the other end of the spectrum, for very large data sets it is often difficult to find "interesting" events or features. Multiresolution methods have the ability to zoom in on detailed properties and thus provide an ideal analysis tool in the visualization toolbox. We will investigate the use of multiresolution methods to perform visualization computations accurately, compare them to existing methods, and use their analysis features to guide the user to interesting features.
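To illustrate the kind of feature guidance we have in mind (a toy sketch only, not part of SCIRun), a wavelet detail decomposition flags regions of a sampled field where fine-scale variation is large relative to the rest of the data:

```python
import numpy as np

def haar_details(signal, levels=4):
    """Plain Haar analysis (signal length assumed divisible by 2**levels): return
    per-level detail coefficients and the final coarse approximation."""
    details, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # high-pass (detail) channel
        approx = (even + odd) / np.sqrt(2.0)          # low-pass (coarser) channel
    return details, approx

# A smooth field with one sharp front; the finest-level details single it out.
t = np.linspace(0.0, 1.0, 1024)
field = np.tanh(50.0 * (t - 0.3)) + 0.1 * np.sin(8.0 * np.pi * t)
details, _ = haar_details(field)
d0 = details[0]
hot = np.where(np.abs(d0) > 3.0 * d0.std())[0] * 2    # flagged fine-grid indices
print(t[hot].min(), t[hot].max())                      # clustered near t = 0.3
```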
At Utah over the coming year, we would like to have Peter-Pike Sloan continue working on the incorporation of Caltech's multiresolution codes within the SCIRun steering environment. He will also continue his investigations of using multiresolution methods for interactive visualization of large-scale, three-dimensional problems on unstructured grids. I will continue to investigate the coupling of multiresolution methods to regularization schemes for ill-posed inverse problems.