(1) Creating a virtual environment. This isn't exactly data visualization, but we're doing experiments on human navigation ("cognitive maps") in VR, using the 40 x 40 ft. VENLab with an HMD, tracker, and SGI graphics. It may be possible for a group to create a virtual environment for such an experiment. For example, we're beginning to build a "Hundred acre wood" environment, and pieces of that may be doable. We use WorldToolKit as a platform, so there would be a learning curve involved.
(2) Prism views. A smaller project is to simulate what the world looks like through a distorting prism. This would involve learning a bit of optics (e.g. the transfer function for a wedge prism) and computing the view through it for a moving observer. My interest is in calculating and displaying the optic flow field (a vector field) for different environmental structures with different types of distortions.
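As a rough starting point, the translational optic flow under a pinhole model, plus a first-order (thin-wedge) prism, can be sketched in a few lines. Everything here (the pinhole projection, the constant-deflection wedge approximation delta = (n - 1) * alpha, and the specific parameter values) is an illustrative assumption, not part of the project spec:

```python
import numpy as np

def optic_flow(points, T, f=1.0):
    """First-order optic flow for a pure translation T = (Tx, Ty, Tz)
    of a pinhole camera viewing 3D points (N, 3) in camera coordinates."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    x, y = f * X / Z, f * Y / Z                # image coordinates
    u = (x * T[2] - f * T[0]) / Z              # standard translational flow
    v = (y * T[2] - f * T[1]) / Z
    return np.stack([x, y], 1), np.stack([u, v], 1)

def wedge_deflection(img_xy, wedge_angle_deg=5.0, n=1.5, f=1.0):
    """Thin-wedge approximation: a constant ray deviation
    delta = (n - 1) * alpha, applied here as a horizontal image shift."""
    delta = np.radians((n - 1.0) * wedge_angle_deg)
    shifted = img_xy.copy()
    shifted[:, 0] += f * np.tan(delta)
    return shifted

# A ground plane of random points ahead of a forward-moving observer.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-2, 2, 200),
                       np.full(200, -1.0),
                       rng.uniform(1, 10, 200)])
xy, flow = optic_flow(pts, T=np.array([0.0, 0.0, 1.0]))
# For pure forward translation the flow expands radially from the image center:
assert np.all(np.sign(flow[:, 0]) == np.sign(xy[:, 0]))
```

Comparing the flow field before and after `wedge_deflection` (or a more faithful ray-traced prism model) is exactly the kind of display the project asks for.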
Evaluate, via a user study, how various kinds of textures can be used to encode multi-valued data.
Something motivated by one of the research proposals we have read (or one that we haven't read):
Develop anatomical visualization for brain tumor pre-surgical planning.
Evaluate whether visualizing neural diffusion rate information is useful.
Build physical prototypes for true 3D visualization.
Represent uncertainty visually.
I'm looking for ways to show viewers what's happening at a joint (say the elbow) without plotting elbow angle vs time in a regular graph. I'd prefer to show a viewer the pigeon skeleton in a 3/4 perspective view, but have more specific data available as the wings are flapping.
My preliminary ideas involved coloring the bones around the joint according to angle (some sort of spectrum) or to velocity (perhaps like a Doppler shift?). I want to avoid clutter, so adding extra icons, vectors, etc. is less welcome than something that will fit into the existing geometry of the model.
In the end I envision a skeletal model with real movement, within which a viewer can access more detailed data if they wish. Patterns of motion among joints might be more visible, but not so glaring as to be distracting from the wingbeat itself. I'm also hoping that each frame, if viewed as a still, could portray the motion in progress, not just a static position.
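One minimal way to encode joint state directly in the bone geometry is a color map. The particular ramps below (blue-to-red for angle, a Doppler-like tint for angular velocity, and the 0.2 green channel) are placeholder choices, not a claim about what will read best on the skeleton:

```python
import numpy as np

def angle_to_rgb(angle, lo=0.0, hi=np.pi):
    """Map a joint angle to a blue->red ramp: flexed = blue, extended = red.
    (Endpoints lo/hi are arbitrary; tune to the pigeon's actual joint range.)"""
    t = np.clip((angle - lo) / (hi - lo), 0.0, 1.0)
    return np.array([t, 0.2, 1.0 - t])

def velocity_to_rgb(omega, scale=5.0):
    """Doppler-like shift: opening (omega > 0) tints red, closing tints blue."""
    t = np.tanh(omega / scale)                 # squash to [-1, 1]
    return np.array([0.5 + 0.5 * t, 0.2, 0.5 - 0.5 * t])

flexed, extended = angle_to_rgb(0.0), angle_to_rgb(np.pi)
```

Because the velocity tint is nonzero in every frame during motion, each still would carry some of the motion-in-progress information asked for above.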
Could an application for exposure therapy for patients with specific kinds of phobias (flying, heights, contamination, crowds) be made more interactive (I've never seen a Cave, but that seems to be the idea), so that the intensity of exposure and the virtual environments presented could be varied to become progressively more challenging? Some such VR systems already exist, but I don't think they're very sophisticated. A really good one could be marketed.
1) I have a PHIGS program that displays earthquake simulation results along a fault in 3D. Converting this to a WorldToolKit (WTK) program and comparing its effectiveness in various 3D environments would be useful.
2) The current display of information along a fault is limited to a single scalar value at each point of the fault surface. Extend that display to include more variables simultaneously.
3) Display time-variation of data on the fault surface, perhaps using time as a third dimension to create a volume.
Run different numerical simulations on a fault and compare the results visually, to understand the limitations of the various methods and the significance of the differences.
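Ideas 3) and the comparison above can be sketched with synthetic data: stacking the per-timestep fault slices gives a volume with time as the third dimension, and a difference volume localizes where and when two methods disagree. The rupture-front fields here are invented for illustration:

```python
import numpy as np

# Hypothetical per-timestep slip fields on a fault discretized as a 32x64 grid.
T, H, W = 20, 32, 64
t = np.arange(T).reshape(T, 1, 1)
yy, xx = np.mgrid[0:H, 0:W]

# Two "simulation methods" producing slightly different rupture fronts.
vol_a = (xx + 0.1 * yy < 3.0 * t).astype(float)   # shape (T, H, W)
vol_b = (xx + 0.1 * yy < 3.2 * t).astype(float)

# Stacking time steps gives a volume: time is the third dimension,
# and a difference volume shows where/when the methods disagree.
diff = np.abs(vol_a - vol_b)
disagreement_per_step = diff.mean(axis=(1, 2))
```

Rendering `diff` as a volume (or isosurfacing the front in `vol_a`) is then a standard volume-visualization task.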
The motivating problem for me is to "look" at 4x4 patches of images and classify them. A 4x4 patch is given by 16 pixels, so it defines a point in R^16. We can easily generate gigabytes of such patches, hence points in R^16. But a gigabyte is still small compared to the elbow room in R^16. We have, of course, some ideas about what the cloud of these samples looks like, but nothing really good yet.
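A first look at such a patch cloud might be a linear projection. A minimal PCA sketch, using a synthetic random image as a stand-in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))             # stand-in for a natural image

# Slide a 4x4 window to collect patches as points in R^16.
patches = np.array([image[i:i + 4, j:j + 4].ravel()
                    for i in range(0, 60, 2) for j in range(0, 60, 2)])
X = patches - patches.mean(axis=0)            # center the cloud

# PCA via SVD: project the R^16 cloud onto its top two principal axes.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
proj = X @ Vt[:2].T                           # (N, 2) scatter-plot coordinates
explained = S[:2] ** 2 / (S ** 2).sum()       # variance captured by the view
```

For real natural-image patches the cloud is far from Gaussian, so a linear view like this is only a baseline against which nonlinear embeddings can be compared.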
1. Visualization of similarity spaces: Take a set of images. Compute similarity over some metric. Generate visual representation of the space. Have sliders to change parameters, e.g., spatial filtering of image (might get more or less similar). And so on.
2. Visualization of brain imaging data collected longitudinally. We have time-series data across multiple slices in the brain. In addition we have behavioral data associated with the slices. Again, an interactive visualization tool would be very useful.
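For item 1, one standard way to turn pairwise similarities into a spatial layout is classical multidimensional scaling (MDS). A self-contained sketch; the distance matrix here comes from synthetic planar points purely to sanity-check that the embedding reproduces the distances:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points from a pairwise-distance matrix D via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]            # take the top `dim`
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Sanity check on points that really live in the plane:
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 2))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
Y = classical_mds(D)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
assert np.allclose(D, D2, atol=1e-8)
```

With image similarities in place of Euclidean distances, the slider idea above amounts to recomputing D (e.g. after spatial filtering) and re-embedding interactively.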
Visualize ground water dynamics.
1. Better interfaces for choosing and modifying colors in graphics applications
2. Designing expert palettes and their interfaces
3. Color palette organization
4. Visualization of color spaces
Here's a more specific project idea:
The Problem: The effects of different combinations of colors can be somewhat successfully described and predicted by color theory, but 1. there are many dissenting opinions and 2. the theory tends to break down for real-life compositions (those with more than 2-5 colors in simple juxtaposed shapes).
A possible solution using visualization techniques: If the preferences for complex color interactions could be more easily studied, a theory of such interactions could possibly be developed. We propose plotting color-averaged areas of famous works (so that the preference factor is widely agreed upon) in a five-dimensional space -- RGB or HSV and x and y -- and looking for clusters. One of the chief challenges would be finding such clusters (if they exist) in a higher dimensional space. Some of it could be done analytically but we'd like to have a visual means of exploration too.
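As a sketch of the cluster-hunting step, here is a plain k-means in the 5D (R, G, B, x, y) space. The two synthetic groups below stand in for color-averaged areas extracted from real paintings; real clusters (if any) would of course be less obliging:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; a stand-in for fancier cluster detection in 5D."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return C, labels

# Hypothetical color-averaged areas as (R, G, B, x, y) rows, two obvious groups.
rng = np.random.default_rng(2)
warm = rng.normal([0.9, 0.3, 0.1, 0.2, 0.8], 0.05, size=(50, 5))
cool = rng.normal([0.1, 0.3, 0.9, 0.7, 0.2], 0.05, size=(50, 5))
X = np.vstack([warm, cool])
C, labels = kmeans(X, k=2)
```

A visual exploration tool would complement this: projecting the 5D points to 2D (with the cluster labels as color) lets a viewer judge whether the clusters are real structure or artifacts of the metric.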
A possible solution using computational modeling: For the mathematically inclined, it is possible to model an image by dividing it into an n x n grid of color-averaged areas and treating each area as one coordinate, so that each image is a point in an n x n-dimensional space. One would plot many images this way and then rate them on a preference scale. Each image is then expressed as a long equation in which each area is a variable with a coefficient; using the preference ratings, one can solve a large set of simultaneous equations for the coefficients.
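Once each image is a row of area values and each rating a right-hand side, the simultaneous-equation step is linear least squares. A sketch on synthetic data (the dimensions, noise level, and the reduction of each color area to a single scalar are all arbitrary simplifications):

```python
import numpy as np

rng = np.random.default_rng(3)
n_images, n_areas = 200, 16                   # a 4x4 grid -> 16 variables
X = rng.uniform(size=(n_images, n_areas))     # color-averaged area values
true_w = rng.normal(size=n_areas)             # "ground-truth" coefficients
y = X @ true_w + rng.normal(0, 0.01, n_images)  # noisy preference ratings

# With more images than coefficients the system is overdetermined;
# least squares gives the best-fit coefficients.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(w, true_w, atol=0.05)
```

The interesting (and open) question is whether human preference is anywhere near linear in the area values; a linear fit is just the simplest model to falsify first.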
Height functions are a very common form of data to deal with in various application domains: geography, hydrography, surface material properties, single view range imagery, etc.
Different approaches exist to map such graphs to feature maps, based on geometric features of the height functions. These range from local (differential) to global (regional) methods, capturing features such as ridge/valley lines, regional segmentations, and watersheds.
Each method has its pros and cons and its favored applications.
The first project I have in mind is to implement a number of these approaches, from local to regional, and to find good ways to visualize and compare the results.
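The local (differential) end of this spectrum can be sketched by classifying points of a height function from the eigenvalues of its Hessian. The toy terrain, finite-difference step, and thresholds below are illustrative only:

```python
import numpy as np

def classify(h, x, y, eps=1e-4):
    """Label a point of a height function h(x, y) by the eigenvalues of its
    Hessian (central finite differences): peak, pit, saddle, or degenerate."""
    d = 1e-3
    hxx = (h(x + d, y) - 2 * h(x, y) + h(x - d, y)) / d ** 2
    hyy = (h(x, y + d) - 2 * h(x, y) + h(x, y - d)) / d ** 2
    hxy = (h(x + d, y + d) - h(x + d, y - d)
           - h(x - d, y + d) + h(x - d, y - d)) / (4 * d ** 2)
    w = np.linalg.eigvalsh(np.array([[hxx, hxy], [hxy, hyy]]))  # ascending
    if np.all(w < -eps):
        return "peak"
    if np.all(w > eps):
        return "pit"
    if w[0] < -eps and w[1] > eps:
        return "saddle"
    return "flat/degenerate"

terrain = lambda x, y: np.sin(x) * np.sin(y)  # toy height function
assert classify(terrain, np.pi / 2, np.pi / 2) == "peak"
assert classify(terrain, 0.0, 0.0) == "saddle"
```

Mapping these labels over a grid gives a first feature map to compare against the regional (watershed-style) segmentations.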
The second project - which might be included in the first - is to do the same kind of work for a method I developed a few years ago. It is derived from an old concept in geometry: the indicatrix (of Dupin). Basically, you take a slice of the graph of varying size (introducing the notion of scale space) and look at the local shape of the cut. For continuous (Morse) functions you have a small set of possibilities of interest. The interest of this method is that it allows merging local and regional methods in one framework. It also relates to topography in a more "comprehensive" way by defining valleys and ridges as elongated regions rather than lines on a surface. Flow fields can be derived to illustrate how the different labels (cut types) interact on any given height function. One difficulty is how to express the additional dimension of scale.
The above two methods are particularly relevant to vision (and geography, etc.).
The third project is to use the same approaches as above, in the context of 3D surfaces this time. A height function can be introduced either regionally (view points) or locally (normal field).
I can describe a 4th project, if you need more ideas ... It would be about the volumetric and surface flow fields involved in the computation of 3D skeletons. But since I am still developing the algorithms to retrieve these, this might prove too early.
The algorithms for projects 1 and 2 exist, though. For 1 there are many recent papers on the topic. For 2 I have my own set of algorithms (implemented under the Khoros environment, in C) and results.
I'll be happy to discuss any of the above in more details.
New ideas for 2000
You may want to add to that paragraph, as a source of data and interest in spatial geology, the use of the new (more accurate) topographical data from Mars. Extracting significant rivulets and visualizing them in the context of the surrounding planetary terrain may help decide whether there is water on the red planet.
(1) Interactive 2D CAD:
Explore the use of 2D skeletal graphs a la Kimia to design 2D free-form shapes.
We have conducted some initial work along these lines: a project called Genesys.
This can be combined with existing techniques in Sketch/JOT.
For example, a drawing gesture defines the graph loci.
Another gesture (hand pressure ?) defines the max ball varying radius.
This can be used to generate canal (tubular) surfaces interactively.
Other more complex surfaces can have their skeleton first computed
and presented to the user for further modification. The interface is a big issue here.
So is the manipulation of the tree-like graph.
Sci.Viz can be explored by drawing 2D shapes over a background or workspace
textured with images of complex geometry. E.g. extraction of significant pathways
on radiological (2D) imagery.
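In 2D, the canal-surface idea reduces to offsetting the skeleton by the varying maximal-ball radius. A sketch, where the drawn stroke and the "pressure" profile are invented stand-ins for the gesture input:

```python
import numpy as np

def canal_outline(P, r):
    """Boundary of a 2D canal shape: offset a skeleton polyline P (N, 2)
    by a varying radius r (N,) along the curve normals."""
    T = np.gradient(P, axis=0)                     # tangents (finite diff.)
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    N = np.column_stack([-T[:, 1], T[:, 0]])       # 90-degree rotation
    left, right = P + r[:, None] * N, P - r[:, None] * N
    return left, right

# A drawn stroke (skeleton) with "pressure" giving the varying radius.
s = np.linspace(0, 1, 50)
P = np.column_stack([s, 0.2 * np.sin(2 * np.pi * s)])
r = 0.05 + 0.03 * s                                # radius grows along stroke
left, right = canal_outline(P, r)

# Sanity check: a straight skeleton with constant radius gives width 2r.
straight = np.column_stack([np.linspace(0, 1, 10), np.zeros(10)])
l2, r2 = canal_outline(straight, np.full(10, 0.1))
assert np.allclose(np.linalg.norm(l2 - r2, axis=1), 0.2)
```

This is only the generation direction; the harder interface problem of editing an existing skeletal graph (and its radius function) interactively remains.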
(2) Process-grammar for 2D shapes.
Michael Leyton (Rutgers U.) introduced a process grammar more than 10 years ago.
http://www.lems.brown.edu/vision/people/leymarie/Refs/Perception/LeytonM.html
2D blobby shapes, like live cells using pseudopods, can have their dynamics modeled
via this grammar.
The grammar is based on the symmetry-curvature duality principle which
relates certain extrema of curvature to 2D skeletal graph branch ends.
The goal here would be to explore the predictive power of the grammar for such
natural shapes. This can then be extended to growing/evolving biological shapes
to study shape properties through time.
Visualizing this complex data - initially we have the trace of outlines through time - in a more compact and significant way, via the features of Leyton's process grammar, may lead to significant advances in understanding these important biological processes.
I am in touch with Leyton.
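The symmetry-curvature duality can be probed numerically: locate curvature extrema on a blobby outline and treat them as candidate skeletal branch ends. A sketch on a synthetic three-lobed polar curve (the shape and sampling density are arbitrary choices):

```python
import numpy as np

# A three-lobed "blobby cell" outline in polar form.
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
r = 1.0 + 0.3 * np.cos(3 * theta)

# Curvature of a polar curve r(theta), using periodic central differences.
dt = theta[1] - theta[0]
r1 = (np.roll(r, -1) - np.roll(r, 1)) / (2 * dt)
r2 = (np.roll(r, -1) - 2 * r + np.roll(r, 1)) / dt ** 2
kappa = (r ** 2 + 2 * r1 ** 2 - r * r2) / (r ** 2 + r1 ** 2) ** 1.5

# Local curvature maxima (circular): candidate skeleton branch ends,
# here one per lobe tip.
is_max = (kappa > np.roll(kappa, 1)) & (kappa > np.roll(kappa, -1))
tips = theta[is_max]
```

Tracking how these extrema appear, move, and merge as the outline deforms over time is exactly what the process grammar is meant to describe.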
(3) Interactive 3D CAD & Reverse engineering of free-form shapes
Do what is proposed in (1) above, but in 3D.
Sketch/JOT provides a starting point.
Use of single snapshots via perspective has already been explored
using Sketch (cf. myself and Loring Holden).
Use 3D skeletal graphs to either directly input/overlay
tubular branching surfaces,
http://www.lems.brown.edu/vision/people/leymarie/Notes/CurvSurf/Surfaces.html
or via an image analysis of 3D input scans
(like we get at the SHAPE Lab), to obtain a graph of more complex shapes.
http://www.lems.brown.edu/vision/extra/SHAPE/
Then explore the use of Sci.Viz. to analyze these constructs
(e.g. to each skeletal sheet and curve is associated a flow field,
2D and 1D, respectively, provided
by the change in the radius function of maximal balls).
By interactively changing these flows, show the deformation in shape
of the original object.
Study the nature of the simplifications introduced by pruning the skeletal graph.
Perform applications similar to (1), like extracting lung or coronary pathways,
but on the basis of 3D medical data.
(4) STITCH visualization
Go to the web site to see what this project is about:
http://www.cs.brown.edu/research/graphics/research/sciviz/archaeology/stitch/
But, basically, we now have many different geometrical constructs we can extract
from the 3D scans of pottery sherds,
like global axes and curvatures:
http://www.lems.brown.edu/vision/extra/SHAPE/Presentations/Xevi00/XeviDrew00Aug23.html
http://www.lems.brown.edu/vision/extra/SHAPE/Presentations/YanCao00Aug/YanCao00Aug9.html
skeletal graphs:
http://www.lems.brown.edu/vision/extra/SHAPE/Presentations/Leymarie00Aug16/Leymarie00Aug16.html
curvature maps & interactive ridge/valley following:
http://www.cs.brown.edu/research/graphics/research/sciviz/archaeology/stitch/live-wire.html
How to make sense of all these (3D) geometric features?
Also, it would be interesting to interactively reconstruct a pot from the 3D scanned sherds, using force fields (attractive and repulsive) based on the above geometric constructs.
E.g. curvature flow fields in the vicinity of breaks could guide the reconstruction
(initially) interactively.
(5) Visualization of space curves, surface intersections and central loci
Consider the computation and visualization of bisectors, i.e., the surfaces
at mid-distance between surface patches (by pair, triplet, quadruplet ...).
Starting with even the simplest patches: triangles or polygons in 3D space,
the bisectors are general quadrics, i.e., trivariate 2nd degree implicit polynomial surfaces (IPS).
The intersection of these quadrics generates space curves: in general
trivariate quartics. We may call these tri-sectors by analogy.
In order to compute such space curves, one must in practice rely on numerical
computation for the non-degenerate cases (the degenerate ones reduce to conics and lines).
There are many ways to do this:
(i) simplify the system of implicit polynomials
describing the intersecting bisectors, on the basis of the theory of resultants
(a.k.a. the PhD of J. Canny).
(ii) numerically solve the system of equations as is, assuming a good initial guess.
(iii) isolate an initial point on the curve, by introducing another set of equations,
e.g. via Lagrange multipliers (say we minimize distance), solve for this point,
and then trace the curve in two opposite directions, relying on differential
geometry.
(iv) partition space in an octree and solve the equations at increasing resolution,
by checking intercepts with lattice faces.
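Method (iii) can be sketched on the simplest non-trivial input: two unit spheres, whose true intersection is the circle x = 0.25. The projection step uses a least-norm Newton correction onto both surfaces, and tracing steps along the cross product of the two gradients; all the numbers (step size, iteration counts, starting point) are illustrative:

```python
import numpy as np

# Two implicit quadric surfaces: the unit sphere and a shifted unit sphere.
c = np.array([0.5, 0.0, 0.0])                  # center of the second sphere
F = [lambda p: p @ p - 1.0,
     lambda p: (p - c) @ (p - c) - 1.0]
grad = [lambda p: 2.0 * p,
        lambda p: 2.0 * (p - c)]

def project(p, iters=20):
    """Newton correction onto both surfaces (least-norm step)."""
    for _ in range(iters):
        f = np.array([F[0](p), F[1](p)])
        J = np.vstack([grad[0](p), grad[1](p)])    # 2x3 Jacobian
        p = p + np.linalg.lstsq(J, -f, rcond=None)[0]
    return p

def trace(p0, steps=100, h=0.05):
    """Predictor-corrector tracing along cross(grad F1, grad F2)."""
    p, pts = project(p0), []
    for _ in range(steps):
        t = np.cross(grad[0](p), grad[1](p))       # tangent of the curve
        p = project(p + h * t / np.linalg.norm(t))
        pts.append(p)
    return np.array(pts)

curve = trace(np.array([0.3, 0.9, 0.1]))
# Every traced point lies (numerically) on both quadrics:
assert max(abs(F[0](p)) + abs(F[1](p)) for p in curve) < 1e-8
```

Plotting the residuals and step behavior of such a tracer for harder quadric pairs is one concrete way to visualize the numerical stability questions raised below.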
Depending on the initial conditions and the numerical stability (or lack thereof)
of the method used/implemented, we will have different behaviors.
Thus, one goal of this study is to visualize in clever ways the numerical
stability and accuracy of algorithms for the intersection of IPS.
Another is to visualize the special 3D geometry of central loci, such as the initial
points in (iii) above.
Now, we started with the "simple" case of polygons - which, by the way, need to be broken
down into vertices, edges, and planar sections, giving 6 possible bisectors/quadrics.
Once we tackle the above, we can turn our attention to a larger set of input objects,
by adding :
- canal surfaces, including Dupin's cyclides (whose bisectors degenerate to space curves)
- the extension of our "polygonal elements": spheres & cylinders of varying radii.
- other natural quadrics: cones
- cubics
- etc.
Even when considering only the polygons --> quadrics case, this is both challenging and useful
enough. Indeed, these represent the main class of CAD primitives in use today.
The intersection of these objects must be computed precisely, for example
to feed a numerical machining tool.
Blending (soft/smooth joins) is another area to be explored here,
which involves similar techniques as discussed above.
This project would likely be valuable to people with interest in:
Applied Maths, Comp. Geometry, CAD & CAGD,
Molecular chemistry, and shape modeling in general.
(6) Art & Visual perception
The objective here would be to
evaluate the work of Rudolf Arnheim et al.:
http://www.lems.brown.edu/vision/people/leymarie/Refs/VisualArt/General.html
Arnheim introduces the concept of psychological forces
in understanding our visual percepts and applies
this to the study of visual arts.
This is reminiscent of the Gestalt school.
Quoting Arnheim:
"What a person perceives is not only an arrangement of objects,
colors, shapes, movements and sizes, but, perhaps first of all, an
{\em interplay of directed tensions}. The latter are inherent in
any percept. Because they have magnitude and direction they are
called {\em psychological forces}."
These force fields are described (in his books and in others'),
and one goal would be to implement a set of these ideas and visualize
them on paintings, and eventually sculptures, illusory outlines, etc.
One could go further and modify a piece of art by acting on these force field
patterns, creating more visual tension or reducing it.
The potential for a better understanding of what the artist wanted to convey
is palpable.
This project would be of interest to students in Arts,
Comp. Graphics, Cog. Sci., Comp. Vision (at least).