CS237 Project Idea List

These ideas, most less-than-completely-formed-and-described, were suggested by various researchers around campus. If you are interested in following them up for your project, please contact the provider of the idea or chat with me (dhl) for more information.

From Bill_Warren@brown.edu, Cognitive and Linguistic Sciences

Two possibilities come to mind:

(1) Creating a virtual environment. This isn't exactly data visualization, but we're doing experiments on human navigation ("cognitive maps") in VR, using the 40 x 40 ft. VENLab with an HMD, tracker, and SGI graphics. It may be possible for a group to create a virtual environment for such an experiment. For example, we're beginning to build a "Hundred acre wood" environment, and pieces of that may be doable. We use WorldToolKit as a platform, so there would be a learning curve involved.

(2) Prism views. A smaller project is to simulate what the world looks like through a distorting prism. This would involve learning a bit of optics (e.g. the transfer function for a wedge prism) and computing the view through it for a moving observer. My interest is in calculating and displaying the optic flow field (a vector field) for different environmental structures with different types of distortions.
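
As a first step, the optics can be sketched in a few lines under the thin-prism approximation, in which a wedge prism deviates every ray by a roughly constant angle delta = (n - 1) * alpha. This is only an illustrative sketch; the function names and refractive index are assumptions, not part of the proposal.

```python
# Sketch: first-order view displacement through a thin wedge prism.
# Assumes the thin-prism approximation: every ray is deviated by
# delta = (n - 1) * alpha, where n is the refractive index of the glass
# and alpha is the wedge (apex) angle. Names are illustrative.

def wedge_deviation(apex_angle_deg, n=1.5):
    """Angular deviation (degrees) of a ray through a thin wedge prism."""
    return (n - 1.0) * apex_angle_deg

def displaced_direction(azimuth_deg, apex_angle_deg, n=1.5):
    """Apparent visual direction of a point seen through the prism."""
    return azimuth_deg + wedge_deviation(apex_angle_deg, n)

# A 10-degree wedge of n = 1.5 glass shifts every point by about 5 degrees.
print(wedge_deviation(10.0))           # 5.0
print(displaced_direction(2.0, 10.0))  # 7.0
```

Computing the displaced direction for every visible point over time gives the distorted optic flow field mentioned above.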

From David Laidlaw (dhl@cs) Computer Science

I'll describe a number of possible areas from which projects could be defined. In almost all of the areas below, initial steps have been taken to get needed data and to scope out a larger research agenda.

Evaluate, via a user study, how various kinds of textures can be used to encode multi-valued data.

Something motivated by one of the research proposals we have read (or one that we haven't read):

Develop anatomical visualization for brain tumor pre-surgical planning.

Evaluate whether visualizing neural diffusion rate information is useful.

Build physical prototypes for true 3D visualization.

Represent uncertainty visually.

From Stephen_Gatesy@Brown.edu, Evolutionary Biology

The best idea that I could come up with for a short project involved visualizing joint position (angle) and rotation (speed and direction of angular change) in my pigeon animations.

I'm looking for ways to show viewers what's happening at a joint (say the elbow) without plotting elbow angle vs time in a regular graph. I'd prefer to show a viewer the pigeon skeleton in a 3/4 perspective view, but have more specific data available as the wings are flapping.

My preliminary ideas involved coloring the bones around the joint according to angle (some sort of spectrum) or to velocity (perhaps like a Doppler shift?). I want to avoid clutter, so adding extra icons, vectors, etc. is less welcome than something that will fit into the existing geometry of the model.
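
The spectrum-coloring idea can be sketched as a simple mapping from joint angle to hue. The angle range and the red-to-blue convention below are illustrative choices, not part of the original proposal:

```python
import colorsys

# Sketch: map a joint angle (or, equally, an angular velocity) onto a hue
# so the bones themselves carry the data. Range and colors are illustrative.

def angle_to_rgb(angle_deg, lo=0.0, hi=180.0):
    """Map an angle in [lo, hi] to an RGB color along a red->blue spectrum."""
    t = min(max((angle_deg - lo) / (hi - lo), 0.0), 1.0)
    hue = (2.0 / 3.0) * t          # 0 = red, 2/3 = blue in HSV hue space
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

print(angle_to_rgb(0.0))    # fully flexed -> red (1.0, 0.0, 0.0)
print(angle_to_rgb(180.0))  # fully extended -> blue
```

The same mapping applied per frame would let each still portray the motion in progress, as hoped for below.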

In the end I envision a skeletal model with real movement, within which a viewer can access more detailed data if they wish. Patterns of motion among joints might be more visible, but not so glaring as to distract from the wingbeat itself. I'm also hoping that each frame, if viewed as a still, could portray the motion in progress, not just a static position.

From Mijail_Serruya@Brown.edu and John_Donoghue@Brown.edu, Neuroscience

We are recording neural signals from patients who want to be able to control prosthetic devices. We would like to experiment with what patients are able to learn to do with these signals when they are used to drive various output "devices." For example, we might have the signal position the cursor on a computer screen continuously and/or produce other inputs, like mouse clicks or keyboard events.

From Peter_Richardson@Brown.edu, Engineering

Residence time is a term that describes how long fluid stays near a particular location. Reaction with a blood vessel wall (i.e. with the endothelial cells that cover it) may be affected by the spatial distribution of residence times for particles carried in the flow. How can this be visualized to help pathologists and pharmacologists make comparisons with local effects at vessel walls visualized with stains, or labelled biochemical probes?
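
A minimal sketch of one way to quantify this, assuming particle trajectories are available from a flow simulation: count the time each particle spends within a chosen radius of a wall location. The trajectory, wall point, and radius below are made up for illustration.

```python
import numpy as np

# Sketch: residence time of a particle near a wall point, computed from a
# discretized trajectory. All inputs here are illustrative placeholders.

def residence_time(trajectory, wall_point, radius, dt):
    """Total time a particle path spends within `radius` of `wall_point`."""
    d = np.linalg.norm(trajectory - wall_point, axis=1)
    return float(np.sum(d < radius)) * dt

# A particle moving along the x-axis past a wall point at the origin.
traj = np.stack([np.linspace(-1.0, 1.0, 201), np.zeros(201)], axis=1)
print(residence_time(traj, np.array([0.0, 0.0]), 0.1, dt=0.01))
```

Mapping such values over the vessel wall would give the spatial distribution to compare against stained or probe-labelled tissue.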

From Ben Greenberg, bdg@butler.org, Neurosurgery

Put diffusion and structural MRI representations together. For transcranial magnetic stimulation research, you'd ultimately want simultaneous representations of brain structure, connectivity, intensity-dependent imposed magnetic fields, and ideally, EEG or evoked potentials as they change intensity and location over milliseconds post stimulation. Could the scale of the events visualized also change (from cellular to local network to multiple large networks?) [this one is also related to the pre-operative surgery idea -- David]

Can an application for exposure therapy for patients with specific kinds of phobias (flying, heights, contamination, crowds) be made more interactive (I've never seen a Cave, but that seems to be the idea), so that the intensity of exposure and the virtual environments presented could be varied to become progressively more intense? Some such VR systems already exist, but I don't think they're that sophisticated. A really good one could be marketed.

From Terry_Tullis@brown.edu, Geology

Here are a couple of possible project ideas. They are all related to looking at earthquake simulation results.

1) I have a PHIGS program that displays earthquake simulation results along a fault in 3D. Converting this to a WorldToolKit (WTK) program and comparing effectiveness in various 3D environments would be useful.

2) The current display of information along a fault is limited to a single scalar value at each point of the fault surface. Extend that display to include more variables simultaneously.

3) Display time-variation of data on the fault surface, perhaps using time as a third dimension to create a volume.

Run different numerical simulations on a fault and compare the results visually to understand the limitations of the various methods and the significance of the differences.

From David_Mumford@Brown.edu, Applied Math

Are you teaching anything about tools for seeking structure in high dimensional data sets, i.e. given a billion points in 10-dimensional space, figuring out if they lie, more or less, on some curved surface in R^10? I know there has been some work on this, but I'm not too familiar with it. If so, I might have a project.

The motivating problem for me is to "look" at 4x4 patches of images and classify them. A 4x4 patch is given by 16 pixels, so it defines a point in R^16. We can easily generate gigabytes of such patches, hence points in R^16. But a gigabyte is still small compared to the elbow room in R^16. We have, of course, some ideas about what the cloud of these samples looks like, but nothing really good yet.
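
One standard first probe of such a cloud is principal component analysis: if the samples lie near a k-dimensional surface, most of the variance concentrates in k eigenvalues of the covariance. A sketch on synthetic data, with random points near a 2D plane in R^16 standing in for real patches:

```python
import numpy as np

# Sketch: PCA as a first test of "does the cloud look low-dimensional?".
# The data below is synthetic: samples on a random 2D plane in R^16
# plus a little off-plane noise, standing in for real image patches.

rng = np.random.default_rng(0)
plane = rng.normal(size=(2, 16))                 # a random 2D subspace
pts = rng.normal(size=(20000, 2)) @ plane        # samples on the plane
pts += 0.01 * rng.normal(size=pts.shape)         # small off-plane noise

centered = pts - pts.mean(axis=0)
cov = centered.T @ centered / len(pts)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# Nearly all variance should sit in the first two components.
print(eigvals[:2].sum() / eigvals.sum())
```

PCA only finds flat structure; detecting a *curved* surface in R^16 needs the nonlinear methods the question alludes to, but the eigenvalue spectrum is the usual first diagnostic.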

From Michael_Tarr@brown.edu, Cognitive and Linguistic Sciences:

1. Visualization of similarity spaces: Take a set of images. Compute similarity over some metric. Generate visual representation of the space. Have sliders to change parameters, e.g., spatial filtering of image (might get more or less similar). And so on.
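
The first idea can be sketched with classical multidimensional scaling (MDS), one standard way to turn pairwise distances into plottable coordinates. The "images" below are random vectors standing in for real ones, and Euclidean distance stands in for whatever metric is chosen:

```python
import numpy as np

# Sketch: classical MDS embeds a matrix of pairwise distances into a
# low-dimensional space for plotting. Inputs here are illustrative.

def classical_mds(dist, dim=2):
    """Embed a pairwise-distance matrix into `dim` dimensions."""
    n = len(dist)
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j            # double-centered Gram matrix
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:dim]           # top eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

imgs = np.random.default_rng(1).normal(size=(5, 64))    # 5 fake "images"
d = np.linalg.norm(imgs[:, None] - imgs[None, :], axis=2)
coords = classical_mds(d)
print(coords.shape)   # (5, 2)
```

The sliders mentioned above would re-filter the images, recompute `d`, and re-embed, so the layout animates as parameters change.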

2. Visualization of brain imaging data collected longitudinally. We have time-series data across multiple slices in the brain. In addition we have behavioral data associated with the slices. Again, an interactive visualization tool would be very useful.

From Eileen Vote (Eileen_Vote@brown.edu) Archaeology

Viewing Artifact Data in a Dynamic Way: I've got about 150,000 artifacts plotted in the GIS application, ArcView, over in the geology lab in MacMillan Hall. You might remember that I sent you an early image [and a second image] of what the data looked like plotted with one artifact type, Pottery. Unfortunately, the application does not allow me to show more than one artifact type plotted at once so the analysis possibilities are limited. Students could work on the problem of how to represent multiple types of artifacts at the same time in a 3D format. I have all the data (100 layers X 15 artifact types) in table format so it would be pretty easy for them to work with it.

From George_Karniadakis@brown.edu

Perhaps you can incorporate ideas about mesh generation and quality inspection in the Cave. One can color elements according to Jacobians, move nodes by hand, etc.
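
One simple corner-quality measure that could drive such coloring is a scaled-Jacobian-style quantity; for a triangle corner it reduces to the sine of the corner angle (1.0 when the two edges are perpendicular, near 0 when the element degenerates). A sketch with illustrative points and a hypothetical function name:

```python
import numpy as np

# Sketch: a per-corner element quality measure for triangle meshes.
# This is one common choice (sine of the corner angle), not the only one.

def corner_quality(p0, p1, p2):
    """Sine of the corner angle at p0: 1.0 for perpendicular edges, 0 if flat."""
    e1 = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    e2 = np.asarray(p2, dtype=float) - np.asarray(p0, dtype=float)
    area2 = abs(e1[0] * e2[1] - e1[1] * e2[0])   # twice the triangle area
    denom = np.linalg.norm(e1) * np.linalg.norm(e2)
    return area2 / denom if denom else 0.0

print(corner_quality([0, 0], [1, 0], [0, 1]))     # 1.0: right-angled corner
print(corner_quality([0, 0], [1, 0], [2, 0.01]))  # near-degenerate: ~0.005
```

Mapping such values onto a colormap per element is exactly the kind of in-Cave inspection suggested above.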

From Gregory Jay, GJay@Lifespan.org, Rhode Island Hospital Orthopaedics

A project for those interested in modeling problems in biomedical engineering... Consider pulsus paradoxus, which is the decrease in blood pressure with inspiration. This is a pathological vital sign which is very important in evaluating patients with asthma, croup, pneumothorax, and pericardial tamponade. An overly simplified representation of blood pressure could be a sine wave. Imagine superimposed on that signal a second sine, something like 0.5 sin(f/4), which produces a periodic decrease in amplitude at a lower frequency (i.e. respiration). In reality the blood pressure waveform is more complex than this. Modeling and analysis of real blood pressure waveforms could be accomplished with fast orthogonal search, which provides an approximation to the Fourier transform.

I am also interested in determining a molecular model for the molecule I am studying. I would like to talk to you about this via email to determine your interest. It is a big molecule: a polyprotein with up to 12 distinct regions. All included, the molecular weight would be up to 400 kDa. It is also glycosylated. I last looked into this challenge 2 years ago. At that time the modeling software could only handle a piece at a time, and without the sugars. I don't know what is available presently or how this field has advanced.
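
The toy waveform described above can be sketched directly: a cardiac "pressure" sine whose amplitude is modulated by a slower respiratory sine. The frequencies and modulation depth below are illustrative choices, not physiological constants:

```python
import numpy as np

# Sketch: simplified pulsus paradoxus waveform -- a cardiac sine whose
# amplitude dips once per (4x slower) respiratory cycle, as in the text.
# All constants are illustrative.

f_heart = 1.2                 # beats per second (~72 bpm)
f_resp = f_heart / 4.0        # respiration roughly 4x slower
t = np.linspace(0.0, 10.0, 4000)

# amplitude dips by up to 50% once per breath (the "0.5 sin" modulation)
envelope = 1.0 - 0.25 * (1.0 + np.sin(2 * np.pi * f_resp * t))
pressure = envelope * np.sin(2 * np.pi * f_heart * t)

print(pressure.min(), pressure.max())
```

Real waveforms would replace the inner sine with measured data, and fast orthogonal search would then pick out the dominant frequency components.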

From John_Hermance@brown.edu, Geology

Visualize earth's magnetic field as it varies over time.

Visualize ground water dynamics.

From Tom Banchoff (tfb@cs) Mathematics

Create an interactive Cave application for interacting with mathematical surfaces in four dimensions. Evaluate whether the environment helps students in Math 8 understand the surfaces better.

From Anne Spalter, Computer Science (ams@cs, x3-7615)

I have a number of ideas about projects having to do with color. The main goal of the project is to make it easier and more enjoyable to choose effective colors in computer graphics programs. Specific research underway includes:

1. Better interfaces for choosing and modifying colors in graphics applications

2. Designing expert palettes and their interfaces

3. Color palette organization

4. Visualization of color spaces

Here's a more specific project idea:

The Problem: The effects of different combinations of colors can be somewhat successfully described and predicted by color theory, but 1. there are many dissenting opinions and 2. the theory tends to break down for real-life compositions (those with more than 2-5 colors in simple juxtaposed shapes).

A possible solution using visualization techniques: If the preferences for complex color interactions could be more easily studied, a theory of such interactions could possibly be developed. We propose plotting color-averaged areas of famous works (so that the preference factor is widely agreed upon) in a five-dimensional space -- RGB or HSV and x and y -- and looking for clusters. One of the chief challenges would be finding such clusters (if they exist) in a higher dimensional space. Some of it could be done analytically but we'd like to have a visual means of exploration too.

A possible solution using computational modeling: For the mathematically inclined, it is possible to model an image by dividing it into an n x n grid of color-averaged areas and thinking of each color area as one coordinate in an n x n-dimensional space. One would plot many images this way and then rate them on a preference scale. Each image is then expressed as a long equation in which each location is a variable with a coefficient. Using the preference results, one can solve a huge set of simultaneous equations for the coefficients.
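
The final step reduces to linear least squares. A sketch with synthetic "images" and ratings (all names and numbers are illustrative; real data would replace the random inputs):

```python
import numpy as np

# Sketch: fit preference coefficients by least squares. Each image is a
# vector of n x n color-averaged cells; ratings are a linear function of
# the cells. Images, ratings, and the hidden coefficients are synthetic.

n = 4                                        # 4 x 4 grid -> 16 variables
rng = np.random.default_rng(2)
images = rng.uniform(size=(200, n * n))      # 200 color-averaged images
true_coeffs = rng.normal(size=n * n)         # hidden "taste" coefficients
ratings = images @ true_coeffs               # noiseless preference scores

# Solve the overdetermined system images @ c = ratings for c.
coeffs, *_ = np.linalg.lstsq(images, ratings, rcond=None)
print(np.allclose(coeffs, true_coeffs))      # True
```

With real, noisy ratings the recovered coefficients would only approximate the underlying preferences, but the same solve applies.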

From David Cooper, Engineering/LEMS (cooper@lems)

One of my graduate RA's has been working on modifying algebraic surfaces in an interactive way. This involves ways to change parameters or add synthetic data, specifying regions of use, visualizing the modifying information and the resulting representation, etc. I am sure we can find an interesting, challenging project for your students, and Andrew, my grad student, and they could interact on the project. Let me know if that seems interesting to you.

From Frederic Leymarie (leymarie@lems) Engineering/LEMS

Visualization of topographical features in graphs z=h(x,y)

Height functions are a very common form of data to deal with in various application domains: geography, hydrography, surface material properties, single view range imagery, etc.

Different approaches exist to map such graphs to feature maps, based on some geometric features of the height functions. These range from local (differential) to global (regional) approaches to capture features like: ridge/valley lines, to regional segmentation and watersheds.

Each method has its pros and cons and its favored applications.

The first project I have in mind is to implement a number of these approaches, from local to regional and find good ways to visualize the results and compare these.
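
One local (differential) approach from this family can be sketched directly: flag ridge-like points of z = h(x, y) where the gradient is small and the smaller eigenvalue of the Hessian is strongly negative. The thresholds and test surface below are illustrative, and derivatives are taken in grid-index units:

```python
import numpy as np

# Sketch: a local ridge detector for a height field, using finite
# differences in index-space units. Thresholds are illustrative.

def ridge_mask(h, grad_tol=0.01, curv_tol=-0.002):
    """Flag ridge-like grid points of a height field."""
    hy, hx = np.gradient(h)          # derivatives along rows (y), cols (x)
    hyy, hyx = np.gradient(hy)
    hxy, hxx = np.gradient(hx)
    grad_mag = np.hypot(hx, hy)
    tr = hxx + hyy                   # Hessian trace
    det = hxx * hyy - hxy * hyx      # Hessian determinant
    lam_min = 0.5 * (tr - np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0)))
    return (grad_mag < grad_tol) & (lam_min < curv_tol)

# A single ridge running along y = 0 on a Gaussian-profile wall.
yy = np.linspace(-2, 2, 81)
h = np.tile(np.exp(-yy ** 2)[:, None], (1, 81))   # constant along x
mask = ridge_mask(h)
print(bool(mask[40, :].all()))   # True: the crest row y = 0 is flagged
```

A project would compare several such local detectors against regional methods (watersheds, segmentation) on the same data.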

The second project - which might be included in the first - is to do the same kind of work for a method I developed a few years ago. It is derived from an old concept in geometry: the Indicatrix (of Dupin). Basically you take a slice of the graph of varying size (introducing a notion of scale space) and look at the local shape of the cut. For continuous (Morse) functions you have a small set of possibilities of interest. The advantage of this method is that it allows local and regional methods to be merged in one framework. It also relates to topography in a more "comprehensive" way by defining valleys and ridges as elongated regions rather than lines on a surface. Flow fields can be derived to illustrate how the different labels (cut types) interact on any given height function. One difficulty is how to express the additional dimension of scale.

The above two methods are particularly relevant to vision (and geography, etc.).

The third project is to use the same approaches as above, but in the context of 3D surfaces this time. A height function can be introduced either regionally (view points) or locally (normal field).

I can describe a 4th project, if you need more ideas ... It would be about the volumetric and surface flow fields involved in the computation of 3D skeletons. But since I am still developing the algorithms to retrieve these, this might prove too early.

The algorithms for projects 1 and 2 exist though. For 1 there are many recent papers on the topic. For 2 I have my own set of algorithms (implemented in C under the Khoros environment) and results.

I'll be happy to discuss any of the above in more detail.

New ideas for 2000

You may want to add to that paragraph, as a source of data and interest in spatial geology, the use of new (more accurate) topographical data from Mars. The extraction of significant rivulets, and their visualization in the context of the planet neighborhood, may help decide ... if there is water on the red planet.

(1) Interactive 2D CAD:
Explore the use of 2D skeletal graphs a la Kimia to design 2D free-form shapes. We have conducted some initial work along these lines -- a project called Genesys. This can be combined with existing techniques in Sketch/JOT.
For example, a drawing gesture defines the graph loci. Another gesture (hand pressure?) defines the varying radius of the maximal ball. This can be used to generate canal (tubular) surfaces interactively. Other, more complex surfaces can have their skeleton first computed and presented to the user for further modification. The interface is a big issue here, as is the manipulation of tree-like graphs.
Sci.Viz can be explored by drawing 2D shapes over a background or workspace textured with images of complex geometry. E.g. extraction of significant pathways on radiological (2D) imagery.
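
The canal-surface construction can be sketched in 2D: sweep a circle of varying radius along the skeletal curve and take the two sides of the envelope. The curve, radius profile, and sampling below are illustrative stand-ins for the drawing and pressure gestures:

```python
import numpy as np

# Sketch: a 2D "canal" shape from a skeletal curve plus a radius function.
# The curve and radius profile are illustrative placeholders for gestures.

t = np.linspace(0.0, 2 * np.pi, 200)
skeleton = np.stack([t, 0.3 * np.sin(t)], axis=1)   # the drawn gesture
radius = 0.2 + 0.1 * np.sin(3 * t)                  # the "pressure" gesture

# unit tangents and normals along the curve
tangent = np.gradient(skeleton, axis=0)
tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)

upper = skeleton + radius[:, None] * normal         # one side of the shape
lower = skeleton - radius[:, None] * normal         # the other side
print(upper.shape, lower.shape)                     # (200, 2) (200, 2)
```

Editing either gesture and re-sweeping gives the interactive free-form modification described above.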

(2) Process-grammar for 2D shapes.
Michael Leyton (Rutgers U.) introduced a process grammar more than 10 years ago. http://www.lems.brown.edu/vision/people/leymarie/Refs/Perception/LeytonM.html 2D blobby shapes, like live cells using pseudopods, can have their dynamics modeled via this grammar. The grammar is based on the symmetry-curvature duality principle, which relates certain extrema of curvature to 2D skeletal graph branch ends.
The goal here would be to explore the predictive power of the grammar for such natural shapes. This can then be extended to growing/evolving biological shapes to study shape properties through time.
Visualizing this complex data (initially we have the trace of outlines through time) in a more compact and significant way, via the features of Leyton's process grammar, may lead to significant advances in understanding these important biological processes.
I am in touch with Leyton.

(3) Interactive 3D CAD & Reverse engineering of free-form shapes
Do what is proposed in (1) above, but in 3D. Sketch/JOT provides a starting point. Use of single snapshots via perspective has already been explored using Sketch (cf. myself and Loring Holden).
Use 3D skeletal graphs either to directly input/overlay tubular branching surfaces (http://www.lems.brown.edu/vision/people/leymarie/Notes/CurvSurf/Surfaces.html), or, via an image analysis of 3D input scans (like we get at the SHAPE Lab, http://www.lems.brown.edu/vision/extra/SHAPE/), to obtain a graph of more complex shapes.
Then explore the use of Sci.Viz. to analyze these constructs (e.g. to each skeletal sheet and curve is associated a flow field, 2D and 1D, respectively, provided by the change in the radius function of maximal balls).
By interactively changing these flows, show the deformation in shape of the original object.
Study the nature of the simplifications introduced by pruning the skeletal graph.
Perform applications similar to (1), like extracting lung or coronary pathways, but on the basis of 3D medical data.

(4) STITCH visualization
Go to the web site to see what this project is about: http://www.cs.brown.edu/research/graphics/research/sciviz/archaeology/stitch/
But, basically, we now have many different geometrical constructs we can extract from the 3D scans of pottery sherds:
global axes and curvatures: http://www.lems.brown.edu/vision/extra/SHAPE/Presentations/Xevi00/XeviDrew00Aug23.html http://www.lems.brown.edu/vision/extra/SHAPE/Presentations/YanCao00Aug/YanCao00Aug9.html
skeletal graphs: http://www.lems.brown.edu/vision/extra/SHAPE/Presentations/Leymarie00Aug16/Leymarie00Aug16.html
curvature maps & interactive ridge/valley following: http://www.cs.brown.edu/research/graphics/research/sciviz/archaeology/stitch/live-wire.html
How to make sense of all these (3D) geometric features?
Also, it would be interesting to interactively reconstruct a pot, from the 3D scanned sherds and using force fields (attractive and repulsive) based on the above geometric construct. E.g. curvature flow fields in the vicinity of breaks could guide the reconstruction (initially) interactively.

(5) Visualization of space curves, surface intersections and central loci
Consider the computation and visualization of bisectors, i.e., the surfaces at mid-distance between surface patches (by pair, triplet, quadruplet ...).
Starting with even the simplest patches: triangles or polygons in 3D space, the bisectors are general quadrics, i.e., trivariate 2nd degree implicit polynomial surfaces (IPS).
The intersection of these quadrics generate space curves: in general trivariate quartics. We may call these tri-sectors by analogy.
In order to compute such space curves, one must in practice rely on numerical computations, except in the degenerate cases (which reduce to conics and lines).
There are many ways to do this:
(i) simplify the system of implicit polynomials describing the intersecting bisectors, on the basis of the theory of resultants (cf. the PhD work of J. Canny).
(ii) numerically solve the system of equations as is, assuming a good initial guess.
(iii) isolate an initial point on the curve, by introducing another set of equations, e.g. via Lagrange multipliers (say we minimize distance), solve for this point, and then trace the curve in two opposite directions, relying on differential geometry.
(iv) partition space in an octree and solve the equations at increasing resolution, by checking intercepts with lattice faces.
Depending on the initial conditions and the numerical stability (or lack thereof) of the method used/implemented, we will have different behaviors.
Thus, one goal of this study is to visualize in clever ways the numerical stability and accuracy of algorithms for the intersection of IPS.
Another is to visualize the special 3D geometry of central loci, such as the initial points in (iii) above.
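
The lattice idea in (iv) can be sketched with two illustrative quadrics standing in for actual bisectors: evaluate both on a 3D grid and keep the cells where both change sign, bracketing the intersection ("tri-sector") curve.

```python
import numpy as np

# Sketch: bracket the intersection curve of two implicit surfaces by
# finding lattice cells where both functions change sign. The two
# surfaces here are illustrative, not real bisectors: a unit sphere
# f = x^2 + y^2 + z^2 - 1 and a saddle g = z - x*y.

def bracket_cells(v):
    """Boolean mask of grid cells whose 8 corner values change sign."""
    corners = np.stack([v[dx:v.shape[0] - 1 + dx,
                          dy:v.shape[1] - 1 + dy,
                          dz:v.shape[2] - 1 + dz]
                        for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)])
    return (corners.min(axis=0) < 0.0) & (corners.max(axis=0) > 0.0)

n = 40
ax = np.linspace(-1.5, 1.5, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
f = x ** 2 + y ** 2 + z ** 2 - 1.0
g = z - x * y
curve_cells = bracket_cells(f) & bracket_cells(g)
print(curve_cells.sum() > 0)   # some cells bracket the intersection curve
```

Refining only the flagged cells gives the increasing-resolution octree behavior described in (iv); visualizing which cells survive at each level would expose the method's stability.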
Now, we started with the "simple" case of polygons -- which, by the way, need to be broken down into vertices, edges, and planar sections, giving 6 possible bisectors/quadrics. Once we tackle the above, we can turn our attention to a larger set of input objects, by adding:
- canal surfaces, including Dupin's cyclides (whose bisectors degenerate to space curves)
- the extension of our "polygonal elements": spheres & cylinders of varying radii.
- other natural quadrics: cones
- cubics
- etc.
Even when only considering the polygons --> quadrics case, this is both challenging and useful enough. Indeed, these represent the main class of CAD primitives in use today. The intersection of these objects must be computed precisely, for example to feed a numerical machining tool.
Blending (soft/smooth joins) is another area to be explored here, which involves similar techniques as discussed above.
This project would likely be valuable to people with interest in: Applied Maths, Comp. Geometry, CAD & CAGD, Molecular chemistry, and shape modeling in general.

(6) Art & Visual perception
The objective here would be to evaluate the work of Rudolf Arnheim et al:
http://www.lems.brown.edu/vision/people/leymarie/Refs/VisualArt/General.html
Arnheim introduces the concept of psychological forces in understanding our visual percepts and applies this to the study of visual arts. This is reminiscent of the Gestalt school.
Quoting Arnheim:
"What a person perceives is not only an arrangement of objects, colors, shapes, movements and sizes, but, perhaps first of all, an interplay of directed tensions. The latter are inherent in any percept. Because they have magnitude and direction they are called psychological forces."
These force fields are described (in his books and in others'), and one goal would be to implement a set of these ideas and visualize them on paintings, and eventually sculptures, illusory outlines, etc.
One could go further and modify a piece of art by acting on these force field patterns, creating more visual tension or reducing it. The potential for a better understanding of what the artist wanted to convey is palpable.
This project would be of interest to students in Arts, Comp. Graphics, Cog. Sci., Comp. Vision (at least).


David Laidlaw
Last modified: Fri Apr 12 14:41:22 EDT 2002