In the early stage of my PhD, I focused on estimating 2D and 3D body shape from images, which has applications spanning computer vision, computer graphics, and commercial products (e.g., the Kinect). Later, I developed a clothing model that automatically dresses any body shape, in any pose, realistically. I also worked on hair animation at Disney Research, Pittsburgh.
I like applying machine learning techniques to vision and graphics problems. I believe machine learning can be used to construct data-driven 3D models of human bodies, clothing, and hair that 1) can be estimated from sensor data; 2) produce realistic animations; 3) are low-dimensional enough to be computationally practical; and 4) can be applied to broader computer vision tasks and real-time applications.
Real-time hair animation is difficult primarily because of the sheer number of hair strands that must be modeled. We present a data-driven method for learning hair models that enables the creation and animation of many interactive virtual characters in real time (for gaming, character pre-visualization, and design). Our model builds on the recent success of reduced models for clothing and fluid simulation, but extends them in a number of significant ways. Users can interactively specify the constituent factors of hair appearance, such as length, softness, and wind strength/direction. We formulate collision handling as an optimization in this reduced subspace, solved with fast iterative least squares.
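As a rough sketch of the reduced-subspace idea (with a stand-in random basis and a sphere collider, not our actual hair model), collision handling can alternate between pushing penetrating points out of the collider in full space and re-fitting the reduced coordinates by least squares:

```python
import numpy as np

# Hypothetical reduced hair model: full strand positions are reconstructed
# from a low-dimensional coefficient vector via a learned linear basis.
rng = np.random.default_rng(0)
n_points, n_dims = 50, 8                           # 50 strand points, 8 reduced coords
B = rng.standard_normal((n_points * 3, n_dims))    # learned basis (stand-in)
mean = rng.standard_normal(n_points * 3)           # mean hair shape (stand-in)

head_center = np.zeros(3)
head_radius = 1.0

def project_out_of_sphere(x):
    """Push any point inside the head sphere back onto its surface."""
    pts = x.reshape(-1, 3)
    d = pts - head_center
    r = np.linalg.norm(d, axis=1, keepdims=True)
    inside = (r < head_radius).ravel()
    pts[inside] = head_center + d[inside] / r[inside] * head_radius
    return pts.ravel()

def resolve_collisions(z, iters=10):
    """Iterative least squares in the reduced subspace: alternate projecting
    the full-space points out of the collider and re-fitting the reduced
    coordinates to the corrected positions."""
    for _ in range(iters):
        x = mean + B @ z
        x_target = project_out_of_sphere(x.copy())
        if np.allclose(x, x_target):
            break
        z, *_ = np.linalg.lstsq(B, x_target - mean, rcond=None)
    return z
```

Because the fit happens in the 8-dimensional coefficient space rather than over all strand points, each iteration stays cheap enough for interactive rates.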
Physics-based simulation (PBS) of clothing produces very realistic results, but it has a key limitation that prevents it from scaling to internet-scale virtual fashion: it does not adapt to different body shapes. One has to manually pick (or even design) the most appropriate cloth size for every specific body shape. This is not acceptable at internet scale, because the number of people to dress is huge and too much human labor is involved.
DRAPE is a data-driven clothing model learned from physically simulated clothing examples. It is fast, realistic, completely automatic at run time, and, most importantly, it adapts to different body shapes in any pose. Simulate your clothing once, and use it everywhere.
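The data-driven recipe can be sketched as follows (a toy linear model with made-up sizes, not the actual DRAPE factorization): regress cloth vertex displacements against body shape parameters using simulated training pairs, then dress new bodies with the learned map:

```python
import numpy as np

# Toy stand-in for "learn clothing from simulation": we pretend a simulator
# produced cloth displacements for many body shapes, then fit a linear map.
rng = np.random.default_rng(1)
n_train, n_shape, n_cloth = 200, 10, 300              # hypothetical sizes

betas = rng.standard_normal((n_train, n_shape))       # body shape parameters
W_true = rng.standard_normal((n_shape, n_cloth * 3))  # stand-in for the simulator
displacements = betas @ W_true + 0.01 * rng.standard_normal((n_train, n_cloth * 3))

# Fit the shape-to-cloth map by regularized least squares (ridge regression).
lam = 1e-3
W = np.linalg.solve(betas.T @ betas + lam * np.eye(n_shape),
                    betas.T @ displacements)

def dress(beta, template):
    """Predict cloth vertices for a new body: template plus learned offset."""
    return template + (beta @ W).reshape(-1, 3)
```

Once `W` is learned, dressing a new body is a single matrix product — no per-body simulation, which is the "simulate once, use everywhere" property.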
Clothing obscures body shape, yet in scenarios such as surveillance we need to estimate the naked body shape under clothing. We model clothing as an additional layer displaced outward from the body contour, and this model greatly improves 2D body shape estimation from a single image. We show the observed contour with clothing (red) and the estimated underlying body shape (blue).
To the best of our knowledge, this is the first work to estimate detailed 3D body shape from a single image. The biggest difficulty is the loss of depth information. We use silhouette, edge, and, most importantly, shading cues to obtain an accurate body shape estimate.
This work automatically localizes facial feature points on 3D faces. It is a straightforward 3D extension of the well-known Active Shape Model. I use the results for 3D face recognition in another paper.
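The shape-model half of an Active Shape Model can be sketched in a few lines (toy data, hypothetical sizes — not the paper's implementation): learn a PCA basis of landmark configurations, then constrain a noisy landmark estimate by projecting onto the modes and clamping each coefficient to plausible values:

```python
import numpy as np

# Toy training set: landmark configurations that truly live in a 2D subspace,
# standing in for aligned 3D facial feature points.
rng = np.random.default_rng(2)
n_train, n_landmarks = 100, 20
true_basis = rng.standard_normal((2, n_landmarks * 3))
coeffs = rng.standard_normal((n_train, 2))
shapes = coeffs @ true_basis + 0.01 * rng.standard_normal((n_train, n_landmarks * 3))

# PCA of the training shapes (the statistical shape model).
mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
k = 2
P = Vt[:k]                         # principal modes of landmark variation
sigma = S[:k] / np.sqrt(n_train)   # per-mode standard deviations

def constrain(shape, n_sigma=3.0):
    """Project a landmark set onto the model and clamp each mode to +/- n_sigma,
    so the result is always a plausible face shape."""
    b = P @ (shape - mean)
    b = np.clip(b, -n_sigma * sigma, n_sigma * sigma)
    return mean + P.T @ b
```

In a full ASM, this constraint step alternates with a local image/mesh search that proposes updated landmark positions; the model keeps those proposals face-like.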