Recognizing Facial Expressions in Image Sequences using Local Parameterized Models of Image Motion

(with Yaser Yacoob)

This work explores the use of local parameterized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (e.g., affine) are popular for estimating motion in rigid scenes. We observe that, within local regions in space and time, such models not only accurately capture non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions, and we show how expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions, even in the presence of significant head motion. The motion tracking and expression recognition approach performed with high accuracy in extensive laboratory experiments involving 40 subjects, as well as on television and movie sequences.
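To illustrate the kind of parametric flow model the abstract refers to, the sketch below evaluates a standard six-parameter affine motion model and derives qualitative motion descriptors (divergence and curl) from its parameters. This is a generic illustration under assumed parameter names (a0-a5), not the paper's implementation, which augments the affine model with additional terms for facial regions.

```python
import numpy as np

def affine_flow(params, xs, ys):
    """Evaluate a six-parameter affine motion model at pixel coordinates.

    u(x, y) = a0 + a1*x + a2*y   (horizontal flow)
    v(x, y) = a3 + a4*x + a5*y   (vertical flow)

    Parameter names a0..a5 are illustrative conventions, not taken
    verbatim from the paper.
    """
    a0, a1, a2, a3, a4, a5 = params
    u = a0 + a1 * xs + a2 * ys
    v = a3 + a4 * xs + a5 * ys
    return u, v

def divergence(params):
    """Isotropic expansion/contraction encoded by the affine parameters."""
    return params[1] + params[5]

def curl(params):
    """Rotation about the viewing axis encoded by the affine parameters."""
    return -params[2] + params[4]

# Example: a pure expansion (a1 = a5 = 0.1) yields positive divergence
# and zero curl -- the sort of concise, interpretable description the
# abstract refers to.
params = (0.0, 0.1, 0.0, 0.0, 0.0, 0.1)
u, v = affine_flow(params, np.array([1.0]), np.array([1.0]))
```

A region that is expanding (e.g., a mouth opening in surprise) produces positive divergence, while a shearing or rotating region shows up in the curl and deformation terms; it is this mapping from a handful of parameters to intuitive feature motions that makes expression recognition tractable.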

Related Publications

Black, M. J. and Yacoob, Y., Recognizing facial expressions in image sequences using local parameterized models of image motion, Int. Journal of Computer Vision, 25(1), pp. 23-48, 1997; also Xerox PARC, Technical Report SPL-95-020, March 1995.

Black, M. J. and Yacoob, Y., Tracking and recognizing rigid and non-rigid facial motions using local parametric models of image motion, Fifth International Conf. on Computer Vision, ICCV'95, Boston, MA, June 1995, pp. 374-381.

Black, M. J., Yacoob, Y., and Ju, S. X., Recognizing human motion using parameterized models of optical flow, to appear in: Motion-Based Recognition, Eds. Mubarak Shah and Ramesh Jain, Kluwer Academic Publishers.