The 43rd IPP Symposium

Large Scale Learning of Face Manifolds

Sanjiv Kumar, Google Research

Manifold learning provides a principled way of extracting low-dimensional nonlinear structure from high-dimensional data. In this work, we present the largest-scale manifold learning study to date, involving millions of face images. Specifically, I will discuss the computational challenges in nonlinear dimensionality reduction via Isomap and Laplacian Eigenmaps, using a graph containing about 18 million nodes and 65 million edges. Most manifold learning techniques require spectral decomposition of dense matrices. The complexity of such a decomposition is O(n^3), where n is the number of data points. For n as large as 18 million, this computation becomes infeasible in both time and space. In this talk, I will briefly analyze two approximate SVD techniques for large dense matrices. The experiments reveal interesting counter-intuitive behaviors of the two approximations. Finally, I will show a fun application called People Hopper, which smoothly "morphs" one face into another by hopping over the face manifold.
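The abstract does not spell out which two approximate SVD techniques are compared. As a point of reference, the sketch below illustrates one widely used option, the Nystrom method, which approximates the top eigenpairs of a large dense positive semidefinite matrix (such as a kernel or affinity matrix arising in Isomap or Laplacian Eigenmaps) from a small sample of its columns, avoiding the O(n^3) decomposition of the full matrix. The sample size, RBF kernel, and function names are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of the Nystrom approximation for the eigendecomposition of a
# large dense PSD matrix, never forming the full n x n matrix. Illustrative
# only: the kernel, sample size, and toy data are assumptions.
import numpy as np

def nystrom_eigs(X, num_landmarks, kernel, num_eigs, seed=0):
    """Approximate the top eigenpairs of the full kernel matrix K(X, X)
    using only num_landmarks sampled columns."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=num_landmarks, replace=False)
    landmarks = X[idx]

    C = kernel(X, landmarks)   # n x m block of sampled columns
    W = C[idx, :]              # m x m block among the landmarks themselves

    # Eigendecompose only the small m x m block.
    evals, evecs = np.linalg.eigh(W)
    # Keep the top num_eigs eigenpairs (eigh returns ascending order).
    evals = evals[::-1][:num_eigs]
    evecs = evecs[:, ::-1][:, :num_eigs]

    # Nystrom extension: scale up to approximate eigenpairs of the full matrix.
    approx_evals = (n / num_landmarks) * evals
    approx_evecs = np.sqrt(num_landmarks / n) * (C @ evecs) / evals
    return approx_evals, approx_evecs

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel between the rows of A and the rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

# Toy usage: 2,000 points instead of 18 million, 200 landmarks, top 10 eigenpairs.
X = np.random.default_rng(1).normal(size=(2000, 50))
vals, vecs = nystrom_eigs(X, num_landmarks=200, kernel=rbf_kernel, num_eigs=10)
```

The key design point is that the cost is dominated by the m x m eigendecomposition and the n x m column block, so for m much smaller than n the method scales to matrices whose full decomposition would be infeasible.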

Biography: Sanjiv Kumar received his PhD from The Robotics Institute, Carnegie Mellon University, in 2005. Since then, he has been working at Google Research, NY, as a Research Scientist. His research interests include large-scale machine learning, computer vision, graphical models, and medical imaging.