"Learning to see without a teacher"

Phillip Isola, UC Berkeley

Thursday, March 23, 2017 at 12:00 Noon

Room 368 (CIT 3rd Floor)

Over the past decade, learning-based methods have driven rapid progress in computer vision. However, most such methods still require a human "teacher" in the loop. Humans provide labeled examples of target behavior, and also define the objective that the learner tries to satisfy. The way learning plays out in nature is rather different: ecological scenarios involve huge quantities of unlabeled data and only a few supervised lessons provided by a teacher (e.g., a parent). I will present two directions toward computer vision algorithms that learn more like ecological agents. The first involves learning from unlabeled data. I will show how objects and semantics can emerge as a natural consequence of predicting raw data, rather than labels. The second is an approach to data prediction where we not only learn to make predictions, but also learn the objective function that scores the predictions. In effect, the algorithm learns not just how to solve a problem, but also what exactly needs to be solved in order to generate realistic outputs. Finally, I will talk about my ongoing efforts toward sensorimotor systems that not only learn from provided data but also act to sample more data on their own.

Phillip Isola is a postdoctoral scholar in the EECS department at UC Berkeley. He recently received his Ph.D. from the Brain & Cognitive Sciences department at MIT. He studies visual intelligence from the perspective of both minds and machines. He received an NSF Graduate Fellowship and currently holds an NSF Postdoctoral Fellowship.

Host: Michael Littman