<video controls width="640" height="360" poster="http://streamod.cs.brown.edu:8801/z/mdres.jpg" > <source type="video/mp4" src="http://streamod.cs.brown.edu:8801/z/mdres.mp4" /> <source type="video/ogg" src="http://streamod.cs.brown.edu:8801/z/mdres.ogv" /> <applet code="com.fluendo.player.Cortado.class" archive="/cortado/cortado.jar" width="640" height="360"><param name="url" value="http://streamod.cs.brown.edu:8801/z/mdres.ogv"/></applet></video>
Genevieve Patterson, Ph.D. Candidate
Crowd One Shot Learning: Fine-Grained Classifiers Created by Non-experts
To create classifiers for fine-grained visual concepts, experts must painstakingly label training instances that may be rare or difficult to identify. Cheap crowd workforces are available, but crowd workers seemingly lack the expertise to annotate many fine-grained concepts. We present an approach for training classifiers that does not require thousands of expert labels or extensive crowd worker training. Our primary contribution is a new approach for crowd-in-the-loop computer vision that exploits the human capacity for one-shot learning. Our method uses a small number of expert annotations to seed a crowd-driven active learning system that shepherds non-expert annotators into improving classifiers for fine-grained concepts. We demonstrate that it is possible to train high-precision classifiers starting from one or a few visual examples. Our crowd-classifier pipeline is tested on fine-grained categories from the labeled CUB-200 dataset and on an unlabeled dataset of fashion images. We compare our crowd-driven active learning system to (1) classifiers created using only expert labels and (2) classifiers created by the crowd with no expert guidance. We demonstrate that our pipeline can dramatically increase classifier accuracy over baseline methods. Our pipeline suggests possibilities for leveraging crowd workers in non-traditional stages of the classifier creation process.
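The abstract does not give implementation details, but the core loop it describes — seed a classifier with one or a few expert labels, then have the crowd label the examples the classifier is least sure about — can be illustrated with a minimal sketch. This is not the authors' pipeline: the data is synthetic, the classifier is a toy logistic regression, and ground-truth labels stand in for crowd responses. It only shows the uncertainty-sampling pattern of a crowd-in-the-loop active learner.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=200):
    """Tiny logistic regression via batch gradient descent (illustrative only)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(class 1)
        g = p - y                                 # gradient of log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def uncertainty(X, w, b):
    """Higher score = closer to the decision boundary (more worth querying)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return -np.abs(p - 0.5)

# Synthetic stand-in for two fine-grained visual categories.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])  # plays the role of the crowd oracle

# One expert-labeled seed example per class (the "one-shot" starting point).
labeled = [0, 100]

for _ in range(3):  # a few crowd rounds
    w, b = train_logreg(X[labeled], y[labeled])
    unc = uncertainty(X, w, b)
    unc[labeled] = -np.inf                # never re-query an already-labeled example
    query = np.argsort(unc)[-5:]          # 5 most uncertain examples go to the crowd
    labeled.extend(query.tolist())        # crowd supplies labels y[query]

# Final classifier trained on expert seed + crowd-labeled examples.
w, b = train_logreg(X[labeled], y[labeled])
preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = (preds == y).mean()
```

In a real deployment the `uncertainty` scores would come from the fine-grained visual classifier, and `y[query]` would be replaced by (possibly noisy, possibly aggregated) annotations from non-expert workers.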