James Hays
Manning Assistant Professor
Computer Science Department, Brown University

My research interests span computer graphics, computer vision, and computational photography. I focus on using "Internet-scale" data and crowd-sourcing to improve scene understanding and to enable smarter image synthesis and manipulation. I am part of the Graphics, Visualization, and Interaction group at Brown.

I received my Ph.D. from Carnegie Mellon University in 2009, working with Alexei Efros. I worked with Antonio Torralba as a postdoc at Massachusetts Institute of Technology.

Brown contact
email: hays at cs.brown.edu
office: 555 CIT building. Office hours MW 1pm
mail: Box 1910, Brown Univ, Providence, RI 02912


Students and Collaborators

I am currently recruiting Ph.D. students.


Ph.D. Students

Visiting Students

Master's Students

  • Patsorn Sangkloy
  • alumni: Xiaofeng Tao, Chao Qian, Chen Xu, Hang Su, Vibhu Ramani, Paul Sastrasinh, Vazheh Moussavi, Yun Zhang, David Dufresne, Sirion Vittayakorn, Arcady Goldmints-Orlov

Undergraduate Researchers

  • alumni: Hari Narayanan, Sam Birch, Leela Nathan, Eli Bosworth, Jung Uk Kang, Reese Kuppig, Fuyi Huang, Travis Webb


Publications

Transient Attributes for High-Level Understanding and Editing of Outdoor Scenes.
Pierre-Yves Laffont, Zhile Ren, Xiaofeng Tao, Chao Qian, and James Hays.
Siggraph 2014.

Project Page, Paper

Good Image Priors for Non-blind Deconvolution: Generic vs Specific.
Libin Sun, Sunghyun Cho, Jue Wang, and James Hays.
ECCV 2014.

Project Page

Solving Square Jigsaw Puzzles with Loop Constraints.
Kilho Son, James Hays, and David B. Cooper.
ECCV 2014.

Project Page, Paper

Microsoft COCO: Common Objects in Context.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona,
Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick.
ECCV 2014.

Project Page, Paper

The SUN Attribute Database: Beyond Categories for Deeper Scene Understanding.
Genevieve Patterson, Chen Xu, Hang Su, and James Hays.
International Journal of Computer Vision, vol. 108, no. 1-2, 2014, pp. 59-81.

Project Page, Paper

Previously published as:
SUN Attribute Database: Discovering, Annotating, and Recognizing Scene Attributes.
Genevieve Patterson and James Hays. CVPR 2012. Paper

Basic level scene understanding: categories, attributes and structures.
Jianxiong Xiao, James Hays, Bryan C. Russell, Genevieve Patterson, Krista A. Ehinger,
Antonio Torralba, and Aude Oliva.
Frontiers in Psychology, 2013, 4:506.

This paper is a survey of recent work related to the SUN database.


Cross-View Image Geolocalization.
Tsung-Yi Lin, Serge Belongie, and James Hays.
CVPR 2013.


FrameBreak: Dramatic Image Extrapolation by Guided Shift-Maps.
Yinda Zhang, Jianxiong Xiao, James Hays, and Ping Tan.
CVPR 2013.

Project Page, Paper

Edge-based Blur Kernel Estimation Using Patch Priors.
Libin "Geoffrey" Sun, Sunghyun Cho, Jue Wang, and James Hays.
ICCP 2013.

Project Page, Paper

Dating Historical Color Images.
Frank Palermo, James Hays, and Alexei A. Efros.
ECCV 2012.

Project Page, Paper

How do humans sketch objects?
Mathias Eitz, James Hays, and Marc Alexa.
Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2012.

Project Page, Paper

Previously presented as:
Learning to classify human object sketches
Mathias Eitz and James Hays.
ACM SIGGRAPH 2011 Talks Program.

Super-resolution from Internet-scale Scene Matching.
Libin "Geoffrey" Sun and James Hays.
International Conference on Computational Photography (ICCP) 2012.

Project Page, Paper

Quality Assessment for Crowdsourced Object Annotations.
Sirion Vittayakorn and James Hays.
British Machine Vision Conference (BMVC) 2011.

Project page, Paper, Bibtex

Scene categorization and detection: the power of global features
James Hays, Jianxiong Xiao, Krista Ehinger, Aude Oliva, and Antonio Torralba.
Vision Sciences Society annual meeting (VSS) 2010.

SUN Database: Large-scale Scene Recognition from Abbey to Zoo
Jianxiong Xiao, James Hays, Krista Ehinger, Aude Oliva, and Antonio Torralba.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2010.

Project page, Paper, Browse database

Abstract: We present the extensive Scene UNderstanding (SUN) database containing 899 categories and 130,519 images. We use 397 well-sampled categories to benchmark numerous state-of-the-art algorithms for scene recognition. We measure human scene classification performance on the SUN database and compare this with computational methods.

Ph.D. Thesis: Large Scale Scene Matching for Graphics and Vision
Thesis Page

Our visual experience is extraordinarily varied and complex. The diversity of the visual world makes it difficult for computer vision to understand images and for computer graphics to synthesize visual content. But for all its richness, it turns out that the space of "scenes" might not be astronomically large. With access to imagery on an Internet scale, regularities start to emerge - for most images, there exist numerous examples of semantically and structurally similar scenes. Is it possible to sample the space of scenes so densely that one can use similar scenes to "brute force" otherwise difficult image understanding and manipulation tasks? This thesis is focused on exploiting and refining large scale scene matching to short circuit the typical computer vision and graphics pipelines for image understanding and manipulation.

Image Sequence Geolocation with Human Travel Priors
Evangelos Kalogerakis, Olga Vesselova, James Hays, Alexei A. Efros, and Aaron Hertzmann.
IEEE International Conference on Computer Vision (ICCV) 2009.

Project Page

An empirical study of Context in Object Detection
Santosh Divvala, Derek Hoiem, James Hays, Alexei A. Efros, and Martial Hebert.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2009.

Project Page, Paper

IM2GPS: estimating geographic information from a single image
James Hays and Alexei Efros.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008.

Project Page, Paper, Bibtex

Google Tech Talk.

Abstract: Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we will leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification.
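The data-driven scene matching described in the abstract can be sketched as a nearest-neighbor lookup over a GPS-tagged database: match the query image's global scene descriptor against the database, then place weight on the matched images' locations. This is a minimal illustration, not the paper's implementation; the descriptor, the distance-to-weight mapping, and all variable names are assumptions.

```python
import numpy as np

def estimate_location(query_desc, db_descs, db_gps, k=120):
    """Sketch of data-driven geolocation.

    query_desc : (d,) global scene descriptor for the query image
                 (a stand-in for the paper's gist/color features)
    db_descs   : (n, d) descriptors for the GPS-tagged database images
    db_gps     : (n, 2) latitude/longitude of each database image
    Returns the k matched locations and normalized weights, i.e. a
    discrete approximation of a distribution over the Earth's surface.
    """
    # 1. Scene matching: distance from the query to every database image.
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k most similar scenes

    # 2. Convert match distances into weights on the matched GPS
    #    locations (closer scene matches contribute more mass).
    weights = np.exp(-dists[nn] / (dists[nn].mean() + 1e-9))
    weights /= weights.sum()
    return db_gps[nn], weights
```

From these weighted locations one can estimate a mode (a single guess) or evaluate how much mass falls within a given radius of the true location, which is how "30 times better than chance" style numbers are computed.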

Scene Completion Using Millions of Photographs
James Hays and Alexei Efros.
Transactions on Graphics (SIGGRAPH 2007). August 2007, vol. 26, No. 3.

Project Page, SIGGRAPH Paper, CACM Paper, CACM Technical Perspective by Marc Levoy, Bibtex

Abstract: What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.
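The core loop of the abstract, retrieve semantically similar scenes and borrow their pixels to fill the hole, can be sketched as follows. This is a deliberately simplified illustration: the paper additionally aligns each match and blends the seam with graph-cut segmentation and Poisson blending, while here we only copy pixels. The `describe` function and all names are hypothetical.

```python
import numpy as np

def complete_image(image, hole_mask, db_images, db_descs, describe,
                   n_candidates=20):
    """Sketch of scene completion via database scene matching.

    image      : (H, W) or (H, W, 3) array with a region to fill
    hole_mask  : (H, W) boolean mask, True where pixels are missing
    db_images  : (n, H, W[, 3]) database photographs
    db_descs   : (n, d) precomputed global scene descriptors
    describe   : callable mapping an image to its (d,) descriptor
    """
    # 1. Scene matching on the whole image, so the context around
    #    the hole drives which database scenes are retrieved.
    query = describe(image)
    dists = np.linalg.norm(db_descs - query, axis=1)
    matches = np.argsort(dists)[:n_candidates]

    # 2. For each matching scene, paste its pixels into the hole.
    #    (Real blending of seams is omitted in this sketch.)
    completions = []
    for idx in matches:
        candidate = image.copy()
        candidate[hole_mask] = db_images[idx][hole_mask]
        completions.append(candidate)
    return completions  # a diverse set the user can choose among
```

Returning several candidate completions, rather than a single answer, mirrors the paper's design choice of letting the user select among diverse plausible results.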

Earlier Research



My research is funded by an NSF CAREER award (1149853), by IARPA's Finder program (FA8650-12-C-7212), and by gifts from Google, Microsoft, Pixar, and Adobe.