Brown CS News

New Research From Daniel Ritchie Aims For Easy, Data-Driven, Convincing Indoor Scenes


    Click the links that follow for more news about Daniel Ritchie, Kai Wang, and other recent accomplishments by Brown CS faculty.

    Not only do we spend most of our lives indoors, says Brown CS Professor Daniel Ritchie, we spend a sizable percentage of our time virtually indoors, exploring computer-generated interiors. Some uses for these spaces are well known, like architects creating digital representations of buildings that don't yet exist, but others (moving furniture around a virtual living room before making a purchase, or training robots) are only beginning to become familiar to the layperson. 

    Test yourself by looking at the images throughout this story: did a machine or a person design each room? Answers are at the bottom.

    "The demand for virtual bedrooms, living rooms, offices, kitchens, and so on has never been higher," Daniel says. "Not only established fields like architecture and interior design and gaming but newer and rapidly-growing ones like robotics, virtual reality, and augmented reality – all of these need to create high-fidelity digital instances of real-world indoor scenes." 

    [Image: side-by-side room designs for the person-or-machine quiz]

    To fully meet the demands of those industries, Daniel explains, we need data-driven algorithms for scene generation: ones that produce a variety of plausible, visually appealing results quickly and under user control. At the moment, no existing approach satisfies all of these requirements. His latest project presents a scene synthesis system that aims to do all of these things, along with a plan to demonstrate the system's effectiveness by developing or improving real-world applications.

    In the short term, we might be using Daniel's work to design our next living rooms. Slightly further down the road, we might use it to help train our household robots to navigate those same spaces.

    The project has three main thrusts:

    1. Building a system that can believably populate a room with objects. Given an input room, the system determines what type of object to place next, where to place it, how to orient it, and which 3D model to use (see the sketch after this list). The same process can synthesize a complete scene, complete a partially-assembled scene, or suggest the next object to add.
    2. Modeling visual style compatibility. To mimic the real world, the system shouldn't pair a modern dining chair with a rustic dining table, for instance. Daniel and his researchers will develop a model of visual style compatibility and train it on a dataset created in collaboration with online retailer Wayfair.
    3. Exploring whether these innovations can support and enhance the applications above. This includes partnerships with colleagues at Simon Fraser University to train robots in visual navigation and with Wayfair to build a system that automatically stages and renders scenes. One of Daniel's graduate students has already completed a summer internship at Wayfair and is currently writing a paper with him based on that research.
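
    To make thrust 1 concrete, here is a minimal runnable sketch in Python of what an iterative object-placement loop can look like. It is purely illustrative: the function names, object categories, and random placeholder choices below are our own assumptions, standing in for the learned models the actual research trains on real scene data.

        import random
        from dataclasses import dataclass, field

        # Hypothetical sketch of an autoregressive scene synthesis loop.
        # The real system would replace each placeholder below with a
        # learned model trained on a large dataset of indoor scenes.

        CATEGORIES = ["bed", "nightstand", "dresser", "lamp", None]  # None = stop

        @dataclass
        class Scene:
            width: float   # room dimensions in meters (illustrative)
            depth: float
            objects: list = field(default_factory=list)

        def next_category(scene):
            # Placeholder for a model that inspects the partial scene and
            # predicts what kind of object to add next, or that it is done.
            return random.choice(CATEGORIES)

        def sample_placement(scene, category):
            # Placeholder for learned location and orientation models.
            position = (random.uniform(0, scene.width), random.uniform(0, scene.depth))
            angle = random.choice([0, 90, 180, 270])
            return position, angle

        def choose_mesh(scene, category):
            # Placeholder for 3D model retrieval; thrust 2 would score
            # candidates for style compatibility with objects already placed.
            return f"{category}_mesh_01"

        def synthesize(scene, max_objects=10):
            """Populate the scene one object at a time."""
            for _ in range(max_objects):
                category = next_category(scene)
                if category is None:  # the model judges the scene complete
                    break
                position, angle = sample_placement(scene, category)
                scene.objects.append((choose_mesh(scene, category), position, angle))
            return scene

        print(synthesize(Scene(width=4.0, depth=3.0)).objects)

    Because the loop makes one decision at a time, the same machinery covers all three use cases above: starting from an empty room synthesizes a full scene, starting from a partly furnished room completes it, and running a single iteration suggests the next object to add.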

    This project, which will be funded by a recently-awarded NSF Small Grant, continues work that Daniel and his PhD student, Kai Wang, have published together in three papers (one, two, three) at top conferences for graphics and vision such as SIGGRAPH and CVPR. Kai recently received an Adobe Research Fellowship for this work.

    [Image: side-by-side room designs for the person-or-machine quiz]

    "I'm really enthused that we'll be sharing interactive demos with the general public," Daniel says, "and it's very important to me that we're improving representation in CS. Our collaboration with Wayfair is being led by a female grad student, and last summer, I hosted a student from the University of Southern Mississippi who's also working in this area. We're still collaborating remotely. Two other female students will join my lab next semester, and I'll also be presenting our work to students in the Artemis Project, a Brown CS program for ninth-grade girls who are interested in STEM."

    Answers for the three scenes above: (1) person on the left and machine on the right, (2) person on the left and machine on the right, and (3) machine on the left and person on the right.

    Click the link that follows to contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.