Brown CS News

George Konidaris And Collaborators Win The IJCAI-JAIR Best Paper Award


Presented annually, the IJCAI-JAIR Best Paper Award is given to an outstanding paper published in the Journal of Artificial Intelligence Research in the preceding five calendar years. Today, Professor George Konidaris of Brown CS and his collaborators, Leslie Pack Kaelbling (formerly a Brown CS faculty member) and Tomás Lozano-Pérez, both of MIT, received the award for their 2018 paper, "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning". The Prize Committee comprised eight leading members of the AI community, and its decision was based on both the significance of the paper and the quality of its presentation. The award was presented virtually today, during the opening ceremony of IJCAI 2020, which had been postponed from its original date due to the COVID-19 pandemic.

George is the director of the Intelligent Robot Lab, which aims to build intelligent, autonomous, general-purpose robots that are capable across a wide variety of tasks and environments. "In this paper," he says, "we address a core question at the heart of intelligent robotics: how should robots construct symbolic representations that are simultaneously rich enough to support planning, and impoverished enough to make it efficient?"

As just one example, George explains that when you decide to travel from one city to another, you can't possibly construct a plan that describes the sequence of muscle movements you will make, even though that's ultimately how the plan must be executed. Instead, you reason using very high-level actions ("take a taxi to the airport") and very high-level abstractions of state ("I'm at the airport") and fill in the remaining details on the fly. Humans are capable of deftly switching to abstract symbolic representations that boil the problem down to its essentials. Robots aren't innately capable of that kind of abstract reasoning, and typically operate at the lowest level of detail – at the pixel and motor level. That quickly becomes infeasible, and severely limits their capabilities.

Konidaris's approach focuses on the interplay between high-level motor skills and perceptual abstractions, often called symbolic representations. The paper resolves the question of how to build a symbolic abstraction for high-level planning with a given set of motor skills, showing that the robot's motor skills uniquely specify the appropriate symbolic representation required for planning. The resulting framework is practical enough that the appropriate representation can be autonomously learned by a robot operating at the pixel level, from scratch.
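To give a flavor of what planning over skill-derived symbols looks like, here is a minimal sketch in Python. It is not the paper's actual framework, and every skill name and symbol in it is a hypothetical illustration: each motor skill is summarized by a symbolic precondition (when it can run) and a symbolic effect (what becomes true afterward), and a simple search chains skills together without ever touching pixels or motor commands.

```python
# Illustrative sketch only: skills are summarized symbolically, and planning
# happens entirely over those symbols rather than over raw sensor data.
from collections import deque
from dataclasses import dataclass


@dataclass(frozen=True)
class Skill:
    name: str
    precondition: frozenset          # symbols that must hold to run the skill
    adds: frozenset                  # symbols made true by the skill
    deletes: frozenset = frozenset() # symbols made false by the skill


def plan(skills, start, goal, max_depth=10):
    """Breadth-first search over the symbolic states induced by the skills."""
    frontier = deque([(frozenset(start), [])])
    visited = {frozenset(start)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:            # all goal symbols hold
            return path
        if len(path) >= max_depth:
            continue
        for s in skills:
            if s.precondition <= state:
                nxt = (state - s.deletes) | s.adds
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [s.name]))
    return None


# Hypothetical travel example, phrased as two high-level skills.
skills = [
    Skill("take_taxi_to_airport", frozenset({"at_home"}),
          frozenset({"at_airport"}), frozenset({"at_home"})),
    Skill("board_flight", frozenset({"at_airport", "has_ticket"}),
          frozenset({"at_destination"}), frozenset({"at_airport"})),
]

print(plan(skills, start={"at_home", "has_ticket"}, goal={"at_destination"}))
# -> ['take_taxi_to_airport', 'board_flight']
```

The point of the sketch is only the division of labor the paper formalizes: the skills determine which symbols are worth representing, and once those symbols exist, planning reduces to chaining skills whose preconditions are satisfied.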

"Understanding how to automatically frame a task at the right level of abstraction is critical to achieving general intelligence, but has been paid almost no attention," says Konidaris. "I'm very excited about the new possibilities this work opens up, and curious to see where they lead."

A blog post about the paper that includes animations and video is also available.
