"Improving Natural Language Understanding for Robotics through Semantic Parsing, Dialog, and Multi-modal Perception"
Jesse Thomason, PhD Student, University of Texas at Austin
Wednesday, May 3, 2017 at 12:00 Noon
Room 368 (CIT 3rd Floor)
Robotic systems that interact with untrained human users must be able to understand and respond to natural language commands and questions. If a user requests “take me to Alice’s office”, the system and user must know that Alice is a person who owns some unique office. Similarly, if a user requests “bring me the heavy, green mug”, the system and user must both know that “heavy”, “green”, and “mug” are properties that describe an object in the environment, and must share similar ideas about which objects those properties apply to. To facilitate deployment, methods to achieve these goals should require little initial in-domain data. In this talk, I will describe my work on understanding human language commands using sparse initial resources for semantic parsing and language grounding. Clarification dialog with humans simultaneously resolves misunderstandings and generates more training data for better downstream parser performance, while multi-modal grounding classifiers enable the robotic system to understand object properties like “green” and “heavy”. Additionally, I explore the task of word sense synonym set induction, which aims to discover polysemy and synonymy, potentially helpful in the presence of sparse data and ambiguous properties such as “light” (light-colored versus lightweight).
Jesse Thomason is a fourth-year PhD student working with Dr. Raymond Mooney in the University of Texas at Austin Computer Science Department (UTCS) at the growing intersection of natural language processing and robotics. He collaborates with Dr. Peter Stone, also of UTCS. He is supported by a National Science Foundation Graduate Research Fellowship and has published at AI and NLP venues (IJCAI, NAACL, COLING).
Host: Professor Stefanie Tellex/HCRI