"Effective Reinforcement Learning through State Abstraction"
Thursday, November 8, 2018 at 4:00 P.M.
Room 368 (CIT 3rd Floor)
Reinforcement learning presents a challenging problem: agents must generalize experiences, explore their world efficiently, make use of a limited computational budget, and learn from feedback that is sparse and delayed. Abstraction is essential to all of these endeavors. Through abstraction, agents can form concise models of both their surroundings and their behavior, affording effective decision making in diverse and complex environments. In this work, we characterize the role of state abstraction in reinforcement learning. We offer three desiderata articulating what it means for a state abstraction to be useful, and prove when and how a state abstraction can satisfy them. Our primary contributions develop theory for state abstractions that can 1) preserve near-optimal behavior, 2) be learned and computed efficiently, and 3) lower planning or learning time. Collectively, these results provide a partial path toward abstractions that are guaranteed to minimize the complexity of decision making while still retaining near-optimality. We close by discussing the road forward, which centers on an information-theoretic paradigm for analyzing abstraction and a framework for using state abstraction to construct hierarchies that adhere to the introduced desiderata.
Host: Professor Michael Littman