Brown CS News

Brown CS Alum David Abel Is A Joint AAAI/ACM SIGAI Doctoral Dissertation Award Runner-Up

Click the links that follow for more news about David Abel and other recent accomplishments by our alums.

The Association for the Advancement of Artificial Intelligence (AAAI) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines, and ACM SIGAI is the Association for Computing Machinery's Special Interest Group on Artificial Intelligence. Working in concert, they present the Joint AAAI/ACM SIGAI Doctoral Dissertation Award annually to recognize and encourage superior research and writing by doctoral candidates in artificial intelligence, and Brown CS alum David Abel has just been announced as one of only two runners-up for the 2020 prize.

Advised by Brown CS Professor Michael Littman, David's thesis ("A Theory of Abstraction in Reinforcement Learning") explores the use of abstraction to reduce the complexity of effective reinforcement learning.   

"Reinforcement learning," he explains, "defines the problem facing agents that learn to make good decisions through action and observation alone. To be effective problem solvers, such agents must efficiently explore vast worlds, assign credit from delayed feedback, and generalize to new experiences, all while making use of limited data, computational resources, and perceptual bandwidth. Abstraction is essential to all of these endeavors. Through abstraction, agents can form concise models of their environment that support the many practices required of a rational, adaptive decision maker."

In his dissertation, David starts with three desiderata for functions that carry out the process of abstraction: they should

  1. preserve representation of near-optimal behavior,
  2. be learned and constructed efficiently, and
  3. lower planning or learning time.
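To make these desiderata concrete, here is another illustrative sketch, again not code from the thesis: one simple kind of state abstraction merges ground states whose action values are approximately equal into a single abstract state, shrinking the problem an agent must plan over while aiming to preserve near-optimal behavior. The function name `build_abstraction`, the tolerance `epsilon`, and the toy values are assumptions made for this example.

```python
# An illustrative sketch (not code from the thesis): an approximate state
# abstraction that groups ground states whose action values are nearly equal,
# so planning can run over far fewer abstract states while still supporting
# near-optimal behavior.

def build_abstraction(states, actions, q_values, epsilon=0.05):
    """Map each ground state to an abstract state (cluster index).

    Two ground states share an abstract state when their value for every
    action differs by at most `epsilon` -- a simple, hypothetical criterion
    in the spirit of approximate value-based abstractions.
    """
    clusters = []          # representative ground state for each cluster
    phi = {}               # ground state -> abstract state index
    for s in states:
        for idx, rep in enumerate(clusters):
            if all(abs(q_values[(s, a)] - q_values[(rep, a)]) <= epsilon
                   for a in actions):
                phi[s] = idx
                break
        else:
            clusters.append(s)
            phi[s] = len(clusters) - 1
    return phi

# Toy usage: four ground states, two of which are behaviorally interchangeable.
states, actions = ["s0", "s1", "s2", "s3"], ["left", "right"]
q_values = {("s0", "left"): 0.10, ("s0", "right"): 0.90,
            ("s1", "left"): 0.12, ("s1", "right"): 0.88,   # nearly identical to s0
            ("s2", "left"): 0.70, ("s2", "right"): 0.20,
            ("s3", "left"): 0.05, ("s3", "right"): 0.05}
print(build_abstraction(states, actions, q_values))  # {'s0': 0, 's1': 0, 's2': 1, 's3': 2}
```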

David then presents a suite of new algorithms and analyses that clarify how agents can learn to abstract according to these desiderata. Collectively, he says, these results provide a partial path toward the discovery and use of abstraction that minimizes the complexity of effective reinforcement learning.

For more information, click the link that follows to contact Brown CS Communication and Outreach Specialist Jesse C. Polhemus.