About Research+Code Blog

About

I'm a fourth-year PhD student at Brown University advised by (the incredible) Ellie Pavlick. I also work with Stefanie Tellex, George Konidaris, Michael Littman, and a lot of the other wonderful people at Brown. As an undergrad, I was advised by Ani Nenkova and Byron Wallace in various areas of machine learning and language processing.

My research uses language to structure reinforcement learning, with the aim of building more intelligent and interpretable agents that can learn to use language to communicate and coordinate with one another. Language can be a powerful tool for helping agents learn and adapt from small amounts of human-intelligible data. I'm specifically interested in (1) using the structure of language to aid reinforcement learning and multi-agent algorithms, (2) enabling agents to communicate with one another through language, and (3) developing methods for better interpretability of models that use language, towards safer and more ethical systems.

Apart from work, I enjoy reading vast amounts of literature, listening to various kinds of music, and mostly just programming for fun. Feel free to reach out with research-related questions or otherwise!


Research

What I'm most interested in is creating frameworks that combine linguistic knowledge, RL exploration strategies and human-like inference, working towards systems that reason and act on par with human intelligence. This includes augmenting existing reinforcement learning algorithms with language supervision, extending multi-agent algorithms to natural language, and modeling and probing interactions between agents to better interpret and explain their behaviours.
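
To make the language-supervision idea concrete, here is a minimal toy sketch in PyTorch of a language-conditioned policy. It is purely illustrative (not code from any of the papers below, and every size and name in it is made up): an instruction is encoded by a small recurrent network and concatenated with the environment state, so that the same policy network can pursue different goals depending on what it is told.

    # Toy sketch: a policy conditioned on a natural-language instruction.
    # The instruction is embedded, encoded by a GRU, concatenated with the
    # environment state, and mapped to a distribution over actions.
    # All sizes and names here are made up for illustration.
    import torch
    import torch.nn as nn

    class LanguageConditionedPolicy(nn.Module):
        def __init__(self, vocab_size, embed_dim, state_dim, num_actions):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
            self.policy = nn.Sequential(
                nn.Linear(embed_dim + state_dim, 128),
                nn.ReLU(),
                nn.Linear(128, num_actions),
            )

        def forward(self, instruction_tokens, state):
            # instruction_tokens: (batch, seq_len) token ids
            # state: (batch, state_dim) environment observation
            _, hidden = self.encoder(self.embed(instruction_tokens))
            language = hidden.squeeze(0)  # (batch, embed_dim)
            logits = self.policy(torch.cat([language, state], dim=-1))
            return torch.distributions.Categorical(logits=logits)

    # E.g., sample an action for one dummy instruction/state pair:
    policy = LanguageConditionedPolicy(vocab_size=1000, embed_dim=32,
                                       state_dim=8, num_actions=4)
    instruction = torch.randint(0, 1000, (1, 5))  # five made-up token ids
    state = torch.randn(1, 8)
    action = policy(instruction, state).sample()

The point of the sketch is the conditioning pattern, not the particular architecture: any instruction encoder could stand in for the GRU, and the same idea extends to multi-agent settings where the "instruction" is a message produced by another agent.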


Where I've Been

Microsoft Research: Microsoft Turing Academic Program
Worked with Dean Carignan, Saurabh Tiwary, Pooya Moradi, Ali Alvi and others at MSR.
Summer 2021-present.

DeepMind, London: Research Intern (Multi-agent Reinforcement Learning)
Worked with Angeliki Lazaridou, Richard Everett, Edward Hughes and Yoram Bachrach.
Summer 2020.

Google AI, Mountain View: Research Intern (Vision and Language Reinforcement Learning)
Worked with Alex Ku and Jason Baldridge.
Summer 2019.

Johns Hopkins University: Jelinek Summer Workshop on Speech and Language Technology (JSALT)
Worked with Ellie Pavlick, Brown University; Sam Bowman, New York University; Tal Linzen, Johns Hopkins University.
Summer 2018.

Max Planck Institute: Cornell, Maryland, Max Planck Pre-doctoral Research School (CMMRS)
Summer 2018.

University of Pennsylvania: Undergraduate Researcher
Worked with Ani Nenkova, University of Pennsylvania, and Byron Wallace, Northeastern University.
Summer 2017-18.

Princeton University: Program in Algorithmic and Combinatorial Thinking (PACT)
Led by Rajiv Gandhi, Rutgers University, Camden.
Summer 2016.


Tutorials

Recognising Multimodal Entailment.
Afsaneh Shirazi, Arjun Gopalan, Arsha Nagrani, Cesar Ilharco, Christina Liu, Gabriel Barcik, Jannis Bulian, Jared Frank, Lucas Smaira, Qin Cao, Ricardo Marino, Roma Patel.
ACL 2021.


Papers

2022

Mapping Language Models to Grounded Conceptual Spaces.
Roma Patel and Ellie Pavlick.
ICLR 2022.

Generalising to New Domains by Mapping Natural Language to Lifted LTL.
Eric Hsiung, Hiloni Mehta, Junchi Chu, Xinyu Liu, Roma Patel, Stefanie Tellex, George Konidaris.
ICRA 2022.

2021

Does linguistic bias affect generative language models?
Roma Patel and Ellie Pavlick.
EMNLP 2021.

Game-theoretic Vocabulary Selection for Text Classification Tasks.
Roma Patel, Marta Garnelo, Ian Gemp, Chris Dyer and Yoram Bachrach.
NAACL 2021.

Affordance-based Robot Object Retrieval.
Thao Nguyen, Nakul Gopalan, Roma Patel, Ellie Pavlick, Stefanie Tellex.
AuRO 2021.

2020

Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding.
Alexander Ku*, Peter Anderson*, Roma Patel, Eugene Ie, Jason Baldridge.
EMNLP 2020.

On the Relationship Between Structure in Natural Language and Models of Sequential Decision Processes.
Roma Patel*, Rafael Rodriguez-Sanchez*, George Konidaris.
LAREL Workshop, ICML 2020.

Grounding Language to Non-Markovian Tasks with No Supervision of Task Specifications.
Roma Patel, Ellie Pavlick, Stefanie Tellex.
RSS 2020.

Robot Object Retrieval with Contextual Natural Language Queries.
Thao Nguyen, Nakul Gopalan, Roma Patel, Matthew Corsaro, Ellie Pavlick, Stefanie Tellex.
RSS 2020.

2019

Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling.
Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick and Samuel R. Bowman.
ACL 2019.

Planning with State Abstractions for Non-Markovian Task Specifications.
Yoonseon Oh, Roma Patel, Thao Nguyen, Baichuan Huang, Ellie Pavlick, Stefanie Tellex.
RSS 2019.

Learning Visually Grounded Meaning Representations with Sketches.
Roma Patel, Stephen Bach and Ellie Pavlick.
How2 Workshop, ICML 2019.

Learning to Ground Language to Temporal Logical Form.
Roma Patel, Ellie Pavlick and Stefanie Tellex.
SpLU & RoboNLP Workshop, NAACL 2019.

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension.
Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick.
StarSEM 2019. (Best Paper Award!)

Looking for ELMo's Friends: Sentence-Level Pretraining Beyond Language Modeling.
Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, and Berlin Chen.
Unpublished manuscript. 2019.

2018

Modeling Ambiguity in Text: A Corpus of Legal Literature.
Roma Patel and Ani Nenkova.
Unpublished manuscript. 2018.

A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature.
Benjamin Nye, Jessy Li, Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova and Byron Wallace.
ACL 2018.

Syntactic Patterns Improve Information Extraction for Medical Literature.
Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova and Byron Wallace.
NAACL 2018.


Lectures and Invited Talks

Columbia University, Data Science Institute
Title: Learning from Patterns for Information Extraction for Medical Literature

Princeton University, PACT Summer Program
Title: Network Flows


In today's garden path sentences: The prime number few.