I'm a second-year PhD student at Brown University, advised by (the incredible) Ellie Pavlick. I also work with Stefanie Tellex, George Konidaris and many of the other wonderful people at Brown.
As an undergrad, I was advised by Ani Nenkova and Byron Wallace in various areas of language processing and machine learning. My research uses language to structure reinforcement learning, with the aim of building more intelligent, interpretable and ethical agents. Specifically, I'm interested in building models for natural language understanding that combine compositional, logical methods with deep representations, to help uncover better "meaning representations" that carry over to a variety of grounded tasks. I'm also interested in modeling and introducing concrete world-knowledge representations into existing models, particularly in settings that require agents to coordinate and reason pragmatically in different cooperative contexts.
Apart from work, I enjoy reading vast amounts of literature, listening to various kinds of music and mostly just programming for fun. Feel free to reach out with research-related questions or otherwise!
What I'm most interested in is building systems that can infer knowledge, reason and act the way humans do, by creating frameworks that incorporate language knowledge, RL exploration strategies and human-level inference. This includes modeling interactions between agents that first reason and respond with respect to the goal-oriented information at hand, but then also allow world knowledge to alter their reasoning. More recently, I'm interested in learning semantic correspondences between language, images and mental depictions of the concepts we encounter, and in learning semantic representations for structures in text.
DeepMind, London: Research Intern (Multi-agent Reinforcement Learning). Worked with Angeliki Lazaridou, Richard Everett, Edward Hughes and Yoram Bachrach. Summer 2020.
Google AI, Mountain View: Research Intern (Grounded Language and Learning). Worked with Alex Ku and Jason Baldridge. Summer 2019.
Johns Hopkins University: Jelinek Summer Workshop on Speech and Language Technology (JSALT). Worked with Ellie Pavlick, Brown University; Sam Bowman, New York University; Tal Linzen, Johns Hopkins University. Summer 2018.
Max Planck Institute: Cornell, Maryland, Max Planck Pre-doctoral Research School (CMMRS). Summer 2018.
University of Pennsylvania: Undergraduate Researcher. Worked with Ani Nenkova, University of Pennsylvania, and Byron Wallace, Northeastern University. Summer 2017-18.
Princeton University: Program in Algorithmic and Combinatorial Thinking (PACT). Led by Rajiv Gandhi, Rutgers University, Camden. Summer 2016.
Columbia University, Data Science Institute. Title: Learning from Patterns for Information Extraction for Medical Literature
Princeton University, PACT Summer Program. Title: Network Flows
2020
Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding. Alexander Ku*, Peter Anderson*, Roma Patel, Eugene Ie, Jason Baldridge. EMNLP 2020.
On the Relationship Between Structure in Natural Language and Models of Sequential Decision Processes. Roma Patel*, Rafael Rodriguez-Sanchez*, George Konidaris. LaReL Workshop, ICML 2020.
Grounding Language to Non-Markovian Tasks with No Supervision of Task Specifications. Roma Patel, Ellie Pavlick, Stefanie Tellex. RSS 2020.
Robot Object Retrieval with Contextual Natural Language Queries. Thao Nguyen, Nakul Gopalan, Roma Patel, Matthew Corsaro, Ellie Pavlick, Stefanie Tellex. RSS 2020.
2019
How to Get Past Sesame Street: Sentence-Level Pretraining Beyond Language Modeling. Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick and Samuel R. Bowman. ACL 2019. PDF
Planning with State Abstractions for Non-Markovian Task Specifications. Yoonseon Oh, Roma Patel, Thao Nguyen, Baichuan Huang, Ellie Pavlick, Stefanie Tellex. RSS 2019. PDF
Learning Visually Grounded Meaning Representations with Sketches. Roma Patel, Stephen Bach and Ellie Pavlick. How2 Workshop, ICML 2019. PDF
Learning to Ground Language to Temporal Logical Form. Roma Patel, Ellie Pavlick and Stefanie Tellex. SpLU & RoboNLP Workshop, NAACL 2019. PDF
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension. Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick. *SEM 2019. (Best Paper Award!) PDF
Looking for ELMo's Friends: Sentence-Level Pretraining Beyond Language Modeling. Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, and Berlin Chen. Unpublished manuscript. 2019. PDF
2018
Modeling Ambiguity in Text: A Corpus of Legal Literature. Roma Patel and Ani Nenkova. Unpublished manuscript. 2018. PDF
A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature. Benjamin Nye, Jessy Li, Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova and Byron Wallace. ACL 2018. PDF
Syntactic Patterns Improve Information Extraction for Medical Literature. Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova and Byron Wallace. NAACL 2018. PDF