About Research+Code Blog

About

I'm currently an undergraduate, soon starting as a PhD student at Brown University.

As an undergrad, I was advised by Ani Nenkova and Byron Wallace in various areas of language processing and machine learning. I'm interested in building models for natural language understanding using a combination of compositional, logical methods and deep representations to help uncover better "meaning representations" of the text we read. I'm also interested in modeling and introducing concrete world knowledge representations into existing models, specifically in reference game settings that require agents to coordinate and reason pragmatically based on the knowledge and context at hand. In the past, my work has also involved NLP and ML methods for medical literature, to optimize the process of evidence-based medicine.

Apart from work, I enjoy reading vast amounts of literature, listening to various kinds of music, and mostly just programming for fun. Feel free to reach out with research-related questions or otherwise!


Research

What I’m most interested in is building systems that can infer knowledge and reason in the way that humans do, by creating frameworks that incorporate common sense knowledge and human-level inference. This includes modeling interactions between agents that first reason and respond with respect to goal-oriented information at hand, but then also allow world knowledge to alter their reasoning. More recently, I’m interested in learning semantic correspondences between language, images and mental depictions of concepts we encounter and learning semantic representations for structures in text.


Where I've Been

University of Pennsylvania: Undergraduate Researcher
Advised by Ani Nenkova, University of Pennsylvania and Byron Wallace, Northeastern University. Summer 2017-18.

Johns Hopkins University: Jelinek Summer Workshop on Speech and Language Technology (JSALT)
Advised by Ellie Pavlick, Brown University; Sam Bowman, New York University; Tal Linzen, Johns Hopkins University. Summer 2018.

Max Planck Institute: Cornell, Maryland, Max Planck Pre-doctoral Research School (CMMRS)
Summer 2018.

Princeton University: Program in Algorithmic and Combinatorial Thinking (PACT)
Advised by Rajiv Gandhi, Rutgers University, Camden. Summer 2016.


Lectures and Invited Talks

Columbia University, Data Science Institute
Title: Learning from Patterns for Information Extraction for Medical Literature

Princeton University, PACT Summer Program
Title: Network Flows


Papers

Learning to Ground Language to Temporal Logical Form.
Roma Patel, Stefanie Tellex and Ellie Pavlick. Combined Workshop on Spatial Language Understanding (SpLU) & Grounded Communication for Robotics (RoboNLP) at NAACL 2019.

Looking for ELMo's Friends: Sentence-Level Pretraining Beyond Language Modeling.
Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, and Berlin Chen. Unpublished manuscript. 2019.

Modeling Ambiguity in Text: A Corpus of Legal Literature.
Roma Patel and Ani Nenkova. Unpublished manuscript. 2018.

A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature.
Benjamin Nye, Jessy Li, Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova and Byron Wallace. ACL 2018.

Syntactic Patterns Improve Information Extraction for Medical Literature.
Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova and Byron Wallace. NAACL 2018.
