Elias Stengel-Eskin


I am a Postdoctoral Research Associate at the University of North Carolina, Chapel Hill in the MURGe-Lab led by Mohit Bansal. I received my Ph.D. in 2023 from Johns Hopkins University, where I was supervised by Benjamin Van Durme and was a part of the Center for Language and Speech Processing. In my work, supported by an NSF Graduate Research Fellowship, I aim to develop AI agents that can intelligently communicate and collaborate with people. A central focus of this is communication via language: one line of my work transforms text into representations of its meaning and explores how models represent meaning. This has included work on semantic parsing, multimodal grounding, and human-robot interaction.

Another line of work looks at implicit phenomena such as vagueness, underspecification, and ambiguity. While I’ve mostly explored these topics through a linguistic lens, I am interested in their importance to intelligence more broadly.

Before starting my Ph.D., I received my B.A.&Sc. with First Class Honours in Cognitive Science from McGill University, focusing on computer science and linguistics. While at McGill, I worked as a research assistant at the Montreal Language Modeling Lab (MLML), now MCQLL, supervised by Morgan Sonderegger. I wrote my honours thesis (supervised by Timothy O’Donnell) on a variational inference algorithm for a model of language acquisition.

Research statement

news

Mar 22, 2024 Excited to be giving a keynote at the UncertaiNLP workshop at EACL 2024, titled Confidence-based Rephrasing, Refinement, and Selection. I’ll cover a wide range of topics including calibration in semantic parsing, using calibrated models to improve usability, underspecified visual question answering and much more!
Mar 5, 2024 New work with David Wan and Jaemin Cho on improving visual tasks (especially grounding) through region-based guidance in Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training
Feb 3, 2024 New work led by Justin Chen and Swarnadeep Saha on distilling multi-agent LLM interactions into smaller models: MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models . MAGDi uses a graph structure on top of LLM dialogues to distill reasoning from several large teacher models into a single, lightweight student.
Jan 30, 2024 New preprint! ReGAL: Refactoring Programs to Discover Generalizable Abstractions introduces a new refactoring-based method for learning abstractions for LLM program prediction, improving performance on a variety of tasks. Joint work with Archiki Prasad as part of my postdoc at UNC.
Jan 17, 2024 Two papers accepted to ICLR 2024. Zero and Few-shot Semantic Parsing with Ambiguous Inputs introduces a new benchmark for semantic parsing with ambiguity and tests a variety of models on how they handle five common linguistic ambiguities. Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models is the first paper from my new postdoc position and introduces RepARe, a method for augmenting and rephrasing VQA questions (especially underspecified ones) to make them easier for zero-shot VL models to answer.
Jan 16, 2024 My thesis is now publicly available: Modeling Meaning for Description and Interaction. Many thanks to my advisor Benjamin Van Durme for all of your guidance over the last five years and to my thesis committee Jacob Andreas and Kyle Rawlins for your feedback!
Jun 3, 2023 I’m incredibly excited to announce that I will be starting a Postdoc with Mohit Bansal at the University of North Carolina, Chapel Hill! Looking forward to lots of collaborations with the amazing students and faculty of UNC NLP and UNC CS!
Jun 1, 2023 Calibrated Interpretation: Confidence Estimation in Semantic Parsing has just been accepted to TACL! We examine the calibration of common semantic parsing models, including LLMs using in-context learning. Check out the paper for results across a number of tasks and datasets!
May 3, 2023 Why Did the Chicken Cross the Road? Rephrasing and Analyzing Ambiguous Questions in VQA has been accepted to ACL 2023! We introduce a brand new dataset of ambiguous questions in VQA, along with a disambiguation model and plenty of linguistic analysis. See you in Toronto!
Mar 31, 2023 I’ve restructured a previous preprint into two different papers. The first focuses on cataloguing calibration in popular semantic parsing systems, and the second looks at what we can do with a well-calibrated model.
Feb 28, 2023 Super-CLEVR, an exciting new benchmark for generalization in vision tasks led by Zhuowan Li, has been accepted to CVPR 2023 as a highlight (~2% of submissions)! Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning
Nov 30, 2022 I am on the job market for faculty, postdoc, and industry positions! Please reach out if you know of a role that would be a good fit for me: elias.stengel@gmail.com
Nov 29, 2022 Two new preprints out! On ambiguity in VQA and on calibration in semantic parsing
Oct 7, 2022 Two new papers accepted to EMNLP 2022. Preprints out on arxiv! On subject and object control in LLMs and on a troubling quirk in NLU
Mar 6, 2022 I am starting a year-long internship at MSR Montreal with Marc-Alexandre Côté, Eric Yuan, and Pierre-Yves Oudeyer
Aug 31, 2021 I have completed an internship at Microsoft Semantic Machines, supervised by Yu Su