Broadly, I'm interested in how syntax interfaces with semantics, how it affects
sentence processing, and how it is encoded in language models. Specifically, I'm
interested in:
- Understanding how scope and binding work by looking at elliptical and
  multidominant structures.
- Studying what symbolic grammars tell us about ambiguity resolution.
- Looking for abstract syntactic features in the internal representations
  learned by language models.
- Understanding what kinds of abstract generalizations language models make,
  and to what extent these generalizations are human-like.
2025/12: My paper on conditional wh-questions with VP Ellipsis was accepted
with minor revisions at Natural Language Semantics. Preprint on LingBuzz.
2025/10: A preprint of my NELS paper on conditional wh-questions with VP Ellipsis is on LingBuzz.
2025/07: I gave a talk at SCiL 2025, titled "An LSTM language model learns
Hindi-Urdu case-agreement interactions, and has a linear encoding of case."
This is joint work with Rajesh Bhatt and Brian Dillon.
2025/05: I presented a poster at CLS 61, titled "Japanese
accusative/ablative alternation verbs are unaccusative" (poster).
2025/03: I gave talks at GLOW 47, the NYU Syntax Brown Bag, and the Yale
Syntax Reading Group on conditional wh-questions with VP Ellipsis
(slides, handout).