Talks and presentations

How User Sociodemographics Inadvertently Affect LLM-User Conversations

March 06, 2026

Talk, CL Group Seminar, Groningen, The Netherlands

Generative Large Language Models (LLMs) personalize responses based on perceived user characteristics, a phenomenon called implicit personalization. This can enhance user experience but can also introduce or amplify bias when models infer sociodemographic attributes from subtle conversational cues. In this talk, I present a systematic investigation of how LLMs infer and act on such information when confronted with stereotypical cues. Using controlled synthetic conversations, we show that models extract demographic attributes from stereotypical cues and encode them in latent user representations; notably, for several groups, these inferences persist even when users explicitly identify with a different demographic group. I will also present work in which we examine the methodological foundations of persona-based bias research by comparing six commonly used sociodemographic cues across seven LLMs on a range of tasks. Although outputs generated from different cues are often correlated, we observe substantial variance in how personas are realized, underscoring LLMs' sensitivity to prompt formulation and cautioning against drawing conclusions from a single cue. Together, our findings highlight both the prevalence and malleability of demographic inference in LLMs, and argue for greater transparency, methodological rigor, and user control in personalization research and deployment.

Social (un)safety of LLMs

January 28, 2026

Talk, Deep Tech Day 2026, Amsterdam, The Netherlands

Amsterdam AI invited me to give a talk as part of their “Technology for people” session at the Deep Tech Day 2026.

Visit to MilaNLP

November 24, 2025

Oral presentation, MilaNLP, Bocconi University, Milan, Italy

Presentation of “Reading Between the Prompts: How Stereotypes Shape LLM’s Implicit Personalization” and ongoing work.

NatWest Group DS Seminar Series

August 28, 2025

Oral presentation, NatWest Group DS Seminar Series, Online

Presentation of “Reading Between the Prompts: How Stereotypes Shape LLM’s Implicit Personalization”.

HumanCLAIM Workshop

March 26, 2025

Poster presentation, HumanCLAIM Workshop, Göttingen, Germany

Poster presentation of “Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation”.

Comparing and Mitigating Bias and Toxicity Across Languages

March 14, 2025

Talk, CL Group Seminar, Groningen, The Netherlands

Large language models (LLMs) are used by vast numbers of speakers around the world and show remarkable performance in many non-English languages. However, they often receive safety fine-tuning only in English, if at all, and their performance is known to be inconsistent across languages. There is therefore a need to investigate the extent to which LLMs exhibit harmful biases and toxic behaviors across languages, and how such harmful behaviors can best be reduced. In this talk, I will discuss my work showing that the stereotypical bias exhibited by LLMs differs significantly depending on the language they are prompted in. Furthermore, we show that mitigation of these stereotypical biases and toxic behaviors performed in English transfers to other languages, though often at the expense of decreased language generation ability in those non-English languages.