Building multimodal AI that sees, reasons, and communicates with data

Based at York University, our research lab brings together NLP, visualization, and multimodal LLMs to make data interaction more responsible, interpretable, and human-centered.

RESEARCH THEMES

Multimodal LLMs for Visualization Understanding and Generation

Benchmarks and models for chart comprehension, narrative generation, and visual data communication, shaping how multimodal LLMs interpret, explain, and generate visualizations from and for real analytic workflows.

Agentic AI for Data Science Workflows

Building multimodal agents that plan actions, interact with analytic tools, refine outputs, and collaborate with humans across realistic, end-to-end data science workflows.

Human-Centered and Responsible AI for Data Visualization

Designing accessible, trustworthy, and goal-aligned AI systems that collaborate with people, mitigate bias and deception, and support diverse analytical needs in data visualization.

Visual and Interactive Document Analytics

Developing interfaces and visual analytics techniques that help users read, explore, and interpret text-rich documents.

Language Understanding, Summarization, and Evaluation

Advancing question answering, summarization, domain adaptation, and trustworthy evaluation methods that strengthen the foundations of language-driven analytics.