Yulu Gan

Hi, I am Yulu Gan, a second-year CS PhD student at MIT working on AI and science, advised by Tomaso Poggio and Phillip Isola. My research draws inspiration from biological evolution to design algorithms and to enable algorithm-hardware co-design, with a focus on understanding and improving LLMs and VLMs — for example, neural thickets and evolution strategies at scale.

Updates

Invited talk at David Bau's group.
Invited talk at Cohere Labs.
Invited talk at MIT Performance Reading Group.

4 Recent Papers (* indicates equal contribution)

All publications →

Neural Thickets: Diverse Task Experts Are Dense Around Pretrained Weights

Yulu Gan, Phillip Isola

Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning

Xin Qiu*, Yulu Gan*, Conor F. Hayes*, Qiyao Liang, Yinggan Xu, Roberto Dailey, Elliot Meyerson, Babak Hodjat, Risto Miikkulainen

Self-Assembly of a Biologically Plausible Learning Circuit

Qianli Liao*, Liu Ziyin*, Yulu Gan*, Brian Cheung, Mark Harnett, Tomaso Poggio

On the Power of Decision Trees in Auto-Regressive Language Modeling

Yulu Gan, Tomer Galanti, Tomaso Poggio, Eran Malach

Selected Awards

McGovern Institute Student Fellow
Dr. Chiang Chen Fellowship
Frederick R. (1953) and Barbara Cronin Fellowship
AAAI Best Student Paper Award (project lead)
Second Place in RoboCup China Open (team lead)
First Place in China Mathematical Olympiad, Zhejiang Division

Reading & Talks

See all →