Me
Ruochen Wang 王若宸
OpenAI

Abstract

I am a Research Scientist at OpenAI, working on Sora.

I have a strong interest in ventures and startups.

Education

[01/2020 - 12/2024] Ph.D. in Computer Science, University of California, Los Angeles
[09/2015 - 08/2019] B.S. in Computer Science and Statistics, University of Michigan, Ann Arbor
[09/2013 - 07/2015] B.S. in Finance Honors Program (transferred), Shanghai University of Finance and Economics

Awards (Selected)

  • Outstanding Graduate Student (1 per department) - UCLA CS Department, 05/2022.
  • Outstanding Paper Award (1/8) - ICLR, 04/2021.
  • Award of Excellence - Microsoft Research Asia, 09/2019.
  • Highest Distinction Graduate - The University of Michigan, 08/2019.
  • Fung’s Excellence Scholarship - UC Berkeley Graduate Admission Committee, 03/2019.
  • Outstanding Intern Award - SenseTime, 01/2019.
  • James B. Angell Scholar - The University of Michigan, 2017-2019.
  • Shanghai Scholarship, 2014.
  • Renmin Scholarship - First Prize, 2014.

Publications and Manuscripts

  • VisualThinker-R1-Zero: The Multimodal “Aha Moment” on 2B Base Models. Preprint. [Paper] [Code] (TurningPoint AI)
  • MOSSBench: Is Your Multimodal Language Model Oversensitive to Safe Queries? ICLR, 2025. (TurningPoint AI)
  • Large Language Models are Interpretable Learners. ICLR, 2025. [Paper] [Code] (Google)
  • The Crystal Ball Hypothesis in Diffusion Models: Anticipating Object Positions from Initial Noise. ICLR, 2025. [Paper] (TurningPoint AI)
  • Solving for X and Beyond: Can Large Language Models Solve Complex Math Problems with More-Than-Two Unknowns? EMNLP, 2024.
  • Understanding the Impact of Negative Prompts: When and How Do They Take Effect? ECCV, 2024. [Paper] (TurningPoint AI)
  • MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion. arXiv, 2024. (TurningPoint AI)
  • DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers. EMNLP, 2024. (TurningPoint AI)
  • One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts. ICML, 2024. (TurningPoint AI)
  • On Discrete Prompt Optimization for Diffusion Models. ICML, 2024. (Google)
  • Ameliorate Spurious Correlations in Dataset Condensation. ICML, 2024. [Paper]
  • Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory. ICML, 2023. [Paper]
  • FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning. CVPR, 2023. [Paper]
  • DC-BENCH: Dataset Condensation Benchmark. NeurIPS, 2022.
  • Efficient Non-Parametric Optimizer Search for Diverse Tasks. NeurIPS, 2022. [Paper] [Code]
  • Generalizing Few-Shot NAS with Gradient Matching. ICLR, 2022. [Paper] [Code]
  • Learning to Schedule Learning Rate with Graph Neural Networks. ICLR, 2022. [Paper]
  • RANK-NOSH: Efficient Predictor-Based NAS via Non-Uniform Successive Halving. ICCV, 2021. [Paper] [Code]
  • Rethinking Architecture Selection in Differentiable NAS. ICLR, 2021. [Paper] [Code] Outstanding Paper Award (1/8).
  • DrNAS: Dirichlet Neural Architecture Search. ICLR, 2021. [Paper] [Code]

Miscellaneous

Reading and Martial Arts

  • I have a somewhat chaotic reading habit, jumping between around ten books at a time. Topics include, but are not limited to, history, philosophy, business, science, and engineering.
  • I practice Katori Shintō-ryū (main branch), trained directly by the current Shihan Dai (師範代).