About Me

I am a Postdoctoral Scholar in the Computer Science Department at Stanford University, where I have the privilege of being advised by Prof. Sanmi Koyejo in the Stanford Trustworthy AI Research (STAIR) Lab.

Research Interests

I strive to advance trustworthy and responsible AI. In particular, I conduct research on causal learning and reasoning to facilitate and enhance the capabilities of intelligent systems, and on algorithmic fairness and computational justice to model and understand the social impact of computational technologies. My ultimate goal is to cultivate intelligence that is both safe and principled, guided by causal perspectives and methodologies, so that technology can improve our lives with transparent responsibility and clear purpose. I seek to foster a symbiotic dance between artificial and natural intelligence, where they inspire, collaborate with, and enhance each other to drive scientific discovery and support societal progress.

News

May 2025 Our paper “Reflection-Window Decoding: Text Generation with Selective Refinement” is accepted to ICML 2025. We propose a selective refinement framework, facilitated by a sliding reflection window, to address the sub-optimality of purely autoregressive LLM decoding.
January 2025 Our paper “Prompting Fairness: Integrating Causality to Debias Large Language Models” is accepted to ICLR 2025. We propose a causality-guided LLM debiasing framework that utilizes selection mechanisms to design various debiasing strategies.
September 2024 I am awarded the National Institute of Justice (NIJ) Graduate Research Fellowship. Thank you, NIJ!

Selected Publications

* denotes equal contribution

  1. arXiv Preprint
    Algorithmic Fairness amid Social Determinants: Reflection, Characterization, and Approach
    arXiv preprint arXiv:2508.08337, 2025.
  2. Reflection-Window Decoding: Text Generation with Selective Refinement
    In Proceedings of the 42nd International Conference on Machine Learning, 2025.
  3. Prompting Fairness: Integrating Causality to Debias Large Language Models
    In Proceedings of the 13th International Conference on Learning Representations (preliminary version titled "Steering LLMs Towards Unbiased Responses: A Causality-Guided Debiasing Framework"), 2025.
  4. ICLR Spotlight
    Procedural Fairness Through Decoupling Objectionable Data Generating Components
    Zeyu Tang, Jialu Wang, Yang Liu, Peter Spirtes, and Kun Zhang
    In Proceedings of the 12th International Conference on Learning Representations (preliminary version presented in NeurIPS 2023 AFT workshop), 2024.
  5. What-is and How-to for Fairness in Machine Learning: A Survey, Reflection, and Perspective
    Zeyu Tang, Jiji Zhang, and Kun Zhang
    ACM Computing Surveys, 2023.
  6. Tier Balancing: Towards Dynamic Fairness over Underlying Causal Factors
    Zeyu Tang, Yatong Chen, Yang Liu, and Kun Zhang
    In Proceedings of the 11th International Conference on Learning Representations (preliminary version presented in NeurIPS 2022 AFCP workshop), 2023.
  7. CLeaR Spotlight
    Attainability and Optimality: The Equalized Odds Fairness Revisited
    Zeyu Tang and Kun Zhang
    In Proceedings of the 1st Conference on Causal Learning and Reasoning, 2022.