Welcome to the Center for Responsible AI at New York University

Our goal is to build a future in which responsible AI is synonymous with AI. Our work centers on interdisciplinary research, technology policy, and education and training for AI practitioners, decision makers, and the public at large.

What is responsible AI? We use this term to refer to making the design, development, and use of AI socially sustainable: using technology for good while controlling the risks. Responsible AI is about respecting human values, ensuring fairness, maintaining transparency, and upholding accountability. It’s about taking hype and magical thinking out of the conversation about AI, and about giving people the ability to understand, control, and take responsibility for AI-assisted decisions.

News
Sep 26, 2025 Launch of the New York AI Exchange, our ambitious new initiative bringing together education, research, policy and practice to strengthen New York’s AI ecosystem. Learn more here.
Sep 3, 2025 The #RAIforUkraine program kicked off in Fall 2025 with a record number of applicants: 40 new Research Fellows were selected from nearly 90 submissions, joined by 21 new mentors and many returning colleagues, together representing 12 countries. Watch the Open House recording here.
Aug 10, 2025 Four papers developed through #RAIforUkraine collaborations accepted to the 51st International Conference on Very Large Data Bases (VLDB 2025) in London. Learn more here.
Jul 29, 2025 Applications for the #RAIforUkraine Fall 2025 Cohort are NOW open! Are you a prospective student or mentor? Click here to learn more and get involved.
Jul 28, 2025 NYU Tandon Profs. Julia Stoyanovich and Ludovic Righetti receive funding from the NSF’s Ethical and Responsible Research (ER2) program for a new project. Learn more here.
May 21, 2025 On May 21, Julia Stoyanovich delivered an AI 101 presentation to New York State legislators and staff as part of New York State’s AI Week.
May 7, 2025 On May 7-9, NYU R/AI and the Center for Robotics and Embodied Intelligence co-hosted a three-day event in partnership with the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI), on ‘Promoting Responsible Innovation in AI for Peace and Security.’ Learn more about the event.
May 1, 2025 On May 1, Julia Stoyanovich delivered the CISE Distinguished Lecture, titled “Follow the Data! Responsible AI Starts with Responsible Data Management,” hosted by the NSF.
Apr 8, 2025 On April 8, Julia Stoyanovich testified before the U.S. House of Representatives at a Research & Technology Subcommittee Hearing of the Committee on Science, Space, and Technology, titled “DeepSeek: A Deep Dive.”
Feb 28, 2025 On February 28, Julia Stoyanovich led a session on AI’s fundamentals and ethical implications at a high-level U.N. workshop on AI and International Humanitarian Law (IHL), a gathering of diplomats, U.N. representatives, and legal experts shaping global discussions on AI’s role in warfare.
Selected Publications
  1. Estimating the impact of the Russian invasion on the displacement of graduating high school students in Ukraine
    Tetiana Zakharchenko, Andrew Bell, Nazarii Drushchak, Oleksandra Konopatska, Falaah Arif Khan, and Julia Stoyanovich
    Nature Communications 2025
  2. ShaRP: Explaining Rankings and Preferences with Shapley Values
    Venetia Pliatsika, João Fonseca, Kateryna Akhynko, Ivan Shevchenko, and Julia Stoyanovich
    Proc. VLDB Endow. 2025
  3. Still More Shades of Null: An Evaluation Suite for Responsible Missing Value Imputation
    Falaah Arif Khan, Denys Herasymuk, Nazar Protsiv, and Julia Stoyanovich
    Proc. VLDB Endow. 2025
  4. SHAP-based Explanations are Sensitive to Feature Representation
    Hyunseung Hwang, Andrew Bell, João Fonseca, Venetia Pliatsika, Julia Stoyanovich, and Steven Euijong Whang
    In Conference on Fairness, Accountability, and Transparency, ACM FAccT 2025
  5. CREDAL: Close Reading of Data Models
    George Fletcher, Olha Nahurna, Matvii Prytula, and Julia Stoyanovich
    In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA) at ACM SIGMOD 2025
  6. Responsible Model Selection with Virny and VirnyView
    Denys Herasymuk, Falaah Arif Khan, and Julia Stoyanovich
    In Companion of the International Conference on Management of Data, ACM SIGMOD/PODS 2024
  7. Epistemic Parity: Reproducibility as an Evaluation Metric for Differential Privacy
    Lucas Rosenblatt, Bernease Herman, Anastasia Holovenko, Wonkwon Lee, Joshua R. Loftus, Elizabeth McKinnie, Taras Rumezhak, Andrii Stadnik, Bill Howe, and Julia Stoyanovich
    SIGMOD Rec. 2024
  8. Responsible AI literacy: A stakeholder-first approach
    Daniel Dominguez, and Julia Stoyanovich
    Big Data and Society 2023
  9. Fairness in Ranking, Part I: Score-Based Ranking
    Meike Zehlike, Ke Yang, and Julia Stoyanovich
    ACM Computing Surveys 2023
  10. A Simple and Practical Method for Reducing the Disparate Impact of Differential Privacy
    Lucas Rosenblatt, Julia Stoyanovich, and Christopher Musco
    In Thirty-Eighth AAAI Conference on Artificial Intelligence 2024