The ongoing war in Ukraine has severely disrupted the lives of
hundreds of thousands of people, significantly impacting university
students by displacing many and interrupting their education. To
address this, NYU R/AI launched #RAIforUkraine, a fully remote
academic research program in partnership with the Ukrainian Catholic University (UCU)
in Lviv, Ukraine. In the short term, the program aims to provide a
sense of normalcy and high-quality research opportunities to
students in Ukraine. Its long-term goal is to strengthen Ukraine’s
research capacity in responsible AI.
#RAIforUkraine launched in June 2022 and has been running on the
academic calendar (September through August) since Fall 2023. The
program is open to advanced undergraduate and graduate students who live in
Ukraine and are enrolled in degree programs in computer science,
information systems and related fields at accredited Ukrainian
universities. These students—RAI Research Fellows—are
mentored by academic researchers from U.S. and European universities,
and conduct cutting-edge collaborative research on a range of
responsible AI topics. Students receive academic credit and
competitive stipends.
Since 2022, our program has supported 112 students from 18 Ukrainian universities, including 40 new recruits in the Fall 2025 cohort. Guided by 70 mentors (faculty, postdocs, and PhD students at 35 academic institutions across 12 countries), our Research Fellows have carried out more than 50 projects, leading to 17 peer-reviewed publications in top computer science and data science venues to date, with more on the way as current projects are completed. As the program grows, so do these achievements, strengthening global collaboration and fostering peace and cooperation.
For Prospective Research Fellows
Gain research experience: You will work on a RAI research project of
your choice, under the guidance of distinguished faculty from
universities across the U.S. and Europe, and in close collaboration with
their doctoral students or postdocs.
Build your resume: You will receive an affiliation as a Research
Fellow at the NYU Center for Responsible AI, enhancing your academic
profile with international credibility.
Receive academic credit / stipend: You will receive academic
credit towards your degree in the Fall semester, and a competitive
stipend if you are selected to continue in the Spring and Summer
terms.
Establish global collaborations: You will collaborate with peers
from diverse backgrounds, gaining valuable international perspectives
and cultural insights.
Applications for the Fall 2025 cohort are now closed. Stay tuned for further updates!
"This program had a great balance between practical and theory-based learning. We spoke a lot about ethics, general fairness, philosophy and sociology. But it was also practical because we were looking at data based on everything we had discussed."
A. Holovenko, 2022 RAI Research Fellow, graduate student at Ukrainian Catholic University (UCU)
"I met new people and I had never worked in this field and it was new to me. I got to learn what its about and we read others papers. Previously, I only had exposure to Machine Learning research, but this time I got to read papers on sociology which gave me a new perspective because I had no idea about it before. Just learnt a bunch of new things I had never tried or experienced."A. Standnik, 2022 RAI Research Fellow, undergraduate student, UCU
"For me, this internship was an invaluable opportunity to learn new concepts 'on the job.' I particularly appreciated its structured approach, which included both hands-on projects and weekly lectures covering various topics in Responsible AI."D. Herasymuk, 2023 RAI Research Fellow, graduate student, UCU
"This program is extremely interesting and allowed me to meet incredible people with whom we wrote a paper and present it at FAccT 2023. Also, we continue to work on further research. This program introduced me to the area of Responsible AI, which formed the basis of my master's thesis and made me interested in continuing this topic in PhD."N. Drushchak, 2023 RAI Research Fellow, graduate student, UCU
"I would like to thank the organizers and mentors for the opportunity to learn research. High-level organization, program structure, and constant support from mentors are three key factors that allowed me to improve myself. This program is a great example of such high-quality research training. The experience has definitely helped me understand the essence of research and has been imprinted on me for years."D. Orel, 2023 RAI Research Fellow, graduate student, NaUKMA
"This collaboration gave me an opportunity to write an "A" bachelor's thesis, and I was able to implement my research for a practical application in a company called RelationalAI, where I got the Research Intern position. Overall, the program gave me great connections and an opportunity to bring my ideas into reality. My mentors helped me at every step of that journey, and none of this would’ve been possible without the NYU R/AI."M. Bondarenko, 2023 RAI Research Fellow, graduate student, UCU
"RAI Research Program turned my 3rd year at UCU into an incredible journey in an academic environment! Throughout the year, I’ve gained hands-on experience in going from an abstract idea to a prototype, iteratively refining it, and finally wrapping up the results concisely. This was very different from my previous experience and taught me to think out of the box and be brave with my ideas! The program is a great opportunity for young students to try themselves in real high-quality research, which is incredibly important for Ukrainian academia."R. Mutel, 2023 RAI Research Fellow, undergraduate student, UCU
For Prospective Mentors
Support Ukraine: You will support Ukrainian students during
critical times, helping maintain and elevate educational standards.
Establish global collaborations: You will engage in cutting-edge
research with motivated and talented students, leading to
potential co-authored publications. These students will receive
academic credit and competitive stipends, with funding provided by
NYU R/AI.
Enhance your mentorship skills: You will enhance your mentorship
skills by guiding students through collaborative research projects,
and receive acknowledgement for your mentorship efforts, bolstering
your academic and professional profile.
Participate in cultural exchange: You will gain insights into
diverse cultural perspectives, enhancing your global understanding
and intercultural skills.
Expressions of interest for the Fall 2025 cohort are now closed. Beyond Fall 2025, please reach out to Julia Stoyanovich at [email protected]. We are excited to hear from you!
For Prospective Donors
Support Ukraine: You will leave a lasting impact on the
educational landscape in Ukraine by empowering and supporting the
next generation of scholars during a critical time.
Build a better future: You will contribute to the development of
responsible AI principles and practices that will shape the future
of technology and society. Your contribution will help make
“responsible AI” synonymous with “AI”.
Strengthen international collaboration: You will support a
unique program that bridges cultural and geographical gaps,
promoting peace and cooperation between nations. By doing so, you
will help create a diverse, international research community,
fostering cross-cultural understanding and collaboration.
Click here to support #RAIforUkraine. We greatly appreciate your generosity and commitment to empowering the next generation of Ukrainian scholars!
Selected Publications
Estimating the impact of the Russian invasion on the displacement of graduating high school students in Ukraine
Tetiana Zakharchenko, Andrew Bell, Nazarii Drushchak, Oleksandra Konopatska, Falaah Arif Khan, and Julia Stoyanovich
@article{ukraine_edu25,title={Estimating the impact of the {Russian} invasion on the displacement of graduating high school students in {Ukraine}},author={Zakharchenko, Tetiana and Bell, Andrew and Drushchak, Nazarii and Konopatska, Oleksandra and Khan, Falaah Arif and Stoyanovich, Julia},year={2025},journal={Nature Communications},keywords={journal,education,policy,RAIforUkraine},}
ShaRP: Explaining Rankings and Preferences with Shapley Values
Venetia Pliatsika, João Fonseca, Kateryna Akhynko, Ivan Shevchenko, and Julia Stoyanovich
@article{sharp,author={Pliatsika, Venetia and Fonseca, Jo{\~{a}}o and Akhynko, Kateryna and Shevchenko, Ivan and Stoyanovich, Julia},title={{ShaRP}: Explaining Rankings and Preferences with Shapley Values},journal={Proc. {VLDB} Endow.},volume={18},number={11},year={2025},doi={10.14778/3749646.3749682},keywords={journal,data,ranking,explainability,transparency,RAIforUkraine}}
Still More Shades of Null: An Evaluation Suite for Responsible Missing Value Imputation
Falaah Arif Khan, Denys Herasymuk, Nazar Protsiv, and Julia Stoyanovich
@article{shades,title={Still More Shades of {Null}: An Evaluation Suite for Responsible Missing Value Imputation},author={{Arif Khan}, Falaah and Herasymuk, Denys and Protsiv, Nazar and Stoyanovich, Julia},year={2025},journal={Proc. {VLDB} Endow.},volume={18},number={9},doi={10.14778/3746405.3746416},keywords={journal,RAIforUkraine,data}}
Measurement and Metrics for Content Moderation: The Multi-Dimensional Dynamics of Engagement and Content Removal on Facebook
Laura Edelson, Borys Kovba, Hanna Yershova, Austin Botelho, Damon McCoy, and Tobias Lauinger
@article{Edelson_Kovba_Yershova_Botelho_McCoy_Lauinger_2025,title={Measurement and Metrics for Content Moderation: The Multi-Dimensional Dynamics of Engagement and Content Removal on Facebook},volume={2},url={https://tsjournal.org/index.php/jots/article/view/220},doi={10.54501/jots.v2i5.220},number={5},journal={Journal of Online Trust and Safety},author={Edelson, Laura and Kovba, Borys and Yershova, Hanna and Botelho, Austin and McCoy, Damon and Lauinger, Tobias},year={2025},keywords={RAIforUkraine},}
CREDAL: Close Reading of Data Models
George Fletcher, Olha Nahurna, Matvii Prytula, and Julia Stoyanovich
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA) at ACM SIGMOD 2025
@inproceedings{credal2025,title={{CREDAL}: Close Reading of Data Models},author={Fletcher, George and Nahurna, Olha and Prytula, Matvii and Stoyanovich, Julia},year={2025},booktitle={Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA) at ACM SIGMOD},url={https://dl.acm.org/doi/10.1145/3736733.3736737},keywords={data,RAIforUkraine}}
ONION: A Multi-Layered Framework for Participatory ER Design
Viktoriia Makovska, George Fletcher, and Julia Stoyanovich
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA) at ACM SIGMOD 2025
@inproceedings{onion2025,title={{ONION}: A Multi-Layered Framework for Participatory ER Design},author={Makovska, Viktoriia and Fletcher, George and Stoyanovich, Julia},year={2025},booktitle={Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA) at ACM SIGMOD},url={https://dl.acm.org/doi/10.1145/3736733.3736736},keywords={data,RAIforUkraine}}
Reducing Human Effort in Evaluating Small and Medium Language Models as Students and as Teachers
Oleh Prostakov, Viacheslav Hodlevskyi, Nassim Bouarour, Adam Sanchez-Ayte, Noha Ibrahim, and Sihem Amer-Yahia
In Proceedings of the 6th Workshop on Data Science with Human in the Loop (DaSH) at VLDB 2025
@inproceedings{prostakov25,title={Reducing Human Effort in Evaluating Small and Medium Language Models as Students and as Teachers},author={Prostakov, Oleh and Hodlevskyi, Viacheslav and Bouarour, Nassim and Sanchez-Ayte, Adam and Ibrahim, Noha and Amer-Yahia, Sihem},year={2025},booktitle={Proceedings of the 6th Workshop on Data Science with Human in the Loop (DaSH) at VLDB},keywords={RAIforUkraine}}
On Adversarial Robustness of Language Models in Transfer Learning
Bohdan Turbal, Anastassia Mazur, Jiaxu Zhao, and Mykola Pechenizkiy
In Proceedings of the Workshop on Socially Responsible Language Modeling Research at NeurIPS 2024
@inproceedings{turbal24,title={On Adversarial Robustness of Language Models in Transfer Learning},author={Turbal, Bohdan and Mazur, Anastassia and Zhao, Jiaxu and Pechenizkiy, Mykola},booktitle={Proceedings of the Workshop on Socially Responsible Language Modeling Research at NeurIPS},year={2024},keywords={RAIforUkraine}}
Responsible Model Selection with Virny and VirnyView
Denys Herasymuk, Falaah Arif Khan, and Julia Stoyanovich
In Companion of the International Conference on Management of Data, SIGMOD/PODS, Santiago, Chile 2024
@inproceedings{DBLP:conf/sigmod/HerasymukKS24,author={Herasymuk, Denys and Khan, Falaah Arif and Stoyanovich, Julia},editor={Barcel{\'{o}}, Pablo and Pi, Nayat S{\'{a}}nchez and Meliou, Alexandra and Sudarshan, S.},title={Responsible Model Selection with Virny and VirnyView},booktitle={Companion of the International Conference on Management of Data, {SIGMOD/PODS}, Santiago, Chile},pages={488--491},publisher={{ACM}},year={2024},url={https://doi.org/10.1145/3626246.3654738},doi={10.1145/3626246.3654738},keywords={conference,demo,RAIforUkraine,data},author+an={1=self;2=self;3=self}}
Epistemic Parity: Reproducibility as an Evaluation Metric for Differential Privacy
Lucas Rosenblatt, Bernease Herman, Anastasia Holovenko, Wonkwon Lee, Joshua R. Loftus, Elizabeth McKinnie, Taras Rumezhak, Andrii Stadnik, Bill Howe, and Julia Stoyanovich
@article{DBLP:journals/sigmod/RosenblattHHLLMRSHS24,author={Rosenblatt, Lucas and Herman, Bernease and Holovenko, Anastasia and Lee, Wonkwon and Loftus, Joshua R. and McKinnie, Elizabeth and Rumezhak, Taras and Stadnik, Andrii and Howe, Bill and Stoyanovich, Julia},title={Epistemic Parity: Reproducibility as an Evaluation Metric for Differential Privacy},journal={{SIGMOD} Rec.},volume={53},number={1},pages={65--74},year={2024},url={https://doi.org/10.1145/3665252.3665267},doi={10.1145/3665252.3665267},keywords={journal,privacy,RAIforUkraine},author+an={1=self;3=self;4=self;6=self;7=self;8=self;10=self}}
The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice
Andrew Bell, Lucius Bynum, Nazarii Drushchak, Tetiana Zakharchenko, Lucas Rosenblatt, and Julia Stoyanovich
In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, FAccT, Chicago, IL, USA 2023
@inproceedings{DBLP:conf/fat/BellBDZRS23,author={Bell, Andrew and Bynum, Lucius and Drushchak, Nazarii and Zakharchenko, Tetiana and Rosenblatt, Lucas and Stoyanovich, Julia},title={The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice},booktitle={Proceedings of the {ACM} Conference on Fairness, Accountability, and Transparency, FAccT, Chicago, IL, USA},pages={400--422},publisher={{ACM}},year={2023},doi={10.1145/3593013.3594007},keywords={conference,fairness,RAIforUkraine},author+an={1=self;2=self;3=self;4=self;5=self;6=self}}
An Interactive Introduction to Causal Inference
Lucius E.J. Bynum, Falaah Arif Khan, Oleksandra Konopatska, Joshua R. Loftus, and Julia Stoyanovich
VISxAI: Workshop on Visualization for AI Explainability 2022
@article{bynum2022interactive,author={Bynum, Lucius E.J. and Khan, Falaah Arif and Konopatska, Oleksandra and Loftus, Joshua R. and Stoyanovich, Julia},title={An Interactive Introduction to Causal Inference},journal={VISxAI: Workshop on Visualization for AI Explainability},year={2022},site={https://r-ai.co/ci-playground},publisher={IEEE},keywords={workshop,education,playground,RAIforUkraine},}
Epistemic Parity: Reproducibility as an Evaluation Metric for Differential Privacy
Lucas Rosenblatt, Bernease Herman, Anastasia Holovenko, Wonkwon Lee, Joshua R. Loftus, Elizabeth McKinnie, Taras Rumezhak, Andrii Stadnik, Bill Howe, and Julia Stoyanovich
@article{DBLP:journals/pvldb/RosenblattHHLLM23,author={Rosenblatt, Lucas and Herman, Bernease and Holovenko, Anastasia and Lee, Wonkwon and Loftus, Joshua R. and Mckinnie, Elizabeth and Rumezhak, Taras and Stadnik, Andrii and Howe, Bill and Stoyanovich, Julia},title={Epistemic Parity: Reproducibility as an Evaluation Metric for Differential Privacy},journal={Proc. {VLDB} Endow.},volume={16},number={11},pages={3178--3191},year={2023},doi={10.14778/3611479.3611517},timestamp={Mon, 23 Oct 2023 16:16:16 +0200},biburl={https://dblp.org/rec/journals/pvldb/RosenblattHHLLM23.bib},keywords={journal,privacy,RAIforUkraine},author+an={1=self;3=self;4=self;6=self;7=self;8=self;10=self}}
On Fairness and Stability: Is Estimator Variance a Friend or a Foe?
Falaah Arif Khan, Denys Herasymuk, and Julia Stoyanovich
@article{DBLP:journals/corr/abs-2302-04525,author={Khan, Falaah Arif and Herasymuk, Denys and Stoyanovich, Julia},title={On Fairness and Stability: Is Estimator Variance a Friend or a Foe?},journal={CoRR},volume={abs/2302.04525},year={2023},doi={10.48550/arXiv.2302.04525},eprinttype={arXiv},eprint={2302.04525},keywords={working,fairness,stability,RAIforUkraine},}