Faculty News
Dr. Xulong Tang, an associate professor in the Department of Computer Science, is participating in a five-year, $5 million NSF ExpandQISE Track 2 grant. The project, titled “ExpandQISE: URI-PQI Collaboration - Application of Quantum Fundamentals to Advance Research and Workforce Development,” is led by the University of Rhode Island in collaboration with the Pittsburgh Quantum Institute, Carnegie Mellon University, and the University of Pittsburgh.
Dr. Ryan Shi, an assistant professor in the Department of Computer Science and the Intelligent Systems Program, recently received a Google Academic Research Award to support his work on better reaching farmers, particularly smallholder farmers in the Global South.
Dr. Xiaowei Jia, an assistant professor in the Department of Computer Science, received an Early Career Investigator grant from NASA's Earth Science Division for his project “Towards Generalizable, Fair, and Knowledge Guided Machine Learning for Monitoring Earth Systems.”
Student News
Senior computer science student Griffin J. Hurt has been named a 2024 NSF Graduate Research Fellowship Program (GRFP) scholar.
Two students from the School of Computing and Information (SCI) Department of Computer Science have been recognized for their undergraduate research by the Computing Research Association (CRA)!
Last month, 33 students (18 in person and 15 virtually) from the School of Computing and Information (SCI) attended the Grace Hopper Celebration (GHC) in Orlando, FL.
Colloquium Talks
In this talk, I present a unified framework for building embodied agents that can see, simulate, and reason. I begin by introducing methods for learning world simulators from data, arguing that visual reasoning—like textual reasoning—benefits from step-by-step processing.
This talk will highlight over a decade of research at the intersection of industrial engineering, mechanical engineering, human-computer interaction, and medicine. It will showcase how a team of engineers and medical professionals are transforming the way we prepare and assess the clinical competence of medical residents through advances in simulation technology.
As large language models (LLMs) such as ChatGPT and Gemini become increasingly integrated into research and operational workflows, a critical question arises: Can these systems be trusted to behave safely, predictably, and reliably under real-world conditions? This talk explores that question through recent findings, including our own, on how AI models behave when confronted with unexpected or adversarial inputs.