
IRoHS Lab

Overview

Our lab's work aims to build reliable machine learning systems that everyone can trust. We believe that equitable and reliable access to these systems is integral to cultivating broad-based societal trust in a technology as transformative as machine learning. At the Intelligent Robust and Honest Systems (IRoHS) Lab, we are interested in the following research questions:

  1. Auditing complex deployed models: Do machine learning models perform well when a fraction of their training data is compromised? Are they resilient to out-of-distribution or even adversarial inputs? How are deployed models regulated? See 1, 2, and 3.
  2. Proving fundamental limits on reliability: Is it possible to determine how robust any model can be under adverse conditions such as training- and test-time attacks? What can we say about the ease of learning such models, and their compliance with regulations? See 4 and 5.
  3. Building reliable models: Can we build models that are resilient to multiple types of adverse conditions? How do we utilize knowledge of fundamental limits to build better models? Are oft-overlooked methods such as kernel machines the path to interpretable and robust models? Can machine learning models outperform rule-based models in security-critical domains? See 6, 7, 8, 9, and 10.
  4. Learning with distributed data and models: In domains where data is scattered across entities with privacy and proprietary data concerns, how can performant models be trained? Can synthetic data and generative models be used to alleviate these concerns? Are distributed models reliable? See 11, 12, and 13.

We are also broadly interested in building user trust in complex machine learning systems by alleviating bias and providing users with the tools to avoid the intrusive aspects of these systems. At the IRoHS Lab, we use a broad set of mathematical tools from optimisation, optimal transport, learning theory, and graph theory, as well as an assortment of techniques from systems security, networking, and even qualitative data analysis to address these research questions. We are passionate about research and the scientific method, and we strive to be friendly, respectful, and inclusive.

We collaborate with researchers all over the world, with particularly strong connections to Princeton University (Prateek Mittal), Pennsylvania State University (Daniel Cullina), the University of Chicago (Ben Zhao, Nick Feamster), and King's College London (Isabela Parisio, Deborah Olukan), as well as within India, with close ties to the Indian Institute of Science (Danish Pruthi) and the Indian Institute of Technology Madras (Krishna Pillutla). Our group's research is generously supported by the SBI Foundation and C-MInDS at IIT Bombay.

Current Members

Popular repositories

  1. robust-ml-course-notes

     Notes consolidation for DS603 (TeX)

  2. log-loss-lower-bounds (forked from arjunbhagoji/log-loss-lower-bounds)

     New version of the log loss bounds code (Python)

  3. .github

