Papers by Madhulika Srikumar
SSRN Electronic Journal, 2020
for their collegial engagement on these issues and this project, as well as to all the individuals and organizations who contributed comments on the draft data visualization we released in summer 2019. An updated and final data visualization accompanies this report: thank you to Melissa Axelrod and Arushi Singh for their thoughtfulness and significant skill in its production.

Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Until now, most research into algorithmic transparency has focused on explainability. Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how ...
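The paper surveys methods for assessing uncertainty without prescribing a single one; a minimal illustration of the idea is the Shannon entropy of a classifier's predictive distribution, which is low when the model commits to one class and high when probability mass is spread out (the `predictive_entropy` helper below is our own, not from the paper):

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a predictive distribution.

    Higher entropy means the model is less certain about its prediction;
    this is one simple quantity that could be communicated to stakeholders.
    """
    probs = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return float(-np.sum(probs * np.log(probs)))

confident = np.array([0.97, 0.02, 0.01])  # model strongly favors class 0
uncertain = np.array([0.34, 0.33, 0.33])  # model is close to indifferent

print(predictive_entropy(confident))  # small value
print(predictive_entropy(uncertain))  # near log(3), the maximum for 3 classes
```

A downstream system could use such a score to flag low-confidence predictions for human review rather than acting on them automatically.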

Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021
Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Until now, most research into algorithmic transparency has focused on explainability. Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how ...
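Another common family of uncertainty estimators, which the abstract's "methods for assessing uncertainty" plausibly covers though the paper itself should be consulted, is ensemble disagreement: train several models and treat the spread of their predictions as a signal of epistemic uncertainty. A sketch with hypothetical, hard-coded member predictions:

```python
import numpy as np

# Hypothetical predictions from 5 independently trained ensemble members
# for a single input over 3 classes (values are illustrative only).
member_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.65, 0.25, 0.10],
    [0.10, 0.80, 0.10],
    [0.60, 0.30, 0.10],
    [0.15, 0.75, 0.10],
])

# The averaged distribution is the ensemble's prediction; the per-class
# standard deviation across members measures how much they disagree.
mean_probs = member_probs.mean(axis=0)
disagreement = float(member_probs.std(axis=0).max())

print(mean_probs)      # still a valid probability distribution
print(disagreement)    # large spread: members disagree on classes 0 vs 1
```

High disagreement suggests the models lack the knowledge to settle the case, which is exactly the situation the paper argues stakeholders should be told about.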

ArXiv, 2021
Copyright held by the owner/author(s). CHI’21, May 8-13, 2021, Online Virtual Conference. ACM 978-1-4503-6819-3/20/04. https://doi.org/10.1145/3334480.XXXXXXX Abstract: Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs [14]. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders [2]. In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations, within a particular industry such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.