2020, Nature Machine Intelligence
…
As artificial intelligence (AI) becomes increasingly central to society, establishing a framework to connect algorithm interpretability with public trust is essential. This paper discusses how recent regulatory trends have led to increased demands for transparency in AI systems, emphasizing the need for accessible explanations tailored to various stakeholders. By exploring the nature of interpretability, the paper raises critical questions about what explanations are necessary, to whom they should be directed, and how their effectiveness can be assessed, ultimately aiming to enhance trust and accountability in algorithm-assisted decision-making.
AI & SOCIETY, 2020
The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears of the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency when it comes to how the general public come to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision...
This panel will explore algorithmic authority as it manifests and plays out across multiple domains. Algorithmic authority refers to the power of algorithms to manage human action and influence what information is accessible to users. Algorithms increasingly have the ability to affect everyday life, work practices, and economic systems through automated decision-making and interpretation of "big data". Cases of algorithmic authority include algorithmically curating news and social media feeds, evaluating job performance, matching dates, and hiring and firing employees. This panel will bring together researchers of quantified self, healthcare, digital labor, social media, and the sharing economy to deepen the emerging discourses on the ethics, politics, and economics of algorithmic authority in multiple domains.
Information Systems Frontiers
example the responsible design (Dennehy et al., 2021) and governance (Mäntymäki et al., 2022b) of AI systems. While organisations are increasingly investing in ethical AI and Responsible AI (RAI) (Zimmer et al., 2022), recent reports suggest that this comes at a cost and may lead to burnout in responsible-AI teams (Heikkilä, 2022). Thus, it is critical to consider how we educate about RAI (Grøder et al., 2022) and rethink our traditional learning designs (Pappas & Giannakos, 2021), as this can influence end-users' perceptions towards AI applications (Schmager et al., 2023) as well as how future employees approach the design and implementation of AI applications (Rakova et al., 2021; Vassilakopoulou et al., 2022). The use of algorithmic decision-making and decision-support processes, particularly AI, is becoming increasingly pervasive in the public sector, including in high-risk application areas such as healthcare, traffic, and finance (European Commission, 2020). Against this backdrop, there is growing concern over the ethical use and safety of AI, fuelled by reports of ungoverned military applications (Butcher and Beridze, 2019; Dignum, 2020), privacy violations attributed to facial recognition technologies used by the police (Rezende, 2022), unwanted biases exhibited by AI applications used by courts (Imai et al., 2020), and racial biases in clinical algorithms (Vyas et al., 2020). The opacity and lack of explainability frequently attributed to AI systems make evaluating the trustworthiness of algorithmic decisions challenging even for technical experts, let alone the public. Together with the algorithm-propelled proliferation of misinformation, hate speech, and polarising content on social media platforms, there is a high risk of erosion of trust in algorithmic systems used by the public sector (Janssen et al., 2020). Ensuring that people can trust the algorithmic processes is essential not only for reaping the potential benefits from AI (Dignum, 2020) but also for fostering trust and resilience at a societal level. AI researchers and practitioners have expressed their fears about AI systems being developed that are...
Social Science Computer Review, 2020
Computational artificial intelligence (AI) algorithms are increasingly used to support decision making by governments. Yet algorithms often remain opaque to the decision makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision making in three situations: humans making decisions (1) without any support of algorithms, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make the correct decisions given various scenarios, while BR and ML algorithms could provide correct or incorrect suggestions to the decision maker. This enabled us to evaluate whether the participants were able to understand the limitations of BR and ML. The experiment shows that algorithms help decision makers to make more correct decisions. The findings suggest that explainable AI combined with experience helps them detect incorrect suggestions made by algorithms. However, even experienc...
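To make the contrast concrete, the following minimal Python sketch (not drawn from the study; the loan-approval framing, thresholds, and training data are hypothetical) illustrates the two kinds of algorithmic support compared above: a transparent, hand-coded business rule versus a trained machine-learning model, each offering a suggestion that the human decision maker may accept or reject.

```python
# Minimal sketch (assumed loan-approval scenario, not from the paper) contrasting
# business-rule (BR) support with machine-learning (ML) support for a human decision maker.
from sklearn.tree import DecisionTreeClassifier

def business_rule_suggestion(income: float, debt: float) -> bool:
    """Transparent, hand-coded rule: suggest approval only if debt stays below 40% of income."""
    return debt < 0.4 * income

# Hypothetical training data: [income, debt] -> approved (1) / rejected (0)
X_train = [[50_000, 10_000], [30_000, 20_000], [80_000, 60_000], [40_000, 5_000]]
y_train = [1, 0, 0, 1]

ml_model = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

def ml_suggestion(income: float, debt: float) -> bool:
    """Learned suggestion; its internal logic is opaque to the user and may be incorrect."""
    return bool(ml_model.predict([[income, debt]])[0])

# The human stays in the loop: both outputs are suggestions, not decisions.
case = (45_000, 25_000)
print("Business rule suggests approve:", business_rule_suggestion(*case))
print("ML model suggests approve:     ", ml_suggestion(*case))
```

The point of the contrast is that the rule's logic is inspectable at a glance, whereas the model's decision boundary is learned from data and may err in ways the decision maker cannot easily anticipate, which is what the experiment probes when suggestions are deliberately correct or incorrect.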
Business Ethics Quarterly, 2021
Businesses increasingly rely on algorithms that are data-trained sets of decision rules in order to implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a "right to explanation." Our contention is that we can address much of the problem of algorithmic transparency by rethinking the right to informed consent in the age of artificial intelligence. It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and complete transaction with individual autonomy as its moral foundation. Such a view is insufficient, especially when data is used in a secondary, non-contextual, and unpredictable manner, which is the inescapable nature of advanced AI systems. We submit that an alternative view of informed consent, as an assurance of trust for incomplete transactions, allows for an understanding of why the rationale of informed consent already entails a right to ex post explanation.
Proceedings of the 53rd Hawaii International Conference on System Sciences, 2020
We explore how people developing or using a system with a machine-learning (ML) component come to understand the capabilities and challenges of ML. We draw on the social construction of technology (SCOT) tradition to frame our analysis of interviews and discussion board posts involving designers and users of an ML-supported citizen-science crowdsourcing project named Gravity Spy. We extend SCOT by anchoring our investigation in the different uses of the technology. We find that the type of understandings achieved by groups having less interaction with the technology is shaped more by outside influences and less by the specifics of the system and its role in the project. This initial understanding of how different participants understand and engage with ML points to challenges that need to be overcome to help users of a system deal with the opaque position that ML often holds in a work system.
Cambridge University Press eBooks, 2021
2023 ACM Conference on Fairness, Accountability, and Transparency
Public attention to the explainability of artificial intelligence (AI) systems has risen in recent years, along with calls for methodologies that support human oversight. This has translated into a proliferation of research outputs, such as from Explainable AI, to enhance transparency and control for system debugging and monitoring, and intelligibility of system process and output for user services. Yet, such outputs are difficult to adopt on a practical level due to the lack of a common regulatory baseline and the contextual nature of explanations. Governmental policies are now attempting to address this need; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of these policies and standards on explainability in the EU, US, and UK. Through a rigorous survey of policy documents, we first contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions and requirements for explanations. This might be due to the tendency to frame explanations primarily as a risk-management tool for AI oversight, but also due to the lack of consensus on what constitutes a valid algorithmic explanation, and how feasible the implementation and deployment of such explanations are across the stakeholders of an organization. Informed by AI explainability research, we then conduct a gap analysis of existing policies, which leads us to formulate a set of recommendations on how to address explainability in regulations for AI systems, especially discussing the definition, feasibility, and usability of explanations, as well as allocating accountability to explanation providers.
ArXiv, 2021
Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs [14]. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders [2]. In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations, within a particular industry such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
Media Theory, 2023
AI and Transparency, 2023
ArXiv, 2020
Information, Communication & Society, 2018
Digital Governance: Confronting the Challenges Posed by Artificial Intelligence, 2024
Journal of Metaverse, 2023
Data & Policy, 2022
International Journal of Advanced Computer Science and Applications, 2025
Big Data & Society, 2019
AI & SOCIETY, 2022
IEEE Technology and Society Magazine, 2023