2019
Fairness in machine-assisted decision making is critical: a lack of fairness can harm individuals or groups, erode trust in institutions and systems, and reinforce structural discrimination. To avoid making or amplifying ethical mistakes, it is important to ensure that the algorithms we develop are fair and promote trust. We argue that the marriage of techniques from behavioral science and computer science is essential to developing algorithms that make ethical decisions and ensure the welfare of society. Specifically, we focus on the role of procedural justice, moral cognition, and social identity in promoting trust in algorithms, and we offer a road map for future research on the topic.
As AI changes the way decisions are made in organizations and governments, it is ever more important to ensure that these systems work according to values that diverse users and groups find important. Researchers have proposed numerous algorithmic techniques to formalize statistical fairness notions, but emerging work suggests that AI systems must account for the real-world contexts in which they will be embedded in order to work fairly in practice. These findings call for expanding the research focus beyond statistical fairness to one that includes a fundamental understanding of human uses and the social impacts of AI systems, a theme central to the HCI community. The HCI community can contribute novel understandings, methods, and techniques for incorporating human values and cultural norms into AI systems; address human biases in developing and using AI; and empower individual users and society to audit and control AI systems. Our goal is to bring together academic and industry researchers in the fields of HCI, ML and AI, and the social sciences to devise a cross-disciplinary research agenda for fair and responsible AI systems. This workshop will build on previous algorithmic fairness workshops at AI and ML conferences, map research and design opportunities for future innovations, and disseminate them in each community.
Artificial Intelligence and Law, 2022
Artificial intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies on the use of algorithmic judges in courts. In this paper, we investigate public perceptions of algorithmic judges. Across two experiments (N = 1,822) and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they trust human judges more and have greater intentions to go to court when a human (vs. an algorithmic) judge adjudicates. Additionally, we demonstrate that the extent to which individuals trust algorithmic and human judges depends on the nature of the case: trust in algorithmic judges is especially low when legal cases involve emotional complexities (vs. technically complex or uncomplicated cases).
AI and Ethics, 2021
There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness against which to test algorithms, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented within narrow and targeted fairness toolkits for algorithm assessments that are difficult to integrate into an algorithm's broader ethical assessment. In this paper, we derive lessons from ethical philosophy and welfare economics as they relate to the contextual factors relevant for fairness. In particular, we highlight the debate around the acceptability of particular inequalities and the inextricable links between fairness, welfare and a...
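To make the conflict among fairness definitions concrete, the sketch below (ours, not the paper's; the group labels and base rates are hypothetical) shows that when base rates differ across groups, a classifier satisfying equal opportunity necessarily violates demographic parity:

```python
# A minimal sketch (not from the paper): with unequal base rates across
# groups, a classifier generally cannot satisfy demographic parity and
# equal opportunity at the same time. All rates here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)              # two demographic groups
base_rate = np.where(group == 0, 0.6, 0.3)      # assumed unequal base rates
qualified = rng.random(n) < base_rate

# A classifier that accepts exactly the qualified applicants:
accepted = qualified.copy()

# Equal opportunity holds: true-positive rates match across groups (both 1.0)...
tpr = [accepted[(group == g) & qualified].mean() for g in (0, 1)]
# ...but demographic parity fails: acceptance rates track the base rates.
acc = [accepted[group == g].mean() for g in (0, 1)]
print("TPR by group:", tpr)                 # ~[1.0, 1.0]
print("Acceptance rate by group:", acc)     # ~[0.6, 0.3]
```

The same construction run in reverse (forcing equal acceptance rates) would instead violate equal opportunity, which is the tension the abstract refers to.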
Applied Sciences
In this study, we analyze "Discrimination", "Bias", "Fairness", and "Trustworthiness" as working variables in the context of the social impact of AI. We identify a set of specialized variables, such as security, privacy, and responsibility, that are used to operationalize the principles in the Principled AI International Framework. These variables are defined in such a way that they contribute to others of more general scope, such as the ones studied here, in what appears to be a generalization-specialization relationship. Our aim is to understand how available notions of bias, discrimination, fairness, and other related variables that must be assured during the software project's lifecycle (security, privacy, responsibility, etc.) can be used when developing trustworthy algorithmic decision-making systems (ADMS). Bias, discrimination, and fairness are mainly approached with an operational interest by the Principled AI International Framework...
arXiv, 2022
The paper contributes to the interdisciplinary analysis of fairness issues in automated algorithmic decisions. Section 1 shows that technical choices in supervised learning have social implications that need to be considered. Section 2 proposes a contextual approach to the issue of unintended group discrimination, i.e., decision rules that are facially neutral but generate disproportionate impacts across social groups (e.g., gender, race, or ethnicity). The contextualization focuses on the legal systems of the United States on the one hand and Europe on the other; in particular, legislation and case law tend to promote different standards of fairness on the two sides of the Atlantic. Section 3 is devoted to the explainability of algorithmic decisions; it confronts and attempts to cross-reference legal concepts (in European and French law) with technical concepts, and highlights the plurality, even polysemy, of European and French legal texts relating to the explainability of algorithmic decisions. The conclusion proposes directions for further research.
2021
Automated decision-making systems implemented in public life are typically standardized. One algorithmic decision-making system can replace thousands of human deciders. Each of the humans so replaced had her own decision-making criteria: some good, some bad, and some arbitrary. Is such arbitrariness of moral concern? We argue that an isolated arbitrary decision need not morally wrong the individual whom it misclassifies. However, if the same algorithms are applied across a public sphere, such as hiring or lending, a person could be excluded from a large number of opportunities. This harm persists even when the automated decision-making systems are "fair" on standard metrics of fairness. We argue that such arbitrariness at scale is morally problematic and propose technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harms we identify.
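The following toy simulation (ours, not the authors'; all numbers hypothetical) illustrates the arbitrariness-at-scale worry: one standardized cutoff excludes the same people from every opportunity, while many independently arbitrary deciders spread exclusions around:

```python
# A minimal sketch of the "arbitrariness at scale" concern (hypothetical
# numbers, not from the paper). One shared decision rule excludes the same
# people from every opportunity; many independent deciders, each with an
# arbitrary threshold, rarely exclude anyone from all of them.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_opps = 1_000, 50
score = rng.normal(size=n_people)                # latent qualification score

# Scenario A: one standardized algorithm, one fixed cutoff everywhere.
excluded_all_alg = score < 0.0                   # same ~500 people every time

# Scenario B: each opportunity applies its own arbitrary threshold.
cutoffs = rng.normal(scale=0.5, size=n_opps)
decisions = score[:, None] >= cutoffs[None, :]   # people x opportunities
excluded_all_human = ~decisions.any(axis=1)

print("Excluded from every opportunity:")
print("  standardized algorithm:", int(excluded_all_alg.sum()))   # ~500
print("  independent deciders:  ", int(excluded_all_human.sum())) # far fewer
```

Note that both scenarios can look identical on group-level fairness metrics; the harm lies in the correlation of exclusions across opportunities, not in any single decision.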
AI & Ethics, 2022
The use of predictive machine learning algorithms is increasingly common to guide, or even take, decisions in both public and private settings. Their use is touted by some as a potentially useful method to avoid discriminatory decisions since they are, allegedly, neutral and objective, and can be evaluated in ways no human decision can. Outsourcing a decision process (fully or partly) to an algorithm should allow organizations to clearly define the parameters of the decision and, in principle, to remove human biases. Yet, in practice, the use of algorithms can still be the source of wrongful discriminatory decisions based on at least three of their features: the data-mining process and the categorizations they rely on can replicate human biases, their automaticity and predictive design can lead them to rely on wrongful generalizations, and their opaque nature is at odds with democratic requirements. We highlight that the latter two aspects of algorithms and their significance for discrimination are too often overlooked in the contemporary literature. Though these problems are not all insurmountable, we argue that it is necessary to clearly define the conditions under which a machine learning decision tool can be used. We identify and propose three main guidelines to properly constrain the deployment of machine learning algorithms in society: algorithms should be vetted to ensure that they do not unduly affect historically marginalized groups; they should not systematically override or replace human decision-making processes; and the decision reached using an algorithm should always be explainable and justifiable.
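As one possible operationalization of the first guideline (not proposed in the paper itself), a pre-deployment vetting step might screen a model's decisions with the US "four-fifths" disparate-impact heuristic:

```python
# A hypothetical pre-deployment vetting check (not from the paper), using
# the US "four-fifths" disparate-impact rule as a screening heuristic.
import numpy as np

def passes_four_fifths(accepted: np.ndarray, group: np.ndarray,
                       protected, reference) -> bool:
    """True if the protected group's selection rate is at least 80% of
    the reference group's selection rate."""
    rate = lambda g: accepted[group == g].mean()
    return bool(rate(protected) >= 0.8 * rate(reference))

# Example: selection rates of 0.30 vs 0.45 fail the check (0.30 < 0.36).
accepted = np.array([1]*30 + [0]*70 + [1]*45 + [0]*55, dtype=bool)
group = np.array(["B"]*100 + ["A"]*100)
print(passes_four_fifths(accepted, group, protected="B", reference="A"))  # False
```

A check like this is only a screen, not a verdict; per the paper's argument, a failing result should trigger human review rather than an automatic pass/fail decision.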
2020
This panel will discuss the problems of bias and fairness in organizational use of AI algorithms. The panel will first put forth key issues regarding biases that arise when AI algorithms are applied to organizational processes. We will then propose a sociotechnical approach to bias mitigation and share proposals for how companies and policymakers can improve algorithmic fairness and mitigate bias within organizations. The panel brings together scholars examining social and technical aspects of bias and its mitigation from the perspectives of information systems, ethics, machine learning, robotics, and human capital. The panel will end with an open discussion of where the field of information systems can step in to guide the fair and ethical use of AI algorithms in the coming years.