2008, Computers & Operations Research
To be fair or efficient or a bit of both. Originally published in Computers & Operations Research, 35(12), 3787–3806.
Management Science, 2012
This paper deals with a basic issue: How does one approach the problem of designing the “right” objective for a given resource allocation problem? The notion of what is right can be fairly nebulous; we consider two issues that we see as key: efficiency and fairness. We approach the problem of designing objectives that account for the natural tension between efficiency and fairness in the context of a framework that captures a number of resource allocation problems of interest to managers. More precisely, we consider a rich family of objectives that have been well studied in the literature for their fairness properties. We deal with the problem of selecting the appropriate objective from this family. We characterize the trade-off achieved between efficiency and fairness as one selects different objectives and develop several concrete managerial prescriptions for the selection problem based on this trade-off. Finally, we demonstrate the value of our framework in a case study that cons...
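The "rich family of objectives" studied for its fairness properties in this line of work is commonly the α-fairness family (α = 0 utilitarian, α = 1 proportional fairness, α → ∞ max-min). Below is a minimal sketch, assuming that family and a toy allocation problem with linear utilities; the weights and budget are illustrative inventions, not taken from the paper. It shows how total efficiency falls and the worst-off utility rises as α grows, which is the trade-off the selection problem is about.

```python
# Sketch: efficiency-fairness trade-off under alpha-fair objectives.
# Assumption: the family of objectives is the alpha-fairness family; the
# linear utilities, weights, and budget below are illustrative only.
import numpy as np

def alpha_fair_allocation(weights, budget, alpha):
    """Split `budget` among agents with utility u_i = w_i * x_i by maximizing
    the alpha-fair welfare sum((w_i x_i)^(1-alpha)) / (1-alpha).
    For linear utilities the optimum has x_i proportional to w_i^((1-alpha)/alpha)."""
    w = np.asarray(weights, dtype=float)
    if alpha == 0:  # utilitarian limit: give everything to the highest rate
        x = np.zeros_like(w)
        x[np.argmax(w)] = budget
        return x
    shares = w ** ((1.0 - alpha) / alpha)
    return budget * shares / shares.sum()

weights, budget = [3.0, 2.0, 1.0], 12.0
for alpha in [0.0, 0.5, 1.0, 2.0, 10.0]:
    x = alpha_fair_allocation(weights, budget, alpha)
    utilities = np.array(weights) * x
    print(f"alpha={alpha:>4}: allocation={np.round(x, 2)}, "
          f"efficiency(sum u)={utilities.sum():.1f}, min u={utilities.min():.2f}")
```

Running the sketch, α = 0 puts the whole budget on the most productive agent (maximal efficiency, zero utility for the others), while large α pushes allocations toward equal utilities at a cost in total utility.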
Res Publica, 2022
With the increasing use of algorithms in high-stakes areas such as criminal justice and health has come a significant concern about the fairness of prediction-based decision procedures. In this article I argue that a prominent class of mathematically incompatible performance parity criteria can all be understood as applications of John Broome's account of fairness as the proportional satisfaction of claims. On this interpretation these criteria do not disagree on what it means for an algorithm to be fair. Rather they express different understandings of what grounds a claim to a good being allocated by an algorithmic decision procedure. I then argue that an important implication of the Broomean interpretation is that it strengthens the case for outcome-based criteria. Finally, I consider how a version of the levelling-down objection to performance parity criteria arises within the Broomean account.
2008
FAIR AND EFFICIENT RESOURCE ALLOCATION: Bicriteria Models for Equitable Optimization. Włodzimierz Ogryczak, Institute of Control & Computation Engineering, Warsaw University of Technology, Nowowiejska 15/19, Warsaw, Poland. [email protected] ...
AI & Society
I consider statistical criteria of algorithmic fairness from the perspective of the _ideals_ of fairness to which these criteria are committed. I distinguish and describe three theoretical roles such ideals might play. The usefulness of this program is illustrated by taking Base Rate Tracking and its ratio variant as a case study. I identify and compare the ideals of these two criteria, then consider them in each of the aforementioned three roles for ideals. This ideals program may present a way forward in the normative evaluation of candidate statistical criteria of algorithmic fairness.
2020
An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence (AI) algorithms in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, provision of loans and many more realms. Since they now touch on many aspects of our lives, it is crucial to develop AI algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness, even when there is no intention for it. This paper presents an overview of the main concepts of identifying, measuring and improving algorithmic fairness when using AI algorithms. The paper begins by discussing the causes of algorithmic bias and unfairness and the common definitions and measures for fairness. Fairness-enhancing mechanisms are then reviewed and divided into pre-process, in-process and post-process mechanisms. A comprehensive comparison of the mechani...
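As a concrete illustration of the "common definitions and measures for fairness" such overviews review, here is a minimal sketch of two widely used group measures, demographic parity and equal opportunity, computed on made-up data; the arrays are illustrative and not drawn from the paper.

```python
# Sketch: two common group-fairness measures computed from predictions.
# The tiny arrays below are synthetic illustrative data.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between the two groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print("demographic parity gap:", demographic_parity_diff(y_pred, group))
print("equal opportunity gap:", equal_opportunity_diff(y_true, y_pred, group))
```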
AI & society, 2022
Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to the score, and adopting decisions based on the classification. Throughout our inquiry we use the COMPAS system, complemented by a radical simplification of it (our SAPMOC I and SAPMOC II models), as our running examples. Through these examples, we show how a system that is equally accurate for different groups may fail to comply with group-parity standards, owing to different base rates in the population. We discuss the general properties of the statistics determining the satisfaction of group-parity criteria and levels of accuracy. Using the distinction between scoring, classifying, and deciding, we argue that equalisation of classifications/decisions between groups can be achieved through group-dependent thresholding. We discuss contexts in which this approach may be meaningful and useful in pursuing policy objectives. We claim that the implementation of group-parity standards should be left to competent human decision-makers, under appropriate scrutiny, since it involves discretionary value-based political choices. Accordingly, predictive systems should be designed in such a way that relevant policy goals can be transparently implemented. Our paper presents three main contributions: (1) it addresses a complex predictive system through the lens of simplified toy models; (2) it argues for selective policy interventions on the different steps of automated decision-making; (3) it points to the limited significance of statistical notions of fairness to achieve social goals.
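The group-dependent thresholding idea mentioned in the abstract can be sketched generically: pick a separate score threshold per group so that both groups receive the same share of positive classifications, even when their score distributions (and base rates) differ. The sketch below uses synthetic scores and is not a reconstruction of the paper's SAPMOC models.

```python
# Sketch: group-dependent thresholding to equalize positive-classification
# rates across groups. Scores below are synthetic.
import numpy as np

def group_thresholds_for_parity(scores, group, target_rate):
    """Return a per-group threshold so each group's positive rate is ~target_rate."""
    thresholds = {}
    for g in np.unique(group):
        s = scores[group == g]
        # threshold at the (1 - target_rate) quantile of that group's scores
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

rng = np.random.default_rng(0)
# Two groups with different base rates yield different score distributions.
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
group = np.array([0] * 500 + [1] * 500)

thr = group_thresholds_for_parity(scores, group, target_rate=0.3)
decisions = np.array([scores[i] >= thr[group[i]] for i in range(len(scores))])
for g in (0, 1):
    print(f"group {g}: threshold={thr[g]:.2f}, positive rate={decisions[group == g].mean():.2f}")
```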
arXiv (Cornell University), 2022
In prediction-based decision-making systems, different perspectives can be at odds: The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly. Balancing these two perspectives is a question of values. However, these values are often hidden in the technicalities of the implementation of the decision-making system. In this paper, we propose a framework to make these value-laden choices clearly visible. We focus on a setting in which we want to find decision rules that balance the perspective of the decision maker and of the decision subjects. We provide an approach to formalize both perspectives, i.e., to assess the utility of the decision maker and the fairness towards the decision subjects. In both cases, the idea is to elicit values from decision makers and decision subjects that are then turned into something measurable. For the fairness evaluation, we build on well-known theories of distributive justice and on the algorithmic literature to ask what a fair distribution of utility (or welfare) looks like. This allows us to derive a fairness score that we then compare to the decision maker's utility. As we focus on a setting in which we are given a trained model and have to choose a decision rule, we use the concept of Pareto efficiency to compare decision rules. Our proposed framework can both guide the implementation of a decision-making system and help with audits, as it allows us to resurface the values implemented in a decision-making system.
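The Pareto-comparison step described in the abstract can be illustrated with a small sketch: score candidate decision rules (here, simple thresholds on a predicted probability) by a decision-maker utility and a fairness score, then keep the rules no other rule dominates. The utility weights and the fairness score below are illustrative stand-ins, not the elicitation procedure proposed in the paper.

```python
# Sketch: comparing candidate decision rules by (utility, fairness) and
# retaining the Pareto-efficient ones. All data and weights are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)
y = rng.binomial(1, np.where(group == 0, 0.3, 0.5))        # different base rates
score = np.clip(y * 0.3 + rng.normal(0.4, 0.2, n), 0, 1)    # noisy predicted probability

def evaluate(threshold, benefit_tp=1.0, cost_fp=0.5):
    d = score >= threshold
    utility = benefit_tp * np.sum(d & (y == 1)) - cost_fp * np.sum(d & (y == 0))
    rates = [d[group == g].mean() for g in (0, 1)]
    fairness = 1.0 - abs(rates[0] - rates[1])                # 1 minus parity gap
    return utility, fairness

candidates = [(t, *evaluate(t)) for t in np.linspace(0.1, 0.9, 17)]
pareto = [c for c in candidates
          if not any(o[1] >= c[1] and o[2] >= c[2] and o != c for o in candidates)]
for t, u, f in pareto:
    print(f"threshold={t:.2f}  utility={u:7.1f}  fairness={f:.3f}")
```

Printing only the Pareto-efficient thresholds makes the value-laden choice explicit: any further narrowing to a single rule trades decision-maker utility against fairness towards decision subjects.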
Amicus Curiae, 2019
This article discusses conceptions of fairness in algorithmic decision-making, within the context of the UK’s legal system. Using practical operational examples of algorithmic tools, it argues that such practices involve inherent technical trade-offs over multiple, competing notions of fairness, which are further exacerbated by policy choices made by those public authorities who use them. This raises major concerns regarding the ability of such choices to affect legal issues in decision-making, and transform legal protections, without adequate legal oversight, or a clear legal framework. This is not to say that the law does not have the capacity to regulate and ensure fairness, but that a more expansive idea of its function is required.
Philosophy & Technology
We introduce a fairness criterion that we call Spanning. Spanning i) is implied by Calibration, ii) retains interesting properties of Calibration that some other ways of relaxing that criterion do not, and iii) unlike Calibration and other prominent ways of weakening it, is consistent with Equalized Odds outside of trivial cases.
arXiv preprint arXiv:1112.2127, 2011
The paper serves as the first contribution towards the development of the theory of efficiency: a unifying framework for the currently disjoint theories of information, complexity, communication and computation. Realizing the defining nature of the brute force approach in the fundamental concepts in all of the above mentioned fields, the paper suggests using efficiency or improvement over the brute force algorithm as a common unifying factor necessary for the creation of a unified theory of information manipulation. By defining such diverse terms as randomness, knowledge, intelligence and computability in terms of a common denominator we are able to bring together contributions from Shannon, Levin, Kolmogorov, Solomonoff, Chaitin, Yao and many others under a common umbrella of the efficiency theory.
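The central notion here, efficiency as improvement over the brute-force approach, can be made concrete with a toy comparison; the example below (linear versus binary search, with the comparison-count ratio standing in for an efficiency measure) is an illustrative assumption, not the paper's formal definition.

```python
# Toy illustration: "efficiency" as improvement over brute force, measured
# here as the ratio of comparisons used by linear (brute-force) search to
# comparisons used by binary search on sorted data.

def linear_search_comparisons(items, target):
    for i, x in enumerate(items, start=1):
        if x == target:
            return i                      # comparisons used before finding target
    return len(items)

def binary_search_comparisons(items, target):
    comparisons, lo, hi = 0, 0, len(items)
    while lo < hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return comparisons

data = list(range(1_000_000))
target = 987_654
brute = linear_search_comparisons(data, target)
clever = binary_search_comparisons(data, target)
print(f"brute force: {brute} comparisons, binary search: {clever}, "
      f"improvement factor ~{brute / clever:.0f}x")
```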
Minds and Machines
Data Mining and Knowledge Discovery , 2020
Journal of Applied Mathematics, 2014
Bulletin of Electrical Engineering and Informatics
New Statesman, 2020
Res Publica, 2023
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019
MinMax fairness: from Rawlsian Theory of Justice to solution for algorithmic bias, 2022
Computers & Education, 2004
Journal of Law, Technology and Policy, 2018