
Richard Warner
Richard Warner is Professor and Norman and Edna Freehling Scholar at the Chicago-Kent College of Law, where he is also the Faculty Director of Chicago-Kent’s Center for Law and Computers. From 1994 to 1996, he was president of InterActive Computer Tutorials, a software company. From 1998 to 2000, he was director of Building Businesses on the Web, an Illinois Institute of Technology executive education program. He was the principal investigator for "Using Education to Combat White Collar Crime," a U.S. State Department grant devoted to combating money laundering in Ukraine from 2000 to 2006. He is currently a member of the U.S. Secret Service’s Electronic and Financial Crimes Taskforce. He is the co-founder and Director of the School of American Law, and the Co-Director of the Center for National Security and Human Rights. He holds a B.A. (English literature) from Stanford, a Ph.D. (Philosophy) from the University of California, Berkeley, and a J.D. from the University of Southern California. His most recent books are (all co-authored with Robert Sloan) Unauthorized Access: The Crisis in Online Privacy and Security, Data Breaches: Why Don’t We Defend Better?, and The Privacy Fix: How to Preserve Privacy in the Onslaught of Surveillance.
Papers by Richard Warner
Our explanation is that the foundation is incomplete. Patterns of scholarship have multiple explanations, but an incomplete foundation is a key factor in the case of privacy in public. You cannot build on what will not bear the weight. The missing foundational element is common knowledge. Common knowledge is the recursive belief state of two or more parties knowing, knowing they know, knowing they know they know, and so on potentially ad infinitum.
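As a formal gloss on that recursive characterization (our summary of the standard epistemic-logic formulation, not language from the paper itself), writing E_G(p) for “everyone in group G knows p”:

```latex
% Iterated formulation: everyone knows p, everyone knows that everyone
% knows p, and so on without end.
C_G(p) \;\equiv\; E_G(p) \wedge E_G(E_G(p)) \wedge E_G(E_G(E_G(p))) \wedge \dots

% Equivalent fixed-point formulation: p is common knowledge exactly when
% everyone knows both p and that p is common knowledge.
C_G(p) \;\equiv\; E_G\big(p \wedge C_G(p)\big)
```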
Appeals to common knowledge in the privacy literature are nonexistent. That lack of attention is of a piece with a general tendency to overlook common knowledge. Apart from game theory, philosophy, and sociology, common knowledge has had little visibility—even though, as an influential article notes, “much of social life is affected by common-knowledge generators, . . . [and] an acknowledgement of the role of common knowledge in enabling coordination can unify and explain a variety of seemingly unrelated and puzzling phenomena.”
Unifying and explaining is our goal as we show how to use norms to preserve informational privacy in the face of pervasive surveillance. Pervasive surveillance subverts the power of social roles to generate common knowledge, thereby undermining coordination under informational norms, reducing privacy in public, and threatening self-realization. An essential step in preserving privacy is ensuring that social roles generate common knowledge that facilitates coordination under informational-privacy-creating social norms. We show how to take that step.
How do we regulate AI to preserve freedom? The article characterizes the relevant notion of acting freely and explains how AI interferes with free action so understood. To maximize freedom, AI-driven decisions should, ideally, meet the following Knowledge Condition: the decisions should be ones that those subject to them would know to be best justified after unimpaired reasoning under ideal conditions of adequate time and information. The Knowledge Condition will not be fulfilled across the board in contemporary societies characterized by conflicting moral, social, and political views. It can at best be approximated, and the article explains what counts as approximation. Significantly, society-wide approximation is unlikely unless people share sufficiently similar (not necessarily the same) standards of justification. In contemporary, highly fractionated societies, that would require significant social change.
So why have twenty years of criticism been so ineffective in turning the tide from Notice and Choice to the social norm alternative? One plausible factor is that the Notice and Choice criticisms detail the flaws but do not adequately motivate the turn to social norms. A motivationally compelling critique would show how and why the failure of Notice and Choice, properly understood, reveals the undeniable need for the collective control alternative provided by social norms. Such a critique does not yet exist in the Notice and Choice literature. Notice and Choice Must Go: The Collective Control Alternative remedies that lack.
Beyond Bias adapts and reexpresses fairness constraints from the Yale economist John Roemer. As Roemer notes in Equality of Opportunity, a conception of “equality of opportunity . . . prevalent today in Western democracies . . . says that society should do what it can to ‘level the playing field’ among individuals who compete for positions.” Beyond Bias shows that AI systems can unfairly tilt the playing field. The reason lies in the pervasive (and unavoidable) use of “proxy variables”—e.g., using credit ratings to predict driving safety (as many insurance companies do). The credit ratings are the substitute—the proxy—for details about individuals’ driving practices. Beyond Bias is the first article to apply a level-playing-field concept of fairness to issues of fairness in AI systems.
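To make the proxy idea concrete, here is a minimal, purely hypothetical sketch, not drawn from Beyond Bias: an insurer that never observes driving behavior fits a model on a proxy (credit score), so its risk predictions end up tracking the proxy rather than the trait it actually cares about. The data, variable names, and model choice are illustrative assumptions only.

```python
# Hypothetical illustration of a proxy variable (synthetic data throughout).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Unobserved trait the insurer actually cares about: how safely someone drives.
driving_safety = rng.normal(size=n)

# Observed proxy: a credit score only loosely correlated with driving safety.
credit_score = 650 + 50 * (0.4 * driving_safety + 0.9 * rng.normal(size=n))

# Outcome: whether a claim is filed, driven by the unobserved trait.
filed_claim = rng.random(n) < 1 / (1 + np.exp(2.0 * driving_safety))

# The insurer can train only on the proxy, not on driving safety itself.
model = LogisticRegression().fit(credit_score.reshape(-1, 1), filed_claim)

# Two equally safe drivers with different credit histories now get different
# predicted risks, and hence different premiums: the tilted playing field.
print(model.predict_proba(np.array([[600.0], [750.0]]))[:, 1])
```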
Beyond Bias briefly reviews the history of the use of proxy variables to evaluate consumers from the late Nineteenth Century to the present. It was already clear at the close of the Nineteenth Century that proxy-driven analysis could make seemingly unrelated aspects of one’s life “have a profound impact on [one’s] future potential in matters economic or social,” as Dan Bouk notes in HOW OUR DAYS BECAME NUMBERED: RISK AND THE RISE OF THE STATISTICAL INDIVIDUAL. The concern was that proxy-driven analysis would unfairly tilt the playing field, and that concern continues to this day.
Beyond Bias outlines a regulatory approach that ensures level-playing-field fairness by imposing its mathematical constraints on AI systems.
A common knowledge-based approach to pragmatics would answer at least the following question: What informative generalizations are possible about the role of contexts in generating common knowledge relevant to conversational implicatures? The question is important. “Much has been learned about these domains of psychology from a focus on the problem of altruistic cooperation and the mechanisms of reciprocity. We hope that comparable insights are waiting to be discovered by psychologists as they investigate the problem of mutualistic cooperation, and the mechanisms of common knowledge are—as we might say—put out there” (Thomas et al., “The Psychology of Coordination and Common Knowledge”).
Our starting point for addressing fairness is the economist John Roemer’s observation in Equality of Opportunity that a conception of “equality of opportunity . . . prevalent today in Western democracies . . . says that society should do what it can to ‘level the playing field’ among individuals who compete for positions.” Does the insurance company unfairly tilt the playing field against Sally when it uses her credit score to set her insurance premium? More generally, as the Sally example illustrates, one factor that affects level-playing-field fairness is the social structure of information processing itself. The use of proxies can profoundly alter the social structure of information processing. When does their use do so unfairly? To address that question, we adapt an approach suggested in an influential article by the computer scientist Cynthia Dwork (who cites Roemer as a source of her approach).
Computer science has recently seen an explosion of articles about AI and fairness, and one of our goals is to bring those discussions more centrally into legal scholarship. It may seem we have chosen badly, however. One criticism of Dwork et al. is that the fairness criterion they offer is of little practical value since it requires determining relevant differences among people in ways that are—or at least appear to be—highly problematic in real-life cases. We defend the approach against this criticism by showing how a regulatory process addressing the use of proxies in AI could make reasonable determinations of relevant differences among individuals and assign an important role to the Dwork et al. criterion of fairness. The regulatory process would promote level-playing-field fairness even in the face of proxy-driven AI.
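For readers who do not know the Dwork et al. article (“Fairness Through Awareness”), the constraint at issue, that relevantly similar individuals be treated similarly, is standardly stated as a Lipschitz condition. The formulation below is our gloss on that standard statement, not the article’s own notation:

```latex
% A (possibly randomized) decision rule M maps each individual x to a
% distribution over outcomes. M satisfies individual fairness when
% individuals who are close under a task-specific similarity metric d
% receive outcome distributions that are close under a distance D:
D\big(M(x), M(y)\big) \;\le\; d(x, y) \quad \text{for all individuals } x, y.
```

The criticism rehearsed above targets the metric d: the condition has practical bite only if someone can say, for the task at hand, which differences between individuals are relevant and how much weight they deserve, which is what the proposed regulatory process is meant to supply.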
[T]he accountability mechanisms and legal standards that govern decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed primarily to oversee human decisionmakers. Many observers have argued that our current frameworks are not well adapted for situations in which a potentially incorrect, unjustified, or unfair outcome emerges from a computer. Citizens, and society as a whole, have an interest in making these processes more accountable. If these new inventions are to be made governable, this gap must be bridged. (Joshua A. Kroll et al., Accountable Algorithms, 165 U. PA. L. REV. 633 (2017).)
How do you build the bridge?
We divide the bridge-building task into three questions. First, what features of the use of predictive analytics significantly contribute to “incorrect, unjustified, or unfair” outcomes? Second, how should one regulate those features to make outcomes more acceptable? Third, how can one ensure that the use of predictive analytics sufficiently respects human freedom? The concern with freedom arises because you are not free when you are subject to the arbitrary will of another, and predictive analytics is no exception. It violates your freedom when it pushes you down an arbitrary and capricious path. It also violates your freedom when you have no practical alternative but to submit to its decisions without knowing whether there are adequate reasons for the decisions, reasons that show the decisions are not arbitrary and capricious. You are also not free if you are subject to the will of another and denied any knowledge of whether that will is arbitrary and capricious. Thus, respecting freedom requires meeting, or at least sufficiently closely approximating, the following knowledge condition: those subject to the use of predictive analytics know that there are adequate reasons for its decisions.
We answer the first question by “profiling” uses of predictive analytics. We adapt the idea of profiling people. A profile of a person is a summary of characteristics relevant to evaluating and predicting the person’s behavior. Our profile consists of five features that significantly affect the extent to which a system will yield “incorrect, unjustified, or unfair” decisions. We answer the second question by explaining how to control predictive systems by regulating the features the profile identifies. Along with others, we propose that a government agency regulate the use of predictive systems. The novel feature of our approach is the use of legal regulation to unify consumer demand in ways that create a type of norm extensively studied in game theory: a coordination norm. The norm coordinates consumer/seller activity, thereby creating non-legal, market incentives to minimize “incorrect, unjustified, or unfair” decisions. Our answer to the second question is the basis for the answer to the third. We appeal to coordination norms to explain how to meet the knowledge condition. We show how consumers can come to know that adequate reasons exist through being participants in a coordination norm. The norm makes predictive systems transparent in a way that facilitates fulfillment of the knowledge condition and avoids unfair and otherwise objectionable outcomes.
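A minimal illustration of the game-theoretic idea (our stylized example with made-up payoffs, not the article’s): in a coordination game each side does best by matching what the other expects, so once norm-compliant behavior is mutually expected it is self-enforcing.

```latex
% Stylized consumer/seller game; entries are (consumer payoff, seller payoff).
\begin{array}{l|cc}
 & \text{Seller: comply with norm} & \text{Seller: ignore norm} \\ \hline
\text{Consumer: demand norm} & (3, 3) & (0, 1) \\
\text{Consumer: accept anything} & (1, 0) & (2, 2) \\
\end{array}
% Both (demand, comply) and (accept, ignore) are equilibria. A coordination
% norm, backed by unified consumer demand, selects and sustains the
% norm-compliant equilibrium once everyone expects everyone else to play it.
```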