Unauthorized access to online information costs billions of dollars per year. Software vulnerabilities are a key cause. Software currently contains an unacceptable number of vulnerabilities. The standard explanation notes that the typical software business strategy is to keep costs down and be the first to market, even if that means the software has significant vulnerabilities. Many endorse the following remedy: make software developers liable for negligent or defective design. This remedy is unworkable. We offer an alternative based on an appeal to product-risk norms. Product-risk norms are social norms that govern the sale of products. A key feature of such norms is that they ensure that the design and manufacture of products impose only acceptable risks on buyers. Unfortunately, mass-market software sales are not governed by appropriate product-risk norms; as a result, market conditions exist in which sellers profitably offer vulnerability-ridden software. This analysis entails a solution: ensure that appropriate norms exist. We contend that the best way to do so is a statute based on best practices for software development, and we define the conditions under which the statute would give rise to the desired norm. Why worry about creating the norm? Why not just legally require that software developers conform to best practices? The answer is that enforcement of the legal requirement can be difficult, costly, and uncertain; once the norm is in place, however, buyers and software developers conform on their own initiative.
- by Richard Warner
Predictions of transformative change surround Big Data. It is routine to read, for example, that "with the coming of Big Data, we are going to be operating very much out of our old, familiar ballpark." 1 But, as both Niels Bohr and Yogi Berra are reputed to have observed, "Prediction is difficult, especially about the future." And, they might have added, especially regarding the effects of major technological change. In the Railroad Mania of nineteenth-century England, for example, some made the typical prediction that a new communication network meant the end of an old one: namely, that face-to-face communication over the emerging railroad network would entail a drastic drop in postal mail. In fact, mail volume increased. 2 Given the difficulty of forecasting transformative change, we opt for a "prediction" about the present: Big Data already presents a "new" and important privacy challenge. As the scare quotes indicate, the challenge is not truly new. What Big Data does is compel confrontation with a difficult trade-off problem that has been glossed over or even ignored up to now. It does so because both the potential benefits and risks from Big Data analysis are so much larger than anything we have seen before.
- by Richard Warner
Predictive analytics (data mining, machine learning, and artificial intelligence) drives algorithmic decision making. Its “all-encompassing scope already reaches the very heart of a functioning society” (ERIC SIEGEL, PREDICTIVE ANALYTICS: THE POWER TO PREDICT WHO WILL CLICK, BUY, LIE, OR DIE (2016)). Unfortunately,
the accountability mechanisms and legal standards that govern decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed primarily to oversee human decisionmakers. Many observers have argued that our current frameworks are not well adapted for situations in which a potentially incorrect, unjustified, or unfair outcome emerges from a computer. Citizens, and society as a whole, have an interest in making these processes more accountable. If these new inventions are to be made governable, this gap must be bridged. (Joshua A. Kroll et al., Accountable Algorithms, 165 UNIV. PA. LAW REV. 663 (2017).)
How do you build the bridge?
We divide the bridge-building task into three questions. First, what features of the use of predictive analytics significantly contribute to “incorrect, unjustified, or unfair” outcomes? Second, how should one regulate those features to make outcomes more acceptable? Third, how can one ensure that the use of predictive analytics sufficiently respects human freedom? The concern with freedom arises because you are not free when you are subject to the arbitrary will of another, and predictive analytics is no exception. It violates your freedom when it pushes you down an arbitrary and capricious path. It also violates your freedom when you have no practical alternative but to submit to its decisions without knowing whether there are adequate reasons for those decisions, reasons that show the decisions are not arbitrary and capricious. You are also not free if you are subject to the will of another and denied any knowledge of whether that will is arbitrary and capricious. Thus, respecting freedom requires meeting, or at least sufficiently closely approximating, the following knowledge condition: those subject to the use of predictive analytics know that there are adequate reasons for its decisions.
We answer the first question by “profiling” uses of predictive analytics. We adapt the idea of profiling people. A profile of a person is a summary of characteristics relevant to evaluating and predicting the person’s behavior. Our profile consists of five features that significantly affect the extent to which a system will yield “incorrect, unjustified, or unfair” decisions. We answer the second question by explaining how to control predictive systems by regulating the features the profile identifies. Along with others, we propose that a government agency regulate the use of predictive systems. The novel feature of our approach is the use of legal regulation to unify consumer demand in ways that create a type of norm extensively studied in game theory, a coordination norm. The norm coordinates consumer/seller activity, thereby creating non-legal, market incentives to minimize “incorrect, unjustified, or unfair” decisions. Our answer to the second question is the basis for the answer to the third. We appeal to coordination norms to explain how to meet the knowledge requirement. We show how consumers can come to know that adequate reasons exist through being participants in a coordination norm. The norm makes predictive systems transparent in a way that facilitates fulfillment of the knowledge requirement and avoids unfair and otherwise objectionable outcomes.
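For readers unfamiliar with the game-theoretic background the abstract invokes, a standard textbook two-player coordination game (our illustration, not an example taken from the article; payoffs are listed as buyer, seller) shows the kind of problem a coordination norm solves:

\[
\begin{array}{c|cc}
 & \text{Seller adopts practice } A & \text{Seller adopts practice } B \\ \hline
\text{Buyers expect } A & (1,\,1) & (0,\,0) \\
\text{Buyers expect } B & (0,\,0) & (1,\,1)
\end{array}
\]

Both diagonal outcomes are equilibria: once expectations align, neither side can do better by deviating unilaterally. A coordination norm makes one equilibrium the mutually expected one, which is the role the abstract assigns to the norm that legal regulation would create between consumers and sellers.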
- by Richard Warner
Legal scholars have argued for twenty years that automated processing requires more transparency, but it is far from obvious what form such transparency should take [1]. The rise of data mining and predictive analytics makes the problem of transparency all the more pressing. Decision making is often divorced from immediate human control. In such cases, the human control consists only in the design decisions built into the predictive analytics algorithm and whatever post-decision review procedures, if any, there might be. Examples include the extension of credit, marketing and advertising decisions, sentencing and parole decisions, the selection of air travelers for search, the choice of taxpayers for audits, the targeting of individuals and neighborhoods for police scrutiny, welfare and financial aid decisions, public health decisions, employee hiring, visa decisions, the counting of votes, political campaign decisions, and business planning and supply chain management [1]. Predictive analytics has already yielded significant benefits. We take it for granted that it will continue to do so, and it is in part for that reason well entrenched. There are significant costs as well, however, and implementing an acceptable balance is an urgent problem. Solving that problem requires answers to two questions. What are the criteria of acceptability? And how do you tell whether a predictive system meets those criteria? Any answer to the second question requires that predictive systems be transparent. A physical item is transparent if you can see through it. By analogy, a decision procedure is transparent if the risks and benefits associated with it are readily ascertainable. What qualifies as "readily ascertainable" varies with the context; however, it is clear that, in general, predictive systems do not currently meet this requirement. As Kroll et al. note, the accountability mechanisms and legal standards that govern decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed primarily to oversee human decisionmakers. Many observers have argued that our current frameworks are not well adapted for situations in which a potentially incorrect, unjustified, or unfair outcome emerges from a computer. Citizens, and society as a whole, have an interest in making these processes more accountable. If these new inventions are to be made governable, this gap must be bridged [1]. We propose a way to bridge the gap. Our approach, spelled out in Sections III and IV, gives a central role to informational norms. This may seem far removed from concerns with algorithmic transparency, but part of our point is that it is not. We confine our attention to consumers engaged in commercial transactions with businesses, because this already raises most of the tradeoff questions between utility and acceptability that concern us. We propose the following condition on the transparency of predictive systems in such cases. Consumer-transparency: consumers should be able to readily ascertain the risks and benefits associated with the predictive systems to which they are subject. The rationale for this requirement is that consumers' decisions should be free and informed [2]. We put aside the important issue of how to ensure adequately free choice. Our concern here is with informed decisions. We consider how to ensure that consumer-transparency is fulfilled with regard to informational privacy.
Informational privacy consists in the ability to control what information others have about you and what they do with it [3, p. 7]. The pervasive use of predictive analytics considerably reduces that ability.
- by Richard Warner
Developers of predictive systems use proxies when they cannot directly observe attributes relevant to predictions they would like to make. Proxies have always been used, but today their use has the consequence that one area of one’s life can have significant consequences for another, seemingly disconnected area, and that raises concerns about fairness and freedom, as the following example illustrates. Sally defaults on a $50,000 credit card debt and declares bankruptcy. The debt was the result of paying for lifesaving treatment for her daughter, and despite her best efforts, she could not afford even the minimum credit card payments. A credit scoring system predicts that Sally is a poor risk even though post-bankruptcy Sally is a good risk—her daughter having recovered. Sally’s car insurance company uses credit ratings as a proxy for safe driving (as many US insurance companies in fact do). Is it fair that Sally’s life-saving effort forces her down a disadvantageous path?
Our starting point for addressing fairness is the economist John Roemer’s observation in Equality of Opportunity that a conception of “equality of opportunity . . . prevalent today in Western democracies . . . says that society should do what it can to ‘level the playing field’ among individuals who compete for positions.” Does the insurance company unfairly tilt the playing field against Sally when it uses her credit score to set her insurance premium? More generally, as the Sally example illustrates, one factor that affects level-playing-field fairness is the social structure of information processing itself. The use of proxies can profoundly alter the social structure of information processing. When does their use do so unfairly? To address that question, we adapt an approach suggested in an influential article by the computer scientist Cynthia Dwork (who cites Roemer as a source of her approach).
Computer science has recently seen an explosion of articles about AI and fairness, and one of our goals is to bring those discussions more centrally into legal scholarship. It may seem we have chosen badly, however. One criticism of Dwork et al. is that the fairness criterion they offer is of little practical value since it requires determining relevant differences among people in ways that are—or at least appear to be—highly problematic in real-life cases. We defend the approach against this criticism by showing how a regulatory process addressing the use of proxies in AI could make reasonable determinations of relevant differences among individuals and assign an important role to the Dwork et al. criterion of fairness. The regulatory process would promote level-playing-field fairness even in the face of proxy-driven AI.
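To fix ideas, here is one standard statement of the individual-fairness (Lipschitz) condition from Dwork et al.'s Fairness Through Awareness; the notation is supplied for illustration and is not taken from this article:

\[
D\!\left(M(x),\, M(y)\right) \;\le\; d(x,\,y) \qquad \text{for all individuals } x, y,
\]

where \(M\) maps each individual to a distribution over outcomes, \(d\) is a task-specific metric measuring how relevantly similar two individuals are, and \(D\) is a distance between outcome distributions (for example, total variation). The practical difficulty the criticism points to is specifying \(d\), that is, deciding which differences between people are relevant; the regulatory process proposed here is meant to supply those determinations.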
- by Richard Warner and +2
Suppose a speaker S and an audience A are in a communication coordination problem. That is, for some proposition p, they each prefer that S mean that p and that A believe p in response. How do they coordinate their thought and action to solve the problem? The Gricean answer is that they reason their way to the solution. Pragmatics makes a similar assumption. “Pragmatics involves perception augmented by some species of 'ampliative' inference . . . a sort of reasoning” (Kepa Korta and John Perry, "Pragmatics"). There are two objections. The first is that it is not plausible to attribute such reasoning to speakers and audiences. The second objection grants, for the sake of argument, that speakers and audiences reason in the required ways. The objection is that this is not sufficient to solve coordination problems since a speaker and an audience may not know how each has reasoned. The problem is well known in game theory, which typically solves it by assuming the parties’ preferences are common knowledge. Common knowledge is the recursive belief state in which people know something, know they know it, know they know they know it, ad infinitum. I propose a similar solution. When people interact as speaker and audience, common knowledge arises from their perceptions of each other as fulfilling the roles of speaker and audience. The account of how common knowledge arises entails Grice’s general characterization of conversational implicatures. More precisely, it entails that speakers and audiences involved in communication coordination problems typically commonly know that a relevant instance of the conditions holds. The common knowledge approach requires attributing reasoning to speakers and audiences, but it keeps those attributions to a plausible minimum. Thus the common knowledge approach is a viable alternative to current approaches that attribute extensive reasoning to speakers and audiences.
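The recursive structure of common knowledge can be made explicit with the standard epistemic-logic rendering (a textbook formulation supplied for illustration, not notation used in the paper). Letting \(E\,p\) mean "everyone in the group knows that \(p\)":

\[
C\,p \;\equiv\; E\,p \,\wedge\, E E\,p \,\wedge\, E E E\,p \,\wedge\, \cdots \;=\; \bigwedge_{n \ge 1} E^{\,n} p .
\]

On the proposal above, the speaker's and audience's mutual perception of each other as occupying the speaker and audience roles is what generates this iterated structure, without either party having to run through the iterations explicitly.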
A common knowledge-based approach to pragmatics would answer at least the following question: What informative generalizations are possible about the role of contexts in generating common knowledge relevant to conversational implicatures? The question is important. “Much has been learned about these domains of psychology from a focus on the problem of altruistic cooperation and the mechanisms of reciprocity. We hope that comparable insights are waiting to be discovered by psychologists as they investigate the problem of mutualistic cooperation, and the mechanisms of common knowledge are—as we might say—put out there” (Thomas et al., “The Psychology of Coordination and Common Knowledge”).
- by Richard Warner
Artificial intelligence (AI) systems can discriminate against protected classes—a fact that has sparked an extensive literature about bias in AI. Bias, as important as it is, is a special case of the overall problem of social justice. Beyond Bias focuses on the general problem. It incorporates contributions from the extensive discussion of AI and fairness in the computer science literature. In particular, it draws on Fairness Through Awareness, an influential article by the Harvard computer scientist Cynthia Dwork and her co-authors. Adapting Dwork’s approach, Beyond Bias reexpresses intuitive, well-motivated fairness constraints in a more mathematical way that shows how to apply the constraints to mathematically and computationally complex AI systems. The mathematics nonetheless uses only elementary arithmetic (unlike Dwork et al.).
Beyond Bias adapts the fairness constraints that it reexpresses from the Yale economist John Roemer. As Roemer notes in Equality of Opportunity, a conception of “equality of opportunity . . . prevalent today in Western democracies . . . says that society should do what it can to ‘level the playing field’ among individuals who compete for positions.” Beyond Bias shows that AI systems can unfairly tilt the playing field. The reason lies in the pervasive (and unavoidable) use of “proxy variables”—e.g., using credit ratings to predict driving safety (as many insurance companies do). The credit ratings are the substitute—the proxy—for details about individuals’ driving practices. Beyond Bias is the first article to apply a level-playing-field concept of fairness to issues of fairness in AI systems.
Beyond Bias briefly reviews the history of the use of proxy variables to evaluate consumers from the late Nineteenth Century to the present. It was already clear at the close of the Nineteenth Century that proxy-driven analysis could make seemingly unrelated aspects of one’s life “have a profound impact on [one’s] future potential in matters economic or social,” as Dan Bouk notes in HOW OUR DAYS BECAME NUMBERED: RISK AND THE RISE OF THE STATISTICAL INDIVIDUAL. The concern was that proxy-driven analysis would unfairly tilt the playing field, and that concern continues to this day.
Beyond Bias outlines a regulatory approach that ensures level playing field fairness by incorporating its mathematical constraints on AI systems.
- by Richard Warner and +1
Over twenty years of criticism conclusively confirm that Notice and Choice results in, as the law professor Fred Cate puts it, “the worst of all worlds: privacy protection is not enhanced, individuals and businesses pay the cost of bureaucratic laws.” So why is it still the dominant legislative and regulatory approach to ensuring adequate informational privacy online? Recent implementations of Notice and Choice include the European Union’s General Data Protection Regulation and California’s Consumer Privacy Act. There is a well-known alternative (advanced by Helen Nissenbaum and others) that sees informational privacy as arising from social norms that require conformity to shared expectations about selective information flows.
So why have twenty years of criticism been so ineffective in turning the tide from Notice and Choice to the social norm alternative? One plausible factor is that the criticisms of Notice and Choice detail its flaws but do not adequately motivate the turn to social norms. A motivationally compelling critique would show how and why the failure of Notice and Choice, properly understood, reveals the undeniable need for the collective control alternative provided by social norms. That critique does not yet exist in the Notice and Choice literature. Notice and Choice Must Go: The Collective Control Alternative remedies that lack.
- by Richard Warner
AI-driven decisions can draw data from virtually any area of your life to make a decision about virtually any other area of your life. That creates fairness issues. Effective regulation to ensure fairness requires that AI systems be transparent. That is, regulators must have sufficient access to the factors that explain and justify the decisions. One approach to transparency is to require that systems be explainable, as that concept is understood in computer science. A system is explainable if one can provide a human-understandable explanation of why it makes any particular prediction. Explainability should not be equated with transparency. Instead, we define transparency for a regulatory purpose. A system is transparent for a regulatory purpose (r-transparent) when and only when regulators have an explanation, adequate for that purpose, of why it yields the predictions it does. Explainability remains relevant to transparency but turns out to be neither necessary nor sufficient for it. The concepts of explainability and r-transparency combine to yield four possibilities: explainable and either r-transparent or not; and not explainable and either r-transparent or not. Combining r-transparency with ideas from the Harvard computer scientist Cynthia Dwork, we propose four requirements on AI systems.
- by Richard Warner
In Jorge Luis Borges’s short story “The Lottery in Babylon,” Babylonians submit every sixty days to a lottery that allocates rewards, requirements, and punishments, including changes in political and socio-economic positions. Various aspects of their lives affect the lottery’s outcomes, but they do not know which aspects those are. The result is a significant loss of freedom to pursue long-term plans. One cannot count on completing one’s education or pursuing a career as a lawyer, doctor, or professional chess player, for example. AI-driven decisions involve a similar—if less drastic—loss of freedom. Artificial intelligence is applied to almost any aspect of life that involves decision making, and, like the lottery, it interferes with freedom by disrupting the pursuit of long-term plans. The lottery interferes with the pursuit of long-term plans because it has this unavoidable consequence: in unknowable ways, an event in virtually any area of your life may figure in a lottery outcome affecting virtually any other area. AI-driven decisions are similar: in unavoidable and difficult-to-know ways, an event in virtually any area may affect a decision about virtually any other area. This is a consequence of the extensive use of proxy variables (proxies). A proxy variable is a stand-in for something you want to predict but that is either impossible or too difficult to measure directly. The credit rating company Lenddo, for example, uses how often people drain their cell phone battery as a proxy to predict how likely they are to default on a debt.
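A minimal sketch may help make the proxy-variable idea concrete. The data, variable names, and model below are hypothetical and purely illustrative; this is not Lenddo's method or any real scoring system, only a toy demonstration of a model that predicts an outcome from a stand-in feature rather than from the attribute it actually cares about.

```python
# Toy illustration of a proxy variable (synthetic data, hypothetical names).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Unobserved attribute the decision actually turns on.
financial_discipline = rng.normal(size=n)

# Observable proxy: assumed, for illustration only, to correlate with the
# unobserved attribute, plus noise unrelated to creditworthiness.
battery_drains_per_month = 10 - 2 * financial_discipline + rng.normal(scale=3, size=n)

# Outcome: default is driven by the unobserved attribute, not by the proxy itself.
default = (rng.normal(size=n) - financial_discipline > 1).astype(int)

# The deployed model never sees financial_discipline -- only the proxy.
model = LogisticRegression().fit(battery_drains_per_month.reshape(-1, 1), default)

# Predicted default risk for someone who drains their battery 25 times a month.
print(model.predict_proba([[25.0]])[:, 1])
```

The point of the sketch is structural: whatever drives battery use in one area of life now feeds a decision in a seemingly disconnected area, which is exactly the cross-domain effect the article identifies.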
How do we regulate AI to preserve freedom? The article characterizes the relevant notion of acting freely and explains how AI interferes with free action so understood. To maximize freedom, AI-driven decisions should ideally meet the following Knowledge Condition: the decisions should be ones that those subject to them would know to be best justified after unimpaired reasoning under ideal conditions of adequate time and information. The Knowledge Condition will not be fulfilled across the board in contemporary societies characterized by conflicting moral, social, and political views. It can at best be approximated, and the article explains what counts as approximation. Significant society-wide approximation is unlikely unless people share sufficiently similar (not necessarily the same) standards of justification. In contemporary, highly fractionated societies, that would require significant social change.
- by Richard Warner
Technology now gives others considerable power to determine when personal information is collected, how it is used, and to whom it is distributed. Privacy advocates sound the alarm in regard to both the governmental and private sectors. 7 I focus entirely on the latter and, within that, exclusively on commercial interactions. Private sector commercial transactions merit separate consideration. Not only do they raise complex and important issues, they also have not been as extensively examined as governmental intrusions. 8 Privacy advocates raise a diverse array of concerns: "[T]heorists have proclaimed the value of privacy to be protecting intimacy, friendship, dignity, individuality, human relationships, autonomy, freedom, self-development, creativity, independence, imagination, counterculture, eccentricity, freedom of thought, democracy, reputation, and psychological well-being." 9 The diversity of concerns reflects the remarkably broad effect of the power others now have over one's personal information. One important reason the effects are so far reaching is that information-processing practices now share a distinctive and sociologically crucial quality: they not only collect and record details of personal information; they also are organized to provide bases for action toward the people concerned. Systematically harvested personal information, in other words, furnishes bases for institutions to determine what treatment to mete out to each individual. . . . Mass surveillance is a distinctive and consequential feature of our times. Information held by different merchants, insurers, and government agencies can readily be pooled, opening the way to assembling all the recorded information concerning an individual in a single digital file that can easily be retrieved and searched. It should soon be possible—maybe it is already possible—to create comprehensive electronic dossiers for all Americans, similar to the sort of dossier the FBI compiles when it conducts background investigations of applicants for sensitive government employment or investigates criminal suspects. The difference is that the digitized dossier that I am imagining would be continuously updated.
We present the curriculum, pilot offering, and initial evaluation of a CS + Law based CS 1 course that was team taught by a Computer Science professor and a law school professor. Relevant legal topics were interwoven through the course. The results from this initial offering suggest that this sort of highly interdisciplinary offering can be successful both in computing education and in making students realize the relevance of Computer Science to the broader world beyond IT.
- by Richard Warner
This Article owes a great deal to my colleague, Richard Wright, with whom I have for years discussed these issues. 1. The "in principle" is essential; there is no claim that you can immediately produce the answer when asked, or that you thought of it before you acted. The claim is that you could answer after sufficient unimpaired reflection.