
The Proxy Problem: Fairness and Artificial Intelligence

Abstract

Developers of predictive systems use proxies when they cannot directly observe attributes relevant to the predictions they would like to make. Proxies have always been used, but today their use means that one area of a person's life can have significant consequences for another, seemingly disconnected area, and that raises concerns about fairness and freedom, as the following example illustrates. Sally defaults on a $50,000 credit card debt and declares bankruptcy. The debt was the result of paying for lifesaving treatment for her daughter, and despite her best efforts, she could not afford even the minimum credit card payments. A credit scoring system predicts that Sally is a poor risk even though post-bankruptcy Sally is a good risk, her daughter having recovered. Sally's car insurance company uses credit ratings as a proxy for safe driving (as many US insurance companies in fact do). Is it fair that Sally's life-saving effort forces her down a disadvantageous path? Our starting point for addressing fairness is the economist John Roemer's observation in Equality of Opportunity that a conception of "equality of opportunity . . . prevalent today in Western democracies . . . says that society should do what it can to 'level the playing field' among individuals who compete for positions." Does the insurance company unfairly tilt the playing field against Sally when it uses her credit score to set her insurance premium? More generally, as the Sally example illustrates, one factor that affects level-playing-field fairness is the social structure of information processing itself. The use of proxies can profoundly alter that structure. When does their use do so unfairly? To address that question, we adapt an approach suggested in an influential article by the computer scientist Cynthia Dwork (who cites Roemer as a source of her approach). Computer science has recently seen an explosion of articles about AI and fairness, and one of our goals is to bring those discussions more centrally into legal scholarship. It may seem, however, that we have chosen badly. One criticism of Dwork et al. is that the fairness criterion they offer is of little practical value, since it requires determining relevant differences among people in ways that are, or at least appear to be, highly problematic in real-life cases. We defend the approach against this criticism by showing how a regulatory process addressing the use of proxies in AI could make reasonable determinations of relevant differences among individuals and assign an important role to the Dwork et al. criterion of fairness. Such a regulatory process would promote level-playing-field fairness even in the face of proxy-driven AI.
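For readers unfamiliar with the criterion at issue: Dwork et al.'s "Fairness Through Awareness" formalizes individual fairness as the requirement that similar individuals receive similar outcomes, expressed as a Lipschitz condition. In a sketch of the idea (the notation below is ours, following that paper), a classifier M maps each individual to a distribution over outcomes, d is a task-specific metric measuring how similar two individuals are for the purpose at hand, and D is a distance between distributions over outcomes; the criterion then requires

\[
D\big(M(x), M(y)\big) \le d(x, y) \quad \text{for all individuals } x, y.
\]

The "relevant differences among people" discussed above are precisely what the metric d must encode, and both the criticism we consider and our proposed regulatory response concern how d is to be determined in real-life cases.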