Articles in peer-reviewed international journals by Eva Erman
Philosophy & Technology, 2025

Current History, 2025
At the heart of the current AI boom is the steadily repeating mantra that we live in extraordinary times. Depending on who you ask, we seem to be just a few years away from unleashing AI technologies that will boost overall productivity, solve medical enigmas, turn politics on its head, or dispose of humankind. This sentiment was recently expressed by AI's poster boy Sam Altman, CEO of OpenAI, when he argued in The Washington Post with characteristic gravity that we currently "…face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology's benefits and opens access to it, or an authoritarian one, in which nations or movements that don't share our values use AI to cement and expand their power? There is no third option - and it's time to decide which path to take." Keeping regimes like Russia and China at bay, Altman explained, requires the U.S. to invest significantly in both digital infrastructure and human capital, as well as to help set up global AI institutions akin to the International Atomic Energy Agency (IAEA) or the Internet Corporation for Assigned Names and Numbers (ICANN). Importantly, this would not only be beneficial for the U.S. economy, but also create "a world shaped by a democratic vision for AI". The right kind of AI strategy would thus not only help "democratic AI" win over "authoritarian AI", but also help create a more democratic world. Altman is entirely right to conceptualize the governance of AI technology as a multi-level issue, and two key reasons explain why we should not underestimate the importance of global regulatory initiatives in particular. The first is that the AI industry is a truly global phenomenon, in the sense that it is driven by large multinational companies like Microsoft, Google, and Meta, who recruit talent from all over the world, train AI …

Res Publica, 2024
According to normative behaviourism, political theorists should ground their principles in behaviour rather than in thoughts, as is done in mainstream political theory. Focusing on 'real actions' of 'real people', normative behaviourism turns facts about observable patterns of behaviour into grounds for specific normative political principles. For this reason, this way of doing normative political theory has strong realist credentials, given its methods, values and ambitions. In fact, Jonathan Floyd argues that it is an improvement on political realism since it solves two problems that allegedly face many realists, namely, the legitimacy problem, i.e., how we should distinguish genuine acceptance of a political system from false acceptance, and the institutional problem, i.e., how we should translate political principles into viable political institutions. In this paper, we make two claims. First, normative behaviourism does not solve the legitimacy problem encountered by realists, because its solution rests on a flawed distinction between foundational principles and 'principles that matter', together with a problematic use of a Humean internal reasons approach. Second, normative behaviourism does not solve the institutional problem encountered by realists, because its solution is in fact far less feasible than realist prescriptions, since feasibility is interpreted as mere possibility. We wind up our analysis by showing that normative behaviourism encounters new problems that realist approaches typically do not face. First, normative behaviourism is a kind of closet utilitarianism, but with a more problematic value measure, which rests on universal principles of the kind that realists usually reject. Second, by arguing for democracy and equality, normative behaviourism runs the risk of becoming too demanding and unattractive for realists, who carefully separate democracy from political legitimacy, both conceptually and normatively, and argue that the latter does not require the former. We conclude that despite several aspects superficially attractive to the realist project, normative behaviourism fails in its attempt to supply an improved version of realism. The paper is structured as follows. In the first section, we explain the main features of normative behaviourism and what makes it a member of the 'realist family', according to its proponents (I). The second section refutes normative behaviourism's alleged solution to the legitimacy problem (II), while the third section does the same with regard to the institutional problem (III). In the fourth section, we wind up by bringing up two new problems that normative behaviourism encounters and that typically do not emerge for realists (IV).

…intolerable (Floyd 2017: 168-169). Consequently, political systems causing fewer instances of such behaviour are more defensible than those leading to more, and certainly, the system resulting in the minimum of such behaviour is, without doubt, more justifiable than any other (Floyd 2017: 169). Due to its reliance on patterns of thoughts, mainstream political theory is destined to fail, according to Floyd, because reasonable people diverge on the fundamentals of political normativity (Floyd 2017: 121). By contrast, normative behaviourism's reliance on patterns of behaviour is more promising, since they converge and are stable over time.
By examining our reactions to changes in our political surroundings, we can discern which systems we are less inclined towards, which ones we favour more, and perhaps even identify the one that suits us most effectively. From this, we may conclude that there exists at least one method to justify political principles, namely, the principles embodied by the most fitting system, that is, the system with the least insurrection and crime (Floyd 2017: 167). Normative behaviourism shares many features with realism and is therefore described as a member of the 'realist family'. The starting point for both approaches is the idea that we can learn about what we ought to do now and in the future by looking at the past, such as which political systems have had the least amount of insurrection and crime (normative behaviourism) and what have historically been the main stabilizing features of politics (realism). Both approaches start out from the idea that political theories should ground their political principles in 'real actions' of 'real people', thereby justifying principles with actions, rather than ground their political principles in 'hypothetical actions' of 'imaginary people', justifying actions with principles, as in mainstream political theory (Floyd 2022: 2). Hence, normative behaviourism shares Geuss' famous realist point of departure, according to which political theory must start from and be concerned with the way the political, economic, and social institutions actually operate in a society at a given time, rather than with how people ideally or rationally ought to be or act (Geuss 2008). In a similar fashion, it "begins with what …

Political Studies Review, 2024
Forthcoming in Political Studies Review. One recent debate in political philosophy centers on the question of whether there is a distinctively political normativity. Two main positions have emerged. The first conception argues that there is no distinctively political normativity in a strict sense, as political decisions and actions are ultimately evaluated on the basis of their conformity with more general moral norms, such as justice, fairness, equality and democracy. The second conception, a non-moral stance, maintains that there is a distinctive set of norms that applies specifically to political actions and decisions, which are not grounded in moral normativity. Instead, these norms are shaped by the nature of political institutions, processes, and practices.[1] Advocates of the non-moral view argue that political norms are necessary to ensure the legitimacy of political institutions, and that these norms cannot be captured by moral principles. However, critics of this view contend that political norms are ultimately a subset of more general moral norms and that there is no need to posit a distinctively political normativity dichotomous to moral normativity.[2]

[1] For realists advocating a distinctively political normativity in terms of a non-moral kind of …

Topoi: An International Review of Philosophy, 2024
In the recent debate on political normativity in political philosophy, two positions have emerged among political realists. According to the first view, political normativity is understood as orthogonal to moral normativity, and moral considerations do not figure in the reasons given in support of a political principle or a course of action in the political domain. Instead, theorists in this camp have been drawing on instrumental, functional or epistemic normativity in theorizing political normativity. According to the second view, moral norms and political norms are not dichotomous in this sense, as moral considerations may figure in the justification of a political principle or theory. The distinctness rather has to do with how moral norms and prescriptions are 'filtered through' the realities of politics such that they are altered by politics' constitutive features …
Cooperation & Conflict, 2024

Nature Machine Intelligence, 2024
Can non-state actors like multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that, while they may strengthen core values of democracy such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.

After a period of intense fascination with Artificial Intelligence (AI) applications, including Large Language Models (LLMs) such as ChatGPT, the public discussion is quickly turning toward the issue of the social, political, and ethical impact of these technologies. Multiple regulation and governance initiatives are under way at the national and regional levels. However, since cutting-edge AI development often takes place in multinational companies or international research labs, and AI technology creates cross-border externalities, an additional level of transboundary regulation and cooperation is …

International Studies Review, 2023
Artificial intelligence (AI) represents a technological upheaval with the potential to transform human society. It is increasingly viewed by states, non-state actors, and international organizations (IOs) as an area of strategic importance, economic competition, and risk management. While AI development is concentrated in a handful of corporations in the US, China, and Europe, the long-term consequences of AI implementation will be global. And while the technology is still only lightly regulated, state and non-state actors are beginning to negotiate global rules and norms to harness and spread AI's benefits while limiting its negative consequences. For example, in 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted recommendations on the ethics of AI, the Council of Europe laid the groundwork for the world's first legally binding AI treaty, and the European Union (EU) launched negotiations on comprehensive AI legislation. Our purpose in this article is to outline an agenda for research into the global governance of AI.[1] Advancing research on the global regulation of AI is imperative. The rules and arrangements that are currently being developed to regulate AI will have considerable impact on power differentials, the distribution of economic value, and the political legitimacy of AI governance for years to come. Yet there is currently little systematic knowledge on the nature of global AI regulation, the interests influential in this process, and the extent to which emerging arrangements can manage AI's consequences in a just and democratic manner. While poised for rapid expansion, research on the global governance of AI remains in its early stages (but see Maas 2021; Schmitt 2021). This article complements earlier calls for research on AI governance in general (Dafoe 2018; Butcher and Beridze 2019; Taeihagh 2021) by focusing specifically on the need for systematic research into the global governance of AI. It submits that global efforts to regulate …

Political Studies Review, 2023
Forthcoming in Political Studies Review. Many debates in political philosophy over the last decade have focused intensively on methodological issues, such as the debate on ideal theory vs. non-ideal theory, political moralism vs. political realism, and practice-independence vs. practice-dependence. Recently, Jonathan Floyd has brought up methodological aspects related to theories 'grounded in thoughts' vs. theories 'grounded in behaviour'. He argues that so-called 'normative behaviourism' offers a better methodology than mainstream so-called 'mentalism'. In Floyd's view, normative behaviourism is a "new way of doing political philosophy" (Floyd 2017: 181).[1] Our claim in this paper is that normative behaviourism does not offer an alternative methodology in political theory in the sense envisioned by Floyd. First, we show that normative behaviourism is as dependent on 'normative thoughts' as mainstream political theory and is therefore also 'mentalist'. Second, we illustrate the structural similarities between normative behaviourism and mainstream political theory from a methodological standpoint by comparing the former to an influential normative theory, namely, utilitarianism.

Journal of Philosophical Research, 2023
In broad strokes, political realists reject what has come to be called the 'ethics first' view or 'political moralism' in political philosophy (Williams 2005; Geuss 2008), which typically entails a rejection of the idea of deriving political recommendations from prepolitical ideals of, say, justice, happiness, or equality (the 'enactment model' in Williams' terminology), and the idea of identifying "the limits of permissible political conduct through pre-political moral commitments" (the 'structural model' in Williams' terminology) (Rossi and Sleat 2014: 689; Williams 2005). Instead of beginning with a moral ideal of some kind, realists insist that political theory should begin with an understanding of the practice of politics itself (Rossi and Sleat 2014: 690). In recent years, some realists have started to analyze this in terms of the distinctiveness of political norms vis-à-vis moral norms (Rossi and Sleat 2014; Jubb and Rossi 2015a, 2015b; Erman and Möller 2015a, 2015b), which in turn has led to an interesting discussion on the sources of normativity in political theory. In particular, some realists have argued that there is a 'distinctively political normativity' which should be used when construing and justifying political theories. It is assumed that acknowledging such a distinctively political normativity has severe consequences for both how to do political theory and for which principles and values are justified (Jubb 2019). Among realists focusing on a distinctively political normativity, one can identify two approaches (Erman and Möller 2022). On the 'moral view', it is explicitly acknowledged that moral norms might have a role to play for political normativity (Jubb 2019; Sleat 2021). On the 'non-moral view', inspired by Raymond Geuss, distinctively political normativity is understood in terms of a non-moral kind of …

Political Studies, 2024
The study of the social and ethical impact of artificial intelligence (AI) is still in its infancy, and contributions to the field endeavor to keep up with the continuous developments of the booming AI industry. It is hence not surprising that key concepts like 'AI ethics' and 'AI governance' are still rather vague and undertheorized. We suggest that the former can be defined as the field of applied ethics that is concerned with the ethical questions that arise in light of actual and conceivable AI systems. The study of 'AI governance', then, could be seen as a subdomain of 'AI ethics', guided by the assumption that we can collectively influence the development of AI. In the literature, 'AI governance' often simply refers to the mechanisms and structures needed to avoid 'bad' outcomes and achieve 'good' outcomes with regard to the problems and issues already identified and formulated within AI ethics. In this paper, we argue that although this outcome-focused view captures one important aspect of what 'good AI governance' requires, its emphasis on the effects of governance mechanisms runs the risk of overlooking important procedural aspects of good AI governance. One of the most important properties of good governance is political legitimacy, and we will argue that, for AI governance to be politically legitimate, it matters not only what it achieves but also how it is structured. Under the assumption that such governance must be global in scope, this paper has a twofold aim: (a) to develop a theoretical framework for theorizing the political legitimacy of global AI governance and (b) to demonstrate how it can be used as a compass for critically assessing the (lack of) legitimacy of actual instances of AI governance. Rather than defending a substantive first-order theory of global political legitimacy, our ambition is to spell out and defend some basic normative conditions that any satisfactory account of the political legitimacy of AI governance must respect.[1] Our basic presumption is that, whatever else global political legitimacy requires, it must at least be minimally democratic. The main aims of the paper are pursued by asking what this entails, in light of a distinction between 'governance by AI' and 'governance of AI' and in relation to different kinds of authority and different kinds of decision-making, either employing AI decision-making or applying decision-making to AI development and deployment.

Drawing on insights from political theorizing around global governance more generally, we argue that AI governance must take procedural aspects, as well as outcomes, into account. Insofar as we accept that political legitimacy at least must be minimally democratic, an account of the legitimacy of global AI governance must respect that the governance of AI and governance by AI have a specific normative relationship and raise different normative demands. This suggests, among other things, that political legitimacy would be reduced if we decided to outsource certain kinds of decision-making to AI systems, and that many of the initiatives to govern AI globally currently coming out of private, non-state actors lack political legitimacy. The structure of the paper is straightforward. In the first section, we give a brief overview of how the concept of 'governance' is applied in the literature on AI governance, to illustrate the predominant outcome-focused view and the worries that it raises (I). Thereafter, we focus on the first part of our twofold aim, developing the theoretical framework constituted by some basic normative boundary conditions shaped in light of the distinction between 'governance by AI' and 'governance of AI' (II). The third section focuses on the second part of the twofold aim, applying this theoretical framework to current AI governance to illuminate how it can be used as a critical compass for assessing the (lack of) legitimacy of actual instances of AI governance (III). The final section concludes and addresses the ways in which the proposed approach may respond to the worries raised by the outcome-focused view of AI governance (IV).

The concept of 'governance' in discussions around AI is currently a term much too broad for its own good. It is used to refer to everything from the plethora of 'ethics guidelines' for AI development written by state and non-state actors (Jobin et al. 2019), to the presence of human oversight in automated processes (AIHLEG: 16), and hypothetical international laws for preventing undesirable 'race dynamics' among superpowers developing AI (Dafoe 2018: 43-47). This imprecision is not surprising. First, there are a number of ways of defining artificial intelligence, and the scope of what counts as AI governance hence depends on how AI is delineated, to begin with.[2] Second, both AI and its regulation are rapidly changing phenomena, and …

[1] Indeed, it is impossible to fully separate the 'metanormative' and 'metatheoretical' level, focusing on boundary conditions, from the substantial level, focusing on first-order theory. Still, normative boundary conditions should not be understood as void of normativity, since they are premised on certain conceptual and normative assumptions about democracy and global political legitimacy. However, they are too thin to constitute a substantial normative theory in themselves. Instead, they are better seen as normative considerations that a reasonable account of the global political legitimacy of AI governance should respect (given certain assumptions). What it would mean to respect these conditions will depend on the specific substantial theory of legitimate AI governance that is defended, and what it aims to achieve.

[2] The general-level analysis we offer here is not tied to a specific definition of AI or the details of the current state-of-the-art. Given that we cannot know exactly what kinds of AI systems will be widely diffused and have widespread effects on society, we believe it is sufficient and indeed wiser to rely on the common view that machines are intelligent when they can perform a task that would require intelligence if done by humans (McCarthy et al. 1955, cf. Russell and Norvig 2020). This admittedly vague definition captures cases when AI is used to assist or automate decision-making, but also many other kinds of applications.

Philosophy & Social Criticism, 2022
In the last couple of years, increased attention has been directed at the question of whether there is such a thing as a distinctively political normativity. With few exceptions, this question has so far only been explored by political realists. However, the discussion about a distinctively political normativity raises methodological and meta-theoretical questions of general importance for political theory. Although the terminology varies, it is a widespread practice within political theory to rely on a normative source which is said to be political rather than moral, or at least foremost political. In light of this concern, the present paper moves beyond political realism in the attempt to explore alternative ways of understanding distinctively political normativity, in a way which may be useful for political theorists. More specifically, we investigate two candidate views, here labelled the 'domain view' and the 'role view', respectively. The former traces distinctness to the 'domain', i.e., to the circumstances of politics. This view has gained a lot of support in the literature in recent years. The latter traces distinctness to 'role', i.e., the role-specific demands that normative-political principles make. Our twofold claim in this paper is that the domain view is problematic but that the role view is promising.
Ethics & International Affairs, 2022

Philosophy Compass, 2022
The literature of recent years on distinctively political normativity raises methodological and meta-theoretical concerns of importance for political theory. The aim of this article is to identify and critically examine the main positions in this debate as well as to analyze problems and promising ways forward. In brief, we argue that the predominant "non-moral view" of distinctively political normativity (i.e., the view that political normativity is independent of moral normativity) is problematic in all three of its versions. Further, we suggest that a reasonable approach to political normativity should adopt a "moral view" (i.e., the view that political normativity is not independent of moral normativity) and investigate two such approaches: the so-called "filter approach" and the "role approach." Although both are still much in need of further development in political theory, they bear promise as accounts which preserve the distinctness of the political domain while acknowledging its status as a moral kind.

Moral Philosophy & Politics, 2022
The creation of increasingly complex Artificial Intelligence (AI) systems raises urgent questions about their ethical and social impact on society. Since this impact ultimately depends on political decisions about normative issues, political philosophers can make valuable contributions by addressing such questions. Currently, AI development and application are to a large extent regulated through non-binding ethics guidelines penned by transnational entities. Assuming that the global governance of AI should be at least minimally democratic and fair, this paper sets out three desiderata that an account should satisfy when theorizing about what this means. We argue, first, that the analysis of democratic values, political entities, and decision-making should be done in a holistic way; second, that fairness is not only about how AI systems treat individuals, but also about how the benefits and burdens of transformative AI are distributed; and finally, that justice requires that governance mechanisms are not limited to AI technology, but are incorporated into a range of basic institutions. Thus, rather than offering a substantial theory of democratic and fair AI governance, our contribution is metatheoretical: we propose a theoretical framework which sets up certain normative boundary conditions for a satisfactory account.
Contemporary Political Theory, 2021

Ethical Theory & Moral Practice, 2021
Political realists' rejection of the so-called 'ethics first' approach of political moralists (mainstream liberals) has raised concerns about their own source of normativity. Some realists have responded to such concerns by theorizing a distinctively political normativity. According to this view, politics is seen as an autonomous, independent domain with its own evaluative standards. Therefore, it is in this source, rather than in some moral values 'outside' of this domain, that normative justification should be sought when theorizing justice, democracy, political legitimacy, and the like. For realists the question about a distinctively political normativity is important, because they take the fact that politics is a distinct affair to have severe consequences both for how to approach the subject matter as such and for which principles and values can be justified. Still, realists have had a hard time clarifying what this distinctively political normativity consists of and why, more precisely, it matters. The aim of this paper is to take some further steps in answering these questions. We argue that realists have the choice of committing themselves to one of two coherent notions of distinctively political normativity: one that is independent of moral values, where political normativity is taken to be a kind of instrumental normativity; another where the distinctness still retains a justificatory dependence on moral values. We argue that the former notion is unattractive since the costs of commitment will be too high (first claim), and that the latter notion is sound but redundant since no moralist would ever reject it (second claim). Furthermore, we end the paper by discussing what we see as the most fruitful way of approaching political and moral normativity in political theory.

The Journal of Politics, 2022
The debate between ideal and non-ideal theory in political philosophy has been going on for quite some time now, dealing with questions about the proper role and use of normative political theories and the relevance of action-guidance. The terms 'ideal' and 'non-ideal' have been used in a variety of ways, which have corresponded to different questions in political theory. Theories labelled 'ideal' have taken the form of, for example, end-state theory, full compliance theory and utopian theory, whereas theories labelled 'non-ideal' have taken the form of transitional theory, partial compliance theory and realist theory (Valentini 2012). At least since Amartya Sen's influential article from 2006, and presumably long before that, it is widely agreed that if the aim of theorizing is to make the world more just, it is not necessary to have a theory of perfect justice at our disposal.
British Journal of Political Science, 2020