This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does not depend on when it happens. That pushes us toward strong longtermism. We then address a few potential concerns, the first of which is that it is impossible to have any sufficiently predictable influence on the course of the long-run future. We argue that this is not true. Some actions can reasonably be expected to improve humanity’s long-term prospects. These include reducing the risk of human extinction, preventing climate change, guiding the development of artificial intelligence, and investing funds for later...
Ethics in Progress, 2023
Moral circle expansion has been occurring faster than ever before in the last forty years, with moral consideration fully extended to all humans regardless of their ethnicity and geographical location, as well as to animals, plants, ecosystems and even artificial intelligence. This process has made even more headway in recent years with the establishment of moral obligations towards future generations. Responsible for this development is the moral theory, and its associated movement, of longtermism, the bible of which is What We Owe the Future (London: Oneworld, 2022) by William MacAskill, whose earlier book Doing Good Better (London: Guardian Faber, 2015) laid the cornerstone of the effective altruist movement of which longtermism forms a part. With its novelty comes great excitement, but longtermism and the arguments on its behalf are not yet well thought out: they suffer from various problems and entail uncomfortable positions in population axiology and the philosophy of history. This essay advances a number of novel criticisms of longtermism; its aim is to identify further avenues for research required by longtermists, and to establish a standard for the future development of the movement if it is ever to be widely considered sound. Some of the issues raised here concern the arguments for the moral value of the future; the quantification of that value with the longtermist ethical calculus, i.e. the conjunction of expected value theory with the 'significance, persistence, contingency' (SPC) framework; the moral value of making happy people; and our ability to affect the future and the fragility of history. Perhaps the most significant finding of this study is that longtermism currently constitutes a short-term view on the long-term future, and that a properly long-term view reduces to absurdity.
Longtermism holds that what we ought to do is mainly determined by effects on the far future. A natural objection is that these effects may be nearly impossible to predict -- perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection. To that end, I develop two simple models for comparing "longtermist" and "short-termist" interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don't mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these "Pascalian" probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
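The trade-off this abstract describes can be made concrete with a small numerical sketch. The following is an illustrative toy model, not the paper's actual formalism: the function, its parameter names, and every number in it (payoffs, influence probabilities, decay rates) are invented for demonstration.

```python
# Illustrative toy model (all numbers invented for demonstration): compare
# the expected value of a "short-termist" and a "longtermist" intervention
# when our probability of influencing the world decays with temporal distance.

def expected_value(payoff_per_period, influence_prob, horizon, decay):
    """Sum payoff * probability-of-influence over `horizon` future periods,
    with the probability of influence shrinking by `decay` each period."""
    total = 0.0
    p = influence_prob
    for _ in range(horizon):
        total += payoff_per_period * p
        p *= decay
    return total

# Short-termist option: modest, highly predictable near-term benefits.
short_term = expected_value(payoff_per_period=100.0, influence_prob=0.9,
                            horizon=10, decay=0.95)

# Longtermist option: a tiny probability of steering the far future, but
# astronomically many potential beneficiaries in each future period.
long_term = expected_value(payoff_per_period=1e9, influence_prob=1e-6,
                           horizon=10_000, decay=0.999)

print(f"short-termist EV: {short_term:.1f}")
print(f"longtermist EV:   {long_term:.1f}")
```

On these invented numbers the longtermist option dominates in expectation even though its per-period probability of influence is minuscule; whether such a verdict is "Pascalian" depends on how the decay of predictability compares with the growth of future payoffs, which is exactly the empirical question the paper's models probe.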
Philosophy in the Contemporary World, 2010
According to many scientists and futurists, technological advancements may soon make it possible to significantly extend average human life expectancy. This is often called "superlongevity." I discuss two arguments against superlongevity: first, a utilitarian argument from Peter Singer, and then an argument of my own. Although neither argument is decisive, I conclude that there are serious concerns about whether superlongevity would be a good idea, concerns we need to reflect on as we consider the possibility.
Let strong longtermism be the thesis that, in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best. If this thesis is correct, it suggests that for decision purposes we can often simply ignore shorter-run effects: the primary determinant of how good an option is (ex ante) is how good its effects on the very long run are. This paper sets out an argument for strong longtermism. We argue that the case for this thesis is quite robust to plausible variations in various normative assumptions, including those relating to population ethics, interpersonal aggregation and decision theory. We also suggest that while strong longtermism as defined above is a purely axiological thesis, a corresponding deontic thesis plausibly follows, even by non-consequentialist lights.
What moral reasons, if any, do we have to ensure the long-term survival of humanity? This article contrastively explores two answers to this question: according to the first, we should ensure the survival of humanity because we have reason to maximize the number of happy lives that are ever lived, all else equal. According to the second, seeking to sustain humanity into the future is the appropriate response to the final value of humanity itself. Along the way, the article discusses various issues in population axiology, particularly the so-called Intuition of Neutrality and John Broome's "greediness objection" to this intuition.
Journal of Medical Ethics, 2024
Philosophers, psychologists, politicians, and even some tech billionaires have sounded the alarm about artificial intelligence (AI) and the dangers it may pose to the long-term future of humanity. Some believe it poses an existential risk (X-Risk) to our species, potentially causing our extinction or bringing about the collapse of human civilization as we know it. The above quote from philosopher Will MacAskill captures the key tenets of "longtermism," an ethical standpoint that places the onus on current generations to prevent AI-related and other X-Risks for the sake of people living in the future. Developing from an adjacent social movement commonly associated with utilitarian philosophy, "effective altruism," longtermism has amassed a following of its own. Its supporters argue that preventing X-Risks is at least as morally significant as addressing current challenges like global poverty. However, critics are concerned that such a distant-future focus will sideline efforts to tackle the many pressing moral issues facing humanity now. Indeed, according to "strong" longtermism, future needs arguably should take precedence over present ones. In essence, the claim is that there is greater expected utility in allocating available resources to prevent human extinction in the future than in focusing on present lives, since doing so stands to benefit the incalculably large number of people in later generations, who will far outnumber existing populations. Taken to the extreme, this view suggests it would be morally permissible, or even required, to actively neglect, harm, or destroy large swathes of humanity as it exists today if doing so would benefit or enable the existence of a sufficiently large number of future (that is, hypothetical or potential) people, a conclusion that strikes many critics as dangerous and absurd.
Sufficiently large catastrophes can affect human civilization into the far future: thousands, millions, or billions of years from now, or even longer. The far future argument says that people should confront catastrophic threats to humanity in order to improve the far future trajectory of human civilization. However, many people are not motivated to help the far future. They are concerned only with the near future, or only with themselves and their communities. This paper assesses the extent to which practical actions to confront catastrophic threats require support for the far future argument and proposes two alternative means of motivating actions. First, many catastrophes could occur in the near future; actions to confront them have near-future benefits. Second, many actions have co-benefits unrelated to catastrophes, and can be mainstreamed into established activities. Most actions, covering most of the total threat, can be motivated with one or both of these alternatives. However, some catastrophe-confronting actions can only be justified with reference to the far future. Attention to the far future can also sometimes inspire additional action. Confronting catastrophic threats best succeeds when it considers the specific practical actions to confront the threats and the various motivations people may have to take these actions.
2024
The increasingly influential longtermism philosophy advocates prioritizing resource allocations to relatively larger future populations to maximize utilitarian gains. Yet, emerging research seeking to promote long-term intergenerational concern has overlooked a key impediment which likely constrains intergenerational attitudes: human moral psychology. We investigate moral judgments and social consequences of altruistic tradeoffs in generational closeness for gains in welfare, finding that the ethical principles of longtermism are misaligned with folk moral intuitions. Four US Prolific samples (N=2,306) reveal that altruists who prioritize future welfare over generational closeness are perceived as less moral and face social repercussions in both attitudes and behavior. Nonetheless, strong intergenerational concern can mitigate these negative judgments. As society grapples with near-term and long-term challenges, these insights help reconcile prevailing debates in psychological, philosophical, and public discourse, emphasizing key features decision-makers and policymakers should consider when weighing the immediate and future needs of society.
The ethical challenges posed by advances in the understanding of ageing are the subject of very welcome controversy across political, social, economic and environmental dimensions. But the challenges are not just a matter of how our societies would manage the material consequences of such dramatic changes. They also challenge our conception of the shape of our lives and of what it means for our lives to flourish. How do we reconcile a radical conceptual change in our thinking about ageing (one that many would have us enthusiastically embrace) with what may be a deeper truth: that it is not merely our bodies, but we who age? And that perhaps, to be particular human beings whose lives have value (worth engaging in scientific research to cherish and preserve), we need to age? Moreover, like many ethical controversies produced by scientific and technological change, these issues are liable to produce a profoundly uncomfortable sense of moral vertigo. Most of our ordinary ethical decisions take place on relatively stable ground, where moral concepts have relatively firm connections with facts of nature and of the human condition. Profound scientific changes of the kind associated with anti-ageing research, precisely because they challenge our view of what the facts are, and promise or threaten to revise the human condition, shift the entire pattern of connections between moral and everyday concepts. This poses a problem not just for what we are to think but for the means by which we think about these matters. Default ethical thinking about anti-ageing is most often couched in Utilitarian terms. Utilitarian moral thinking, which requires us to try to figure out the likely net effect of our decisions on total human welfare, is an essential and useful ingredient in ordinary public policy making. It is a moral methodology perhaps especially appealing to scientists.
Given an ethical problem, can't we weigh scenarios, assign probabilities and calculate the optimal ethical result? I think we need to be reminded that Utilitarian approaches are unlikely to help us face these challenges, and not only or primarily because the calculations are hard. Why? First of all, the debates are subject to what we might call the problem of perspective instability. Rational but rival estimations of the consequences of anti-ageing begin from different and incommensurable domains: there are the perspectives of the individual and their loved ones; that of society, the economy and culture; and, not uniquely (compare climate change) but still profoundly, that of the species as a whole, both now and into the far future. It is hopeless to pretend, with regard to such profound questions about the human condition, that we have any already available principled way of managing these perspectives which needs merely to be applied smartly and in the light of acquirable evidence. Our moral concepts simply do not form so well-ordered a system, and were not built to handle questions which put the very notion of the human into contention. (A sad truth is that some highly developed societies cannot even manage these different perspectives well enough to agree about the economics of health care under present conditions.) Thus, in the case of anti-ageing, the fact that an individual, if asked, may always want to live another day does not trump the issue of any harm done to humanity as a species by enabling that preference of all current members of humanity to be indefinitely fulfilled. Equally, questions of justice, for example issues of equality of access to the technology or of intergenerational equity, do not consistently override individual choice.
In this paper, I will be primarily concerned with moral issues regarding future people and the environment. When it comes to the future, we have deontological and epistemic limitations: the closer to the present, the greater the certainty and knowledge we have about facts. Thus, when we seek moral clarity regarding a future scenario, we deal with an inverse relation between certainty and time (the further into the future, the greater the uncertainty). The main problem is that most ways of dealing with moral issues about future scenarios do not address this relation, and instead focus on things that seem to simplify and clarify the uncertainties of the future. In response, I propose a different approach, one that operates neutrally and timelessly, dealing with the uncertainties of the future while providing moral groundings that can help to clarify the future's state of moral vagueness.
Keywords: future persons, environmental ethics, moral obligations, the problem of personal identity, moral vagueness. MSc in Philosophy, The University of Edinburgh. This article is based on my MSc dissertation, funded by CONICYT as part of their BECAS CHILE programme, folio 73170156.