Longtermism and Rational Mistakes

In What We Owe The Future, William MacAskill presents the case for longtermism, the idea that we should place much more moral weight than we do on the very long-term future. It’s an intriguing book, well and clearly written, with interesting things to say about issues as diverse as AI, the value of having children, the gradual decline in happiness inequality, and charter cities. It’s also a consistently even-tempered and fair-minded read. MacAskill is a consequentialist, a believer in the ultimate value of well-being, and a studious analyst of risk and expected value. Yet he’s never dogmatic about any of these things. He’s willing to stand by seemingly outré conclusions generated by those beliefs (like we’re doing wild animals a favor by wiping them out), but he never makes you feel like he wouldn’t listen to a good counter-argument.

What We Owe the Future starts with the moral case for longtermism along with the possibility that what we do can make a real difference for the long term. The second part of the book addresses the benefits and possibility of changing future values as well as the danger of what MacAskill calls lock-in – a time when all values might become permanently locked in. The third section of the book deals with potential sources of huge future harm including biggies that threaten extinction and (much less bad) civilizational collapse. Finally, the book ends with a dive into Parfitian arguments on population ethics, an optimistic vision of the future, and a plea for real personal involvement.

It’s also important to note that WWOTF isn’t aimed at professional philosophers. The thinking is clear and rigorous and borrows from seminal modern philosophers (especially Derek Parfit). But the justification for longtermism fits easily inside a single chapter and isn’t intended to refute every possible counter-argument.

That being said, the argument for longtermism flows quite naturally out of the consequentialist/utilitarian tradition and the population ethics that Parfit pioneered in Reasons and Persons. If you accept the essential utilitarian idea that our best action is the one that maximizes the happiness (or well-being, to pick a more popular modern formulation) of others, then longtermism follows from a few simple steps. Almost all utilitarians agree that while there may be practical reasons why we are more concerned with people we know (partiality) and people we interact with, in principle we have a (moral) reason to be concerned with the well-being of everyone equally, regardless of our social proximity. Naturally, people debate the meaning of everyone. Original versions of utilitarian thinking were human-centric, but most modern utilitarians tend to extend consideration to a much broader range of the animal kingdom. That extension can include any sentient being, and it might argue for equal consideration or for a weighted consideration. MacAskill argues for weighting consideration by species population multiplied by neuronal count, a thought-provoking approach that generates surprisingly intuitive results.
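To make that weighting scheme concrete, here’s a minimal sketch of the kind of calculation the idea suggests. The formula (species population multiplied by neurons per individual) follows the description above; the population and neuron figures are rough, order-of-magnitude estimates I’ve supplied for illustration, not numbers taken from the book.

```python
# Rough sketch of neuron-weighted moral consideration.
# Population and neuron figures are order-of-magnitude estimates, not MacAskill's.

species = {
    # name: (approximate living population, approximate neurons per individual)
    "humans":   (8e9,  8.6e10),
    "chickens": (3e10, 2e8),
}

# Consideration for a species scales with population times neurons per individual.
weights = {name: pop * neurons for name, (pop, neurons) in species.items()}
total = sum(weights.values())

for name, weight in weights.items():
    print(f"{name}: {weight:.2e} ({weight / total:.1%} of total consideration)")
```

On these assumed figures, humans dominate the calculation even though chickens outnumber them several times over, which is roughly the kind of intuitive result the weighting is meant to deliver.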

In this moral calculus, distance doesn’t matter nor does personal acquaintance, national affiliation or any other type of direct interest. That being the case, it’s not hard to see why the people who will follow us in time should matter as well. As MacAskill puts it, “Distance in time is like distance in space. People matter even if they live thousands of miles away. Likewise, they matter even if they live thousands of years hence.”

MacAskill argues that longtermism is a natural way to think about decisions. “The idea that future people count is common sense. Future people, after all, are people. They will have hopes and joys and pains and regrets, just like the rest of us. They just don’t exist yet…suppose that, while hiking, I drop a glass bottle on the trail and it shatters. And suppose that if I don’t clean it up, later a child will cut herself badly on the shards. In deciding whether to clean it up, does it matter when the child will cut herself? Should I care whether it’s a week, or a decade, or a century from now? No. Harm is harm, whenever it occurs.”

MacAskill believes that extending our moral concern to the far future will not only give us a better chance to create a good future, it will make us better people: “By abandoning the tyranny of the present over the future, we can act as trustees—helping to create a flourishing world for generations to come.”

He isn’t dogmatic about the consequentialist calculus. He’s willing to admit that “Special relationships and reciprocity are important. But they do not change the upshot of my argument. I’m not claiming that the interests of the present and the future should always and everywhere be given equal weight. I’m just claiming that future people matter significantly.”

Of course, as with many utilitarian calculations, one can afford to discount future people heavily precisely because there are so many of them. Even if an individual future person matters very little, there may well be so many future people that whatever tiny interest you have in each one eventually swamps every other moral consideration. There’s little doubt that unless we drive ourselves extinct in the very near future, humanity will have vastly more future lives than exist at present. We can also expect that humanity is quite likely to have many more years ahead of it than behind it. MacAskill describes humanity as akin to a teenager: “most of our life is ahead of us, and decisions that impact the rest of that life are of colossal importance.”

There’s a lot of interesting material in WWOTF, and not all of it hinges on longtermism. Issues like AI, pandemics, and climate change aren’t so much long-term issues as issues of the here and now. But while there are real lessons to be learned from MacAskill about how to think productively about the risks these threats present, the heart of the book’s argument rests on the idea that we have a moral duty to the far future and that we will live our best lives by trying to meet that duty. If the moral/rational argument for longtermism doesn’t hold, what’s left is little more than an exploration of the economics of the highly uncertain.

If you find the basic utilitarian position compelling, you’re likely to find the arguments for longtermism compelling as well. Just as it makes little sense to draw utilitarian boundaries at the national level or the species level, it’s hard to see why the current moment should be any more favored. There might still be bones to pick over the nature of well-being and the best interpretation of population ethics, but you’re likely committed to some form of longtermism.

On the other hand, there are strong reasons to doubt the utilitarian logic of decision-making in any context. A central theme of this site (The Work to be Rational) is the impact of transformative experience on decision theory AND on ethics more broadly. For detailed arguments about why the utilitarian calculus breaks down in the face of transformative experience, read this. For an explanation of why the problem is (surprisingly) even more acute for public versions of utilitarianism, read this.

The short argument is that utilitarian decision-making relies on our ability to generate expected values for future outcomes. That’s just standard doctrine, and MacAskill says as much repeatedly. “In Chapter 2, I introduced the idea that expected value is the right way of evaluating options in the face of uncertainty…In this context, what we need is an account of decision-making when there’s uncertainty about what’s of value….In the case of population ethics, what we should do is figure out what degree of belief we ought to have in each of the different views of population ethics…”
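As a minimal sketch of that expected-value idea (my gloss, not the book’s worked example): assign a credence to each population-ethics view, score each option under each view, and take the credence-weighted sum. The views, credences, and scores below are hypothetical placeholders.

```python
# Hedged sketch: expected value under moral uncertainty.
# Credences and scores are hypothetical, purely for illustration.

credences = {
    "total view":     0.5,   # degree of belief in each population-ethics view
    "average view":   0.3,
    "critical level": 0.2,
}

# How each view scores two candidate policies (made-up numbers).
scores = {
    "expand population": {"total view": 10, "average view": -2, "critical level": 4},
    "status quo":        {"total view":  3, "average view":  5, "critical level": 3},
}

def expected_value(option):
    # Credence-weighted average of the option's value across moral views.
    return sum(credences[view] * scores[option][view] for view in credences)

for option in scores:
    print(option, round(expected_value(option), 2))
```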

The problem is that when an individual faces a transformative experience (an experience that will change how they think and what they value), there is no way to derive expected values. We don’t know what we would feel, and we don’t know what would constitute well-being. And if transformative experience makes it impossible for an individual to assign expected values on a personal level, the problem is hugely magnified when we look across the gulfs of time envisioned by the longtermist.

The case that transformative experience makes standard preference optimization decisions impossible is quite intuitive when it comes to big life decisions. It’s even more intuitive when it comes to longtermism. Is it even remotely reasonable to believe that someone living 15,000 years ago could make decisions about our well-being? The road from agriculture to engineering to science is one that must be traveled to be understood. There is no abstract formula, no ethical framework, and no human imagination that can make the leap.

Suppose you could message back even a paltry eight hundred years to St. Thomas and tell him that people in our time live far longer, can fly through the air, can talk to each other across the globe, can heal most diseases and can relieve most pains. He might be impressed. Then tell him that no one takes religion seriously, that we view Christianity as a kind of superstition, that people lack any real ethical system and measure their lives almost exclusively by wealth and fame, that we don’t think gender is real, and that we value homosexual relationships equally to heterosexual ones. He would likely be certain that we have made a terrible, Faustian bargain, trading our souls for the worst kind of narcissistic baubles and delusions. He would, I am sure, be committed to doing whatever it takes to prevent our society from coming about. Indeed, he would probably be committed to preventing even the kind of value change that MacAskill argues we must preserve to prevent value lock-in. And yet, I’m pretty sure St. Thomas was a very good man, and so, too, I imagine, is William MacAskill, who thinks our values a considerable improvement on those of medieval times.

There might be some among us who agree with Aquinas, and while we, looking back, might find things to borrow from Aquinas, he cannot do the same. He cannot, in his imagination, travel the road we’ve traveled. He cannot experience the scientific revolution, the wars, the history, and the arguments that we have experienced. Transformative experiences are just that. They must be experienced because they change us, and this is as true for the collective us that comprises a culture as it is for the individual living a life.

It is no different for us looking into the future. It is absurd to think that we can walk their journey in our imagination. To ask us to plan for the far future is to ask the impossible. There is good reason to be skeptical of everything about longtermism that focuses on value change. Still, if we cannot reasonably predict what values our future selves will have, surely we can at least be concerned with having some form of future selves.

Unfortunately, longtermism creates strange problems for itself – problems that MacAskill sometimes acknowledges but doesn’t really resolve. Because the real long term is so darn long, even events like human extinction might be just a blip. For the true believer in abstract utilitarian thinking, it doesn’t matter whether the sentient beings experiencing well-being are human or whatever creature next evolves enough neurons to dominate the utility calculations.

“For human extinction to be of great longterm importance, it needs to be highly persistent, significant, and contingent. Its persistence might seem obvious. If we go extinct, we can’t come back from that.”

Except, of course, some other species might evolve and given the long term, what difference would it really make? So MacAskill has to argue that re-evolution might not occur. “We therefore cannot be confident that, if human civilization were to end, some other technologically capable species would eventually take our place. And even if you think that there is a 90% chance that this would happen, that would only reduce the risk that a major catastrophe would bring about the permanent end of civilization by a factor of ten: the risk would still be more than great enough that reducing it should be a pressing moral priority.”
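To make the arithmetic explicit: even a 90 percent chance that another technologically capable species takes our place leaves a 10 percent chance that the catastrophe is permanent, so the risk shrinks by only a factor of ten. A toy calculation, with an assumed catastrophe probability purely for illustration:

```python
# Toy version of the factor-of-ten point. The catastrophe probability below
# is an assumed placeholder, not a figure from the book.
p_catastrophe = 0.01      # assumed chance of an extinction-level catastrophe
p_reevolution = 0.90      # chance some other species eventually rebuilds civilization

# Chance the catastrophe permanently ends civilization:
p_permanent_end = p_catastrophe * (1 - p_reevolution)
print(p_permanent_end)    # 0.001, one tenth of the original 0.01
```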

If ever there was a compelling reason to think longtermism is absurd, this would be it. Any theory that requires us to delve into probabilities and evolutionary science to convince us that our total extinction is a bad thing needs to be re-thought.

Nor does it seem that MacAskill’s argument really requires a technological civilization to re-emerge. Since any creature with neurons counts, if a single neuronal cell evolved the capacity for survival and reproduction and then proceeded to fill the earth’s oceans with sextillions of its kind, each living a decent (net positive) existence, this would be just as good as having a few billion humans running around annoying everybody.

This kind of thinking is the apotheosis of a certain kind of moral system. The push for ever-increasing abstraction in moral theories seems to know no limits. Like the battle for purity in a revolution, there is always a new school ready to move the argument one step further down the line. Utilitarianism has been extended from the national to the global, from the human to the somewhat intelligent, and from the somewhat intelligent to the possessor of neurons. Longtermism merely extends it from the actually existent to the potentially existent. Surely all those possible neurons must have moral status too.

It is with such abstractions that we enter the arcana of population ethics. In many respects, population ethics is, as MacAskill writes, “crucial for longtermism”. The essential point of population ethics is thinking about whether there is a moral imperative to increase the number and quality (since they are multiplied together) of future lives. Parfit, along with most philosophers thinking about population ethics (MacAskill included), thinks there is. One might think that from a standard utilitarian view, the goal would be to maximize the average happiness of people in a future society. Unfortunately, that isn’t quite right. Given any reasonably large society with a certain average happiness, it would always be possible to add people who are slightly less happy than average but whose happiness is still net positive. Each addition lowers the average but raises the total, so if total well-being is what counts, you could keep increasing the good in the world by adding more and more people with slightly less happiness. The process only stops when a new person’s life would be so poor as to contribute negative well-being. That this massive population of barely happy people is better than a much smaller population of quite happy people is what Parfit called the “repugnant conclusion.” And although Parfit himself found it repugnant, it’s a very difficult conclusion to avoid on most straightforward interpretations of utility theory.
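The arithmetic behind that conclusion is easy to reproduce. The sketch below starts with a small, very happy society and adds a vast number of barely-happy people: the total rises while the average collapses. The numbers are arbitrary and serve only to illustrate the logic.

```python
# Illustration of the repugnant conclusion's arithmetic (arbitrary numbers).

# A small society of very happy people.
n, average = 1_000_000, 90.0
total = n * average

# Add a vast number of lives that are barely net positive.
newcomer_wellbeing = 1.0
n_added = 10_000_000_000

new_total = total + n_added * newcomer_wellbeing
new_average = new_total / (n + n_added)

print(f"before: total {total:,.0f}, average {average:.2f}")
print(f"after:  total {new_total:,.0f}, average {new_average:.4f}")
# The huge, barely-happy world wins on total well-being, which is Parfit's point.
```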

There are ways of avoiding the repugnant conclusion (MacAskill discusses Critical Level principles fairly extensively), but none are strongly convincing and MacAskill himself leans toward the repugnant conclusion and the view that our goal is to maximize the total well-being of the future population (number * well-being).

MacAskill puts it this way: “provided a person had a sufficiently good life, the world would be a better place in virtue of that person being born and living that life.” And though “sufficiently good” is open to interpretation, MacAskill thinks it probably boils down to net positive.

MacAskill isn’t dogmatic about this position or even about the idea that wellbeing is all that matters. “You may think that wellbeing is all that matters, morally. This is the view that…I find most plausible: other things can be valuable or disvaluable instrumentally, but only insofar as they ultimately impact the wellbeing of sentient creatures. But philosophers are split on the issue…given the difficulty and the need to acknowledge moral uncertainty, we should consider the trend in non-wellbeing goods…”

And though non-wellbeing goods don’t really lend themselves to measurement or classification, MacAskill is willing to throw them into the probability mix along with everything else. This kind of argument can seem a bit bewildering. Anyone reading Parfit for the first time is going to be dazzled (he really is brilliant) and perplexed. Can this really be true?

In fact, though, there is no reason to think population ethics is in any sense meaningful. There is, first of all, no rational standpoint from which it is possible to judge the wellbeing of a life – even from an internal perspective. Given the fact of transformation, we would have to ask which version of a person gets to judge. Is it the person in the second before they die? Is it the average view of the person throughout their life? If a person starts out thinking their life is worthless and then decides it’s worthwhile, should we classify their net well-being as negative? In a world where how people assess their own wellbeing and what they value can change with experience (and that’s the world we live in), the question isn’t how we can maximize wellbeing; it’s what we think people should value. The fact of transformation makes the idea of assessing a life’s wellbeing nonsensical.

And surely the same argument applies to a whole society.

Consider this sort of modern parable. We have a large society on the brink of environmental collapse. In version A of that society, there is a global climate catastrophe. Billions die. The shock of that catastrophe fundamentally changes the world view and thinking of every surviving human. They gradually rebuild their society but in a smaller, environmentally stable, eco-utopian form that could only be developed or sustained (given human nature) in the wake of a terrible calamity. In version B of that society, people avoided climate collapse by retreating from the material into the digital world. People reduced their environmental footprint by spending increasing amounts of time in the metaverse. Gradually, one’s entire life was spent in the metaverse with robots harvesting sperm/eggs to create biological offspring who exist with their parents in large vats but live their experiential lives in a digital universe.

In both versions, the members of the society report high levels of well-being.

Are we to believe that the key ethical difference between these two civilizations is based entirely on which can support more people? Does anyone actually believe this?

As a society, we pick what kind of people we value. And if we can keep creating those people, then surely MacAskill is right that every additional life brings value. The more the merrier. But why, regardless of how content they are, should we be committed to creating a society of people we don’t like?

There is no rational imperative here. Rationality cannot tell us what to value. It cannot impel us to believe that the wellbeing of sentient creatures is all that matters. It cannot impel us to believe that for ourselves (we all routinely make decisions that belie this idea) or for anyone else. In fact, the only thing rationality can tell us is that there is no possibility of beings like us, who change with experience, ever having a single stable or predictable notion of wellbeing.

At every level and in every way, longtermism fails to account for transformative experience and the reality that people can and do change what they value and what their idea of wellbeing is. It fails to account for it at the global, historical level with population ethics, and it fails, even more dramatically, to account for it at the personal level.

Consider MacAskill drawing on his youth to explain why we should pay attention to the future.

“I was reckless as a teenager and sometimes went “buildering,” also known as urban climbing. Once, coming down from the roof of a hotel in Glasgow, I put my foot on a skylight and fell through. I caught myself at waist height, but the broken glass punctured my side. Luckily, it missed all internal organs. A little deeper, though, and my guts would have popped out violently, and I could easily have died…The risk of death I bore as a teenager and the intellectual influences that shaped my life mirror the two main ways in which we can impact the long-term future.”

Note that MacAskill assumes his intellectual influences changed his values, while buildering only risked his life. But that can’t be true. Surely buildering shaped MacAskill too – and possibly in ways every bit as fundamental as philosophy. It may be that falling through the roof was the catalyst for a way of thinking that has massively shaped his interest in longtermism. Experience changes us. Physical experience as well as intellectual experience. And because it is experience that shapes what we value, the idea that we can sacrifice experience and remain unchanged is simply wrong.

Sadly, the error works the other way around too. MacAskill, like any good longtermist, is an advocate of effective altruism. It would be churlish to blame Sam Bankman-Fried on effective altruism. Any creed and any idea may be the home of idiots and frauds. But it would also be wrong to assume that SBF was always a fraud or that the fundamental mistake of effective altruism is not writ large in his fall. If we ignore the possibility of personal change, then it may well be that the most effective path to making the world better is to make billions in crypto and donate big piles of money to the political causes you espouse.

MacAskill says as much and, in one of the least convincing parts of the book, argues for the potentially massive impact of voting, working on campaigns, and donating money to political causes. It’s an impact that all personal experience of political creatures belies. The people frantically engaged in politics are not people one is inclined to admire. As SBF discovered, what you do changes who you are. If you want to remain an idealist, you can’t become a crypto billionaire (or a political factionalist).

Live like the lotus, says Buddha, unsoiled by the world.

We live, of necessity, in a very muddy world. But if the advice is taken to mean that we can dive safely into the worst cesspools around us yet be protected by our moral purity, it is surely misconstrued.