A central concern for TW2BR is the impact of transformative choice on aspects of philosophy beyond rational choice and decision theory. Life is an endless series of decisions. For centuries, philosophers and economists theorized that we make these decisions in a straightforward and rational manner. We have preferences. We like or value some things more than others. And when we make choices, we pick the alternative that will deliver the best outcome based on our preferences: the outcome, in the language of economics, that maximizes preference satisfaction. Since this model of rational choice is preference-neutral, it can be used for any type of decision-making. Our values may be altruistic or honor-focused, hedonistic or duty-based. Whatever we value, it makes sense to optimize that value.

This is the way we’ve all been taught to make decisions, especially big, complex life decisions. Create a checklist. Weight the value of the various outcomes. Assess risk. Then choose whatever will likely result in the best outcome for you.

Yet particularly for those big life decisions, it’s wrong. Not wrong in the “people aren’t rational” way, but wrong in that it isn’t a coherent notion of rational choice. When we choose big life experiences, those experiences will change us: who we are, how we think, and most importantly, what we value. That’s a critical problem for a decision-making framework focused on optimizing our preferences. How can you choose which outcomes will deliver what you like best, when what you like best will change based on your choice?
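
To see the structural problem, here is a rough sketch in standard decision-theoretic notation (my framing of the point, assuming a simple expected-utility model rather than anything specific to Paul’s own formalism). Ordinary rational choice takes a fixed preference set $P$, with utility function $U_P$, and recommends

\[
a^{*} = \arg\max_{a \in A} \; \mathbb{E}\!\left[ U_{P}\big(\mathrm{outcome}(a)\big) \right].
\]

In a transformative choice, the preference set is itself a function of the act chosen, $P = P(a)$, so the quantity being compared becomes $\mathbb{E}\!\left[ U_{P(a)}\big(\mathrm{outcome}(a)\big) \right]$: each alternative is scored by a different yardstick, and there is no longer a single, fixed utility function against which the options can be ranked.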

Decisions whose outcomes involve significant personal change are called “transformative choices”; they form a distinct set of cases that cannot be handled by our existing model of rational choice. Enlistment. Choosing a college. Having a child. These experiences will fundamentally change who you are and what you value. If we can change what we value, then what we choose can shape who we are. That’s what makes these choices the most important decisions we ever face.

For a detailed examination of this issue, read “Mind the Gap,” which provides an overview of L.A. Paul’s work on transformative experience and some exploration of its immediate ramifications.

The idea of transformative experience is a huge challenge to utilitarian ways of thinking. In this article, those challenges are explored from the perspective of a personal ethics. Whatever the merits of a utilitarian public policy, what matters first and foremost to each of us is the question of what we ought to do. And the question is whether that answer can come from a utilitarian perspective when transformative experience is involved.

The basic maxim of utilitarianism is that the right decision in any circumstance is the one that maximizes the overall utility (preference satisfaction) of all concerned in the resulting outcome. This is often simplified as choosing the outcome that makes people happiest.

That’s a beautifully simple theory of moral decision-making. It’s intuitive, straightforward, and easy to understand. It also fits nicely with the model of preference optimization and practical reason. If practical reason is all about optimizing a single preference set when one person is involved, then utilitarianism is all about optimizing preference sets when two or more people are involved.
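
Put schematically (a rough sketch on my part, assuming each person’s preference satisfaction can be summarized by a utility function $U_i$), practical reason for a single decider recommends

\[
a^{*} = \arg\max_{a} \; U_{\mathrm{me}}(a),
\]

while the utilitarian rule sums over everyone affected:

\[
a^{*} = \arg\max_{a} \; \sum_{i=1}^{n} U_{i}(a).
\]

The only formal change is the summation over independent preference sets, which is exactly the leap the next paragraph examines.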

Utilitarianism starts with vocabulary and foundational elements that are already familiar from the model of practical reason and preference optimization. Preference sets are identical in conception. The evaluative procedure is identical in execution. The only difference is that a utilitarian decision is executed across multiple, independent preference sets. That may sound minor, but it’s a significant leap.

From the perspective of an individual decider making a preference optimization decision, a preference set is a given. What I want is…what I want. The accent is on the I. People value what they value, and the optimization problem is getting the most of what they, themselves, value. Nor does the certainty that we value other people and their concerns make any difference. While almost everyone does value other people, that doesn’t mean they value other people as much as themselves or that they value everyone else equally. Utilitarianism needs both those things to be true. For most people, there is no natural item in a preference set of the form: value whatever Person X values as Person X values it. So while our preferences are reasons for US to decide a certain way, it’s not at all obvious why someone else’s preference set is also a reason for us. The essential step in utilitarian morality is establishing that reason.

Treating utilitarian thought as a personal decision-making morality requires something more than a concept of fair shares. Moving to an impartial position is natural and intuitive in democratic public policy. A public policy decision-maker is often specifically tasked with finding some kind of impartial or aggregated point of view. Yet a personal decision-maker doesn’t have institutional stakeholders demanding equal status in his or her decisions. To get to a personal utilitarian ethos, an individual must create those stakeholders by granting them a status equal to one’s own preferences in preference deliberations. In actual practice, we almost never behave this way; it seems unlikely and even bizarre. To make it reasonable requires very strong claims about preference sets and about the nature and requirements of rationality.

Rationality demands that we have reasons for action. From a personal perspective, preference optimization provides a good reason for decision and action. Nobody doubts that ordering a strawberry ice-cream cone because you like strawberry is reasonable. When it comes to dealing with other people, however, it’s unclear just how far your personal preferences can take you. Your preference set is, presumably, best for you (or at least you think so). Person X’s preferences are, equally, best for them. When deciding about our actions, we think that the fact that a preference set is OURS is reason enough. Nobody orders strawberry ice cream because somebody else likes it better than chocolate. If our actions impact others, though, perhaps the mere fact that a preference set is ours isn’t enough to make it a good reason for action. We need a reason to prefer us getting what we want to Person X getting what they want. In everyday decision-making, we have that reason. We value ourselves more than Person X.

When it comes to reasons and rationality, though, not just any kind of reason will do. A reason can be no reason (“because”), a bad reason with no explanatory force (“you’re black”), or a good reason based on mistaken data (“you shouldn’t sail West for too long or you’ll fall off the edge of the Earth”). Figuring out what makes a reason good is incredibly complicated and hard to establish. But if we have a hard time determining what makes a reason good, we’ve been more successful in identifying some common ways that reasons go wrong. High on that list is personal bias. So, while it makes perfect sense to suggest that whatever preferences you satisfy for yourself should be based on your preference set, when it comes to whose preferences should get satisfied, the fact that you are the one deciding may not be a reason to pick your own. Utilitarians want to suggest that saying “because it’s about my life” is no different from saying “because I want it” when arguing over who gets a ball on the playground. Your preferences are not more important than anyone else’s.

This demand for impartiality between your interests and everyone else’s is a key feature of every utilitarian argument. And while it’s plausible at the public level, it’s a big mouthful for an individual decision-maker. The utilitarian has to claim that of all your preferences, the only one that doesn’t count is the most fundamental – the value you place on yourself.

Perhaps that’s why the only real-world utilitarians I’ve ever met are my wife and daughter, whose conversations often go like this:

W: Do you want me to help?

D: If you want to help

W: But do YOU want me to help?

D: If YOU want to help

W: But do YOU WANT me to help?

D: If YOU WANT….

Which pretty well sums up utilitarian decision-making in a world of self-altering people.

There is widespread recognition that a pure personal utilitarianism is extremely demanding. It usually gets softened with a more rule-based interpretation that admits that people are generally better at taking care of their own preferences. Even as an ideal, though, it requires people to accept at least three claims. First, that preference sets are equally valuable and in essence undifferentiated. There can be no good or bad preference sets, or utilitarian optimization would be stupid. There just are preference sets. Second, it must be the case that at least some other people’s preferences have an equal claim in your decision-making to your own preferences. How large this group is may vary from fellow citizens in an organized civil society to every creature that feels pleasure or pain in the known universe. And finally, it must be true that no other factors can trump preference satisfaction in decision-making.

Of the three, the essential comparability of preference sets looks like the most reasonable. One very popular way to establish the comparability of preference sets is to ground utility in happiness or pleasure. Happiness and pleasure are inherently comparable even if the things that produce them aren’t. One person’s happiness is, presumably, as good as another’s.

But grounding utility in happiness or pleasure comes with a stiff price. It requires that our ultimate, grounding goal is happiness/pleasure – something that isn’t psychologically compelling. It also leads to strange arguments about the superiority of certain kinds of happiness or pleasure to others. Mill suggests that people’s unwillingness to accept being less intelligent in return for happiness is evidence that there are different kinds of happiness. But it looks a lot more like evidence that happiness is not all we care about.

A way to discount preference differences without appeal to happiness or pleasure is to show that preference sets are inherently arbitrary – like choosing strawberry vs. chocolate. No one would suggest that their own liking for chocolate is somehow better than someone else’s liking for strawberry. What’s more, the way biology and experience drive preference sets makes it plausible to think that preference sets just happen to us. Plausible because, to a significant extent, it’s true. If preference sets just happen to people, then it’s easy to see why one might think all preference satisfaction is pretty much equivalent.

On the other hand, if there were reasons for having preferences and those reasons could be good or bad, then this first and easiest condition for the utilitarian argument would collapse. If you’ve reflected on your preferences and have reason to think them good, and you’ve similarly reflected on Person X’s preferences and have reason to think them bad, then you undeniably have a reason for thinking that it’s better to satisfy your preferences than Person X’s. And yes, you might have a reason to satisfy the preferences you think Person X should have, but given that this will give Person X no utility, it’s unclear how that becomes a case for utilitarianism.

Clearly, transformative decisions destroy the idea that preference sets are arbitrary. If we’ve chosen to become a certain kind of person because we value the skills and dispositions of that type, and we’ve selected experiences to facilitate that transformation, then our preference set is anything but arbitrary. And while there may be no objective standard that allows us to assert that one person’s preference set is better than another’s, there is an objective standard that allows us to assert that one person has worked harder to create that preference set and/or has created it with a process that more reliably produces good results. Perhaps that’s too soft a claim to be meaningful, but it’s not dissimilar to the difference in knowledge between a working scientist and an internet quack. We may not be certain that the scientist’s views are better or right, but we can be certain which of the two has worked harder and with more reliable processes to ascertain the truth of those views. Nor need we be embarrassed by this lack of objective status. The utilitarian claim lacks objective status too. The fact that we can’t prove one preference set is better than another doesn’t mean that we’ve proven they are objectively equal. If you’ve worked to establish a particular sort of preference set (and that’s what self-altering decisions do), then you undeniably have a reason to value that preference set over others.

The second requirement for utilitarian ethics is that other people’s preference sets should have equal status to your own in YOUR decision-making. Unlike the pressure created by self-altering decisions, there is nothing in our basic cognitive processes that makes this demand pressing. It’s unconvincing to suggest that this peculiar type of impartiality is a demand of rational agency. It’s clear that people can be rational without feeling this necessity at all. In fact, almost no one does feel this necessity. People universally feel that their own preferences have special status when it comes to making decisions about their own life. Our decisions are inextricably bound up with ourselves, our needs, and our values. The demand for impartiality doesn’t carry the force of a rational demand; indeed, it strains credulity. It’s hard to see why the value you place on yourself isn’t a good reason for action or decision when every other value you have is action-justifying.

Even if we believe that our preferences are entirely arbitrary, we reliably discount everyone else’s preference satisfaction. What’s more, we all discount other people’s preferences differently based on the value we set on other people. There is no formal constraint on rationality or reasons that invalidates this strategy, and there never will be. Rationality doesn’t work that way.

Cases where preferences are not arbitrary are even more difficult to parse. Is it possible to say something like “I value compassion and you value sadism, but rationally I should weigh your preference satisfaction as equivalent to mine”? Or, to take a less charged example, suppose you care a lot about astronomy but somebody else cares even more about oceans. Take all considerations of whether your passion will make you better at one or the other off the table. Is the fact that someone else cares more about oceanography than astronomy a reason for you? Why? You genuinely value astronomy more. Why, in virtue not of any argument or reason but simply because someone else values oceanography more, would this be a reason for you to pursue it?

When we value things…we value them. This doesn’t mean we can’t reflect on those values. Change our mind about them. Or even work to change who we are and what we value. It does mean that the idea of impartiality about value is confused. We value life more than rocks. We value humans more than fish. We value countrymen more than foreigners. We value friends more than countrymen. And we value kids more than friends. No impartial view can, should or will change any of these things because the values we have are the reasons we give to rationality. We gave up impartiality when we chose to value something.

The utilitarian argument is really nothing more than a way to engender a sense that one particular set of values is best. That’s fine. But no one should take seriously the claim that to reject that specific criterion for value is irrational.

Which brings us back to the third requirement for a personal ethics. The utilitarian argument requires that we think nothing is more important than maximizing utility. Though we don’t commonly accept this to be true, it’s difficult to refute. The model of practical reason and preference optimization is infinitely elastic. It can very reasonably bake other-directed concerns into a preference set, and even self-altering preferences can be shoehorned in. Since whatever we think is most important automatically becomes most important in our preference set, this third requirement can be reduced to tautology.

If it isn’t taken to be tautological, then this third assumption is just wrong. If self-altering (or other-directed) preferences aren’t considered part of the preference set, then there is no plausible path to preference satisfaction trumping all other decision rules. If they are baked in, then this third assumption is safe, but only at the cost of requiring the utilitarian to accept reasons for action that look very non-utilitarian.

Including other-directed preferences in utilitarian preference sets also creates some strange and dislikable outcomes. Given the existence of other-directed preferences in a group, a popular person will benefit doubly from utilitarian calculations: not only will their self-interested preferences count equally with everyone else’s, but their happiness will create more preference satisfaction in the group. This provides strong utilitarian grounds to steer extra benefits toward the most popular people and away from the least popular people in a group, since that distribution will maximize overall preference satisfaction. Possibly even stranger, the altruistic person who cares less about their own satisfaction will be particularly disserved, since satisfying their personal preferences will include helping others – meaning their altruism effectively works against their other preferences.
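
A toy calculation makes the double-counting visible (the numbers are invented purely for illustration): suppose person B gains half a unit of satisfaction whenever person A gains a unit, and no one else holds other-directed preferences. Then a transferable benefit worth one unit of direct satisfaction yields

\[
\Delta U_{\text{give to A}} \;=\; \underbrace{1}_{\text{A's gain}} \;+\; \underbrace{0.5}_{\text{B's concern for A}} \;=\; 1.5,
\qquad
\Delta U_{\text{give to C}} \;=\; 1.
\]

A strict maximizer therefore steers the benefit toward the well-liked A, which is precisely the dynamic described above.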

While neither of these outcomes is contradictory (and both are readily observable in certain group dynamics), it’s hard to make the case that they are productive of a healthy culture.

Regardless of how you feel about the force of the utilitarian move from self-interest to impartial preference optimization, the cognitive account encapsulated in adaptive learning and preference change is devastating to the first utilitarian assumption – that preferences just happen and that we should think about them as equivalent across all people.

In a world where many of the most important decisions you make are self-altering and transformational, the utilitarian focus on optimizing preference satisfaction doesn’t make a lot of sense. Transformative decisions force us to think about what kind of person to be and what kind of person to value. They force us to think about what kind of preference set to have, and once we’ve done that reflection, it’s impossible to retract it.