From an infinitude of micro-decisions to the biggest life-shaping decisions, we face a never-ending parade of choices. There’s a powerful, flexible, extraordinarily robust theory of rational decision-making that explains how most of these decisions get made. We have a preference set and we make constrained choices to optimize the expected value of the outcome.
Part of what makes this model of decision-making powerful is its neutrality with respect to what the decision is about or what people should value. It says – and is determined to say – nothing about how people get the preferences they use to assign value to outcomes. The model works from a set of preferences to a decision. In many ways, that makes what we value far more important than how well we decide.
But if preferences are a given, where do they come from? Biology, genetics, culture, and micro-culture provide the foundational sources for our preferences. A foundation, though, is not a finished structure. That structure is finished by experience. Our experience in the world builds on the foundation of biology and within the framework of culture to provide the enormously rich variation in preference sets that we observe in people. What we value is shaped by the experience we have, because the experiences we have change who we are and how we think. That’s what it means to be an adaptive learner.
This process of personal change challenges the way we think about decisions. L.A. Paul’s 2014 book Transformative Experience first explored the problem in decision-theoretic terms. If an experience is likely to change how a person thinks and what they value, then on what basis can a decision-maker choose it? We can’t possibly know whether or how we’ll value the outcomes once we’ve had the experience. We only know what we value right now.
Many of the biggest decisions we make in life are clearly and undeniably transformative: enlistment, choosing a college, or having a child. These experiences will fundamentally change who you are and what you value. And when an experience is transformative, it’s hard to see how the standard model of rationality can work. That’s a big and surprising idea. The model of rational decision-making is ubiquitous and deeply embedded in our approach to the world. That it cannot usefully be applied to our biggest and most important decisions is shocking.
The Standard Model of Rational Decision-Making
A decision is always a choice between two or more actions. Eating or skipping a meal. Going to a restaurant vs. eating in. Choosing one of the thirty-seven restaurants within easy driving distance. Ordering the burger, the big salad, or the pancakes. At every step, we’re thinking about what the outcome of each action will be like for us. Whether we’ll be starving if we skip a meal. Is it too much trouble to cook? What kind of food do we feel like? How was the burger last time we had it? How much will we have to pay, and does that mean we can’t get a boba tomorrow for a snack?
Given this kind of decision, we’ll choose the course of action that seems likely to deliver the best outcome for us based on what we like, what we think will happen, and what everything will cost in terms of time and money. Most of the time, we don’t go through a formal evaluation of factors, probabilities, and expectations. But even in small decisions like whether to eat out or which restaurant to eat at, we are engaged in this kind of evaluative exercise even if we’re simplifying it by using rough-and-ready rules (like finding the first alternative that satisfies your main decision criteria).
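To make that evaluative exercise concrete, here is a minimal sketch of the expected-value calculation the standard model describes. The options, probabilities, and values are invented for illustration:

```python
# A minimal sketch of the standard model: score each action by the
# probability-weighted value of its outcomes and pick the best.
# All options and numbers here are illustrative, not from the text.

options = {
    "cook at home": [
        # (probability, subjective value of the outcome)
        (0.8, 6.0),   # dinner turns out fine
        (0.2, 2.0),   # too much trouble, mediocre result
    ],
    "burger place": [
        (0.7, 8.0),   # burger is as good as last time
        (0.3, 3.0),   # off night
    ],
    "skip the meal": [
        (1.0, 1.0),   # starving later
    ],
}

def expected_value(outcomes):
    return sum(p * value for p, value in outcomes)

best = max(options, key=lambda o: expected_value(options[o]))
print(best)  # the action with the highest expected value
```

Nobody runs this arithmetic at the dinner table, of course; the point is that satisficing rules approximate it, and the model only works because the value column is assumed to be knowable.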
When you think about a decision, you are considering alternatives, evaluating likely outcomes, and choosing what seems best given what you like. This just makes sense. It’s what we mean when we think about decision-making as good or bad, rational or irrational.
A “rational” or “good” decision doesn’t necessarily work out. Your favorite restaurant might have an off-night. You might pass on the lottery ticket that was the winner. Or you might have bought it – in which case nobody cares that it was a bad decision! Still, casinos and lotteries exist for a reason. The world is not random, and in any non-random world where some knowledge of outcomes and probabilities can be generated, you’ll likely do better by being as “rational” as possible.
Admittedly, this notion of “better” is specific to you. Rational decision-making is neutral with respect to any comparative assessment of value, so someone else’s rationality may not be good for us. If Ted’s most desired outcome is to cause you pain, the less rational he is, the better. We only have reason to care about other people’s rationality if we happen to endorse their preferences. Still, every decision-maker has at least a personal stake in being rational.
To be rational, though, we must be able to tie action to outcome, decently assess the probabilities of each outcome, and assign some personal value to the outcome. Tying action to outcome is usually possible. Assessing the probability of possible outcomes for an action can be difficult, and it’s one of the most common and well-canvassed failure points in rational decision-making. Yet though we are often poor at assessing probability, this is a practical problem, not a fundamental limitation. With more facts, more work, and more careful thinking, we can make better assessments of probability. So, when it comes to probability assessments, there is usually a path toward more rational decision-making. Putting a personal value on the outcome seems like a slam dunk. There is no wrong answer…just avoid Galahad’s mistake when picking his favorite color in Monty Python and the Holy Grail (“Blue. No. YeLLOOOOOOOOOOOOW”).
But even though assessing the value of the outcome is entirely up to us, that’s the part of rational decision-making that turns out to be a problem.
Assessing Outcomes and Epistemic Newness
How do we know what we value? Put baldly, this sounds like an odd question. But while we may be born into desire, we are not born into the knowledge of what best satisfies it. Knowing hunger is different than knowing what foods you like. When Tigger first appears in A.A. Milne’s The House at Pooh Corner, the animals try to figure out what to feed him. Pooh, of course, suggests honey. “Tiggers love honey,” exclaims an enthusiastic Tigger. Until he tries honey. Ditto with Piglet’s haycorns and Eeyore’s thistles.
We have three sources of information when it comes to evaluating what an experience will be like. The first, most basic and almost always the most reliable, is personal experience. Once Tigger tried honey, he knew he didn’t like it. Of course, no single experience is entirely replicable. You might like the Wildflower Honey from a Sonoma Farm but be less enthralled with the manufactured-in-China glop called honey you purchased at the local supermarket. Even that farm honey might taste different after a year on the shelf, after a big swallow of beer, or when you’re in a sour mood. Nevertheless, we constantly deploy a kind of induction and a concept of similarity to extend the range and applicability of past experiences to assess all sorts of new – even fundamentally new – outcomes.
Because the notion of similarity is vague, extensible and can be focused on many aspects of an experience, people can make plausible assessments of outcomes even for experiences that are quite different. There’s little similarity between white-water rafting and hiking but to the outdoorsy person or the determinedly cosmopolitan they may be similar enough to provide a basis for assessment. Likewise, to the thrill-seeker, white-water rafting and going on a spaceflight might end up being similar enough to aid in assessing outcomes. That doesn’t make similarity infinitely extensible – at least with any psychological plausibility. There must, for a decision-maker, be some important factor that can be judged similar enough to a past experience to be meaningful. If there isn’t, then personal experience won’t cut it.
For Tigger, who doesn’t know what honey is and has never eaten anything like it, personal experience provides no guidance. As we see, however, Tigger – like each of us – is not thereby bereft of information. We often rely on the opinions of others to make reasonable value assessments. If Pooh likes honey, perhaps Tigger will too. If Pooh, Roo, Eeyore and Piglet all like honey, there’s an even better chance Tigger will enjoy it. Social assessments are not as reliable as personal history since our reactions are individual and distinct. But when an activity is overwhelmingly popular or disliked, or when you have good grounds for thinking that the person giving the assessment has similar reactions to your own, social information can easily sustain rational assessments of the value of the outcome. Most bears like honey. Few parents need worry that their five-year-old will dislike Disneyland. And if you find that A.O. Scott’s taste in movies is very similar to your own, then a glowing review by Scott is strong positive evidence for seeing the film.
Finally, we can sometimes assess an experience with our plain knowledge of what things are like in the world. In Transformative Experience, Paul gives the example of being eaten by sharks. It’s hard to know what getting eaten by a great white would be like, but not hard to know that you don’t want it. This kind of knowledge isn’t generally useful for choosing among alternatives – it’s more likely to set boundaries on what’s in a choice set at all.
There are rare cases when none of these three sources provide useful information. Tigger has, it seems, never eaten before. He has no relevant personal experience. And while Pooh likes honey, Piglet and Eeyore don’t. There is no social consensus. There are no other tiggers about. And there is certainly nothing in our plain knowledge of the world that will help Tigger decide what eating honey will be like for him.
As we accumulate a rich store of experiences, however, this kind of epistemic newness becomes quite rare. You have to work hard to imagine cases where a normal person has no relevant personal experiences, no useful social assessments and no plain knowledge markers. But such cases do exist and aren’t necessarily that esoteric.
Consider the durian.[1] The durian is a notoriously controversial fruit with a foul smell but a taste that some people find quite appealing. If you’re deciding whether to try a durian or instead have some dish you know you will like, assigning a subjective value to the taste experience is nearly impossible. Many traditional markers that we use for comparability (liking fruit, having a sophisticated palate, etc.) don’t apply to the durian. You may love it or hate it, and it’s impossible to know which will obtain. Hallucinogenic drugs provide a somewhat similar dilemma. It’s very hard to know what an experience with LSD would be like, and the testimony of those who have tried it varies wildly and unpredictably.
Obviously, experiences live on a continuum of assessment. There are things we have done many times before, and others that we have done once or occasionally. Some experiences are very similar to ones we have personal experience of, and almost every experience has at least some relevant social markers. For most of this continuum, we think rational decision-making is possible. But at the far end, where nearly total ignorance lives, it may be impossible to assign a personal value to the outcome. That means one of the three conditions for rational choice doesn’t obtain.
Fortunately, at least when it comes to low stakes decisions like trying a durian, there are other ways to think about the problem. Given that we rely on past experience and similarity to project future outcomes, there is value in having new experiences. Theories of reinforcement learning often set aside a certain percentage of choices designed specifically to explore a choice set. Not only can this exploration discover strong preferences (I love durian!), it can provide more grist for the experience mill – allowing us to make better subsequent decisions. Even if we never intend to eat durian again, we may simply want to have the experience of trying durian (I tried a durian!).
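A sketch of that exploration idea, in the spirit of a simple epsilon-greedy strategy from reinforcement learning (the dishes and ratings are invented for illustration):

```python
import random

# Epsilon-greedy exploration: exploit the best-known option most of
# the time, but reserve a fixed fraction of choices (epsilon) for
# trying something unexplored.

EPSILON = 0.1  # fraction of choices set aside for exploration

known_ratings = {"pad thai": 8.2, "burger": 7.5, "big salad": 6.0}
untried = ["durian"]

def choose_dish():
    if untried and random.random() < EPSILON:
        return untried.pop()  # explore: maybe discover "I love durian!"
    # exploit: the highest-rated dish from past experience
    return max(known_ratings, key=known_ratings.get)

dish = choose_dish()
# After eating, record the experience so later decisions improve:
# known_ratings[dish] = observed_rating
```

The strategy is rational precisely because the decision is cheap and repeatable: a bad draw costs one meal and buys information for hundreds of future choices.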
With bigger decisions – especially ones that involve transformation – the idea of exploring a choice set is much less fruitful. We’re usually not going to try a transformative experience again, and even if we do, we’ll be a fundamentally different person. Nobody has a baby to see what having a baby is like. With transformative decisions, you aren’t exploring a choice-set, you’re making a one-time, very important, largely irrevocable decision.
Of course, big life decisions almost always present us with some basis (relevant past experiences or social evaluations from relevantly similar people) for assessing what the outcomes will be like for us. These assessments may not be highly reliable, but they give us at least a reasonable basis for assessing which outcomes we’ll like best.
At least they would, were it not for another, bigger problem, inherent in the very idea of transformative experience.
Personal Change
Our brains are nothing like a computer running an algorithmic program. They are learning machines whose connection structure is changed by experience. Most experiences will change us in tiny, almost imperceptible increments. But when a decision involves either a hugely impactful experience (combat), or a whole set of experiences that tend to work in similar directions (basic training), the end result may be a substantial change in the way you think – including what you value. That kind of experience is what L.A. Paul called a transformative experience. The “you” on the other side of the experience has a very different set of preferences and values than before the experience.
Transformative experiences don’t happen every day, but they aren’t rare. Combat, being mugged, nursing a baby, getting cancer, falling in love, winning your first case, having a religious conversion – life is filled with big experiences that fundamentally change who we are. Many of the biggest and most important decisions we make about our life commit us to these kinds of experiences. Enlisting in the military. Going to college. Choosing a college. Choosing a career. Having a serious relationship. Getting married. Having children. The decisions we care about most, think about most, and that make the most difference in a life are all going to create experiences that are transformative.
Experience selection – what we choose to experience – changes who we are. And that’s not the only significant fact about this kind of decision to be taken from cognitive science. Our own thought is opaque to us, and we cannot effectively imagine what an experience we haven’t had is like. Together, these make it nearly impossible for us to assess what a transformed version of us will feel or value.
To see how crippled we are in this respect, think about having a baby. No big decision is better canvassed by our society. We have countless books, movies, support groups and personal examples from which to work. None of it helps.
From a cognitive perspective, having a child triggers large-scale changes in our embodied brains. These triggers aren’t replicable, and there is no way in advance that we can understand how WE will change or what those biological changes will make us feel. For the mother (and even the father), the experience of having the child and the nine months of worry, anticipation, education, and sharing are equally transformative. This time can’t be simulated. It will change a relationship, and it will change each partner in the relationship. The biological and lived experience of pregnancy is profound, and it builds a unique and intense emotional investment in the child. You will fall in love with your child, and it’s a kind of love very different than any romantic experience you’ve ever had. What it is like cannot be understood in advance because what you will be like cannot be understood in advance.
This ignorance is unavoidable despite having every conceivable bit of information about the experience. No matter how much a prospective parent reads about having a child, no matter how much time they spend with their nephew or niece, and no matter how many stories they hear from their parents or friends, it is impossible to imagine what being a parent will be like. Those subjective outcomes are a closed book.
You can assess every single specific experience involved in having and raising a child, from the painful to the disgusting to the enraptured. And you can imagine what each of those moments will be like and what value they will have. But you can only imagine those things based on who you are right now, and that’s not what it will be like. If you are wise like Socrates, the only thing you’ll know for sure is that you really don’t know anything.
This isn’t a problem of inexperience either. You can have a baby throw up on you or cry in your face or smile at you. You can have the specific experiences you’re trying to imagine. It won’t help. The problem isn’t that the experience is new, it’s that those experiences will happen to a new person. Having a baby changes who you are, what you care about and how you think. Our cognitive architecture ensures that this must be the case, and every reasonable observer of human behavior over time must see that it is so.
Transformative Choices
If you understood exactly what the experience was going to be like, you might be able to more effectively imagine how it would change you. This might allow you to better predict what your future preferences will be like. Surprisingly, though, it isn’t clear whether even perfect knowledge of the experience and how it will change you helps very much.
Because if you can predict what your future preferences would be, it’s unclear what the rational way to account for those preferences is. Perhaps you think that future you will be just fine with baby barf though to current you it seems disgusting. Does that mean that you should ignore your current disgust (or discount it radically) based on the assumption that your preferences will change? But if so, what will you say to your future addict self who just wants another hit? Nor does the problem go away even if we banish cases where some future version of ourselves is morally repugnant.
Consider two versions of your future self, one of which goes to MIT and becomes a successful software engineer. The other goes to Princeton and becomes a successful attorney. There’s a good chance that MIT-you will have a strong preference for engineering and perhaps will consider corporate law a parasitic tumor on civil society. The Princeton-you may well prefer legal or government work and consider software engineering a form of electronic plumbing.
In this situation, you have every reason to expect that whichever path you take, your preferences will change in ways that make the resulting experiences satisfactory. Given two fundamentally different sets of preferences each of which is well-satisfied by the life that created them, how can maximizing outcomes guide a choice? Whose outcomes are maximized?
In fact, once preference change is baked into decision-making, the whole business of assessing outcomes begins to feel like traversing a house of mirrors. Suppose – and this is far from unlikely – the outcomes for MIT-you and Princeton-you are reversed. Suppose MIT-you will find engineering a mechanistic trap and much prefer thinking about law or government. And Princeton-you will find law vapid and stultifying and long to build something useful. Now the resulting preference sets from each choice are such that you’d prefer to have made the other choice!
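The circularity can be stated compactly. In this sketch (the utilities are invented), each path produces a different future self, and each self scores both lives by its own post-transformation preferences:

```python
# value[judge][life]: how each post-transformation self values each life.
# Illustrative numbers only.

# The self-satisfied case: each path looks best by the preferences
# it creates.
value = {
    "MIT-you":       {"engineering": 9, "law": 3},
    "Princeton-you": {"engineering": 3, "law": 9},
}
assert value["MIT-you"]["engineering"] > value["MIT-you"]["law"]
assert value["Princeton-you"]["law"] > value["Princeton-you"]["engineering"]

# The reversed case: each resulting self prefers the road not taken.
reversed_value = {
    "MIT-you":       {"engineering": 3, "law": 9},
    "Princeton-you": {"engineering": 9, "law": 3},
}

# Either way, there is no judge-independent row to maximize over:
# "expected value to you" is undefined when "you" varies with the choice.
```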
It doesn’t end there.
Since a transformational choice plays out over time and changes who you are, it leaves you in a fundamentally different spot than any alternative. You can’t take it back. You can’t get back to who or where you were. Not only are the prospects and horizons of MIT-you different than Princeton-you, but now those future versions of you will be making transformational choices too. And those transformational choices will be bounded by the differing selves of MIT-you and Princeton-you.
A complete assessment of outcomes would force you to consider what choices the M and P versions of you might make. Perhaps M-You will be able to make a new transformational choice that will create far better outcomes than anything available to P-You. Or vice versa. But maybe not. And what if the transformational choices open to that much better off successor, M-You2, are somehow much worse than the options then open to P-You2? And what does the comparison even mean? M-You’s perception of M-You2’s outcomes won’t be yours, so the whole idea of “much better” doesn’t make any sense.
The game is absurd.
There’s no reason to favor current preferences that you know will change before they come into play. Yet in the very common situation where you are choosing between multiple different preference sets, there is no fixed frame of reference. Paul touches on this problem in her book, but often seems to assume that transformational choice is between two alternatives – a status quo and a transformation. That kind of binary works well enough for describing decisions like having a child, but it falls apart in more complex, yet very common, situations.
Decisions like enlistment may look binary, but they really aren’t. A typical enlistment decision is not one of status quo vs. transformation but of enlistment vs. college or enlistment vs. getting a job. For a freshly minted high-school grad, every option is (thank god!) transformative. The decision can’t be framed as one between current self and some alternative transformed self. It’s a decision between different types of transformation.
Ditto for what college to attend. What degree to pursue and what career might follow. What companies to apply to. Where to live. Who to marry. What hobby to pursue. What friends to hang out with. Transformative choices do not come in simple binaries. Even the decision to do nothing may be transformative. To cast transformative choice in binary terms is to mistake the nature of the problem and make the problem seem more tractable than it is.
There’s a second respect in which Paul significantly underestimates the problem of transformative choice. Paul assumes that while our preferences will be changed by transformational experience, our rational faculties are stable.
If the standard model of rationality demands stable preference sets (and falls apart when it doesn’t have them), it also demands a stable evaluative process. Traditional views of human thinking have assumed that people enjoy a “rational” decision function that is universal, stable, and reliable. Since it’s obvious that actual decision-making is none of these things, this rational decider is assumed to be counter-balanced by appetitive and emotional forces with which it is constantly at war.
None of this makes much sense in terms of modern neuroscience. Our brains come with a significant amount of built-in high-level structure. In every human, a part of the brain is set aside for acquiring a language. But that capability needs to be trained and developed. People become French or English or Mandarin speakers – and along the way those reserved functional areas will be structured differently by the experience of learning. Inside those high-level structures, almost every aspect of practical reasoning is unique and learned. From the imaginative exercise of creating alternatives, to the calculation of probabilities, to degrees of risk aversion, to the associations that drive preference – not only are our preferences different, but the core evaluative functions we use in practical reasoning are themselves learned and can change.
In almost every respect, we learn how to think, and experience may teach each of us very differently. Though we all enjoy a similar level of cognitive hardware (untrained connection systems), it’s the way that cognitive hardware gets trained and used that determines almost everything about our thinking capabilities.
The impact of change on evaluative functions (the way you think) has the same decision-theoretic consequences as change in preference sets (what you value). If the way you make decisions changes, preference selection will change even without any change in the perceived value of outcomes. Just as a new parent may find that many of their old preferences have faded away, a soldier or student may find that the way they generate possible options, the constraints they place on them, the way they assign probabilities, their tolerance for risk, and their imaginative capabilities may all undergo dramatic transformation.
As with all transformative choices, it is quite useless to try and imagine what it might be like to think differently about probabilities or have a significantly different risk function.
Suppose you enjoy whitewater rafting a great deal and like to do it often. You know there’s a certain amount of risk involved, especially in the extreme Class 4 and 5 rivers you like best. But the joy and excitement of the activity significantly outweigh the risk, so it’s something you often choose to do. After you have a baby, however, your risk evaluation function changes, making you significantly more risk averse. You still love whitewater rafting, and your perception of the actual risk hasn’t changed at all. However, with a different risk function, you’re less willing to raft and more likely to choose some other activity like hiking. In this example, neither your enjoyment of the activity nor your assessment of the risk has changed. Only your evaluative function of acceptable risk has changed. The result is a change in preference ordering. This kind of alteration is part-and-parcel of transformative experience.
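A minimal sketch of that flip, with invented numbers: enjoyment and the risk estimate are held fixed, and only a risk-aversion coefficient changes.

```python
# Enjoyment and probability-of-harm estimates stay constant; only the
# risk-aversion coefficient changes after the baby. Numbers invented.

def score(enjoyment, p_harm, harm_cost, risk_aversion):
    # Higher risk_aversion weights the same downside more heavily.
    return enjoyment - risk_aversion * p_harm * harm_cost

rafting = dict(enjoyment=9.0, p_harm=0.02, harm_cost=100.0)
hiking  = dict(enjoyment=6.0, p_harm=0.001, harm_cost=100.0)

for label, risk_aversion in [("before baby", 1.0), ("after baby", 3.0)]:
    r = score(**rafting, risk_aversion=risk_aversion)
    h = score(**hiking, risk_aversion=risk_aversion)
    print(label, "->", "rafting" if r > h else "hiking")

# before baby -> rafting   (9 - 1*0.02*100 = 7.0  vs  6 - 0.1 = 5.9)
# after baby  -> hiking    (9 - 3*0.02*100 = 3.0  vs  6 - 0.3 = 5.7)
```

Nothing in the rows for rafting or hiking changed; only the evaluative function did, and the preference ordering reversed.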
It’s hard enough to imagine ourselves with different preferences. We really are extraordinarily poor at making these imaginative leaps, and it is virtually impossible for us to assess and compare the outcomes in a rational deliberation. But if imagining ourselves with different preferences is extraordinarily hard, understanding what we would do with a different intellectual toolkit is just impossible. If we could do this, we could just think that way. And that’s not the way it works.
How can we possibly choose between two actions that not only change our preferences, but change the intellectual tools that we’ll use to form and satisfy those preferences? There is no way to apply optimization to this kind of problem since the standard model of rationality requires stability in both value and functional evaluation.
We have a way to think about most decisions. A way that makes sense, gives us a standard to live up to, and that can guide us through a process of decision. Except that process doesn’t work when it comes to any transformative experience, and choices about transformative experience are the most important decisions in a life. So transformative experience leaves us with a gap, and that gap is a real problem.
Patching the Standard Model: Choice in Uncertainty
The standard model of rationality isn’t just a big deal in economics or philosophy. It’s become an integral part of how we think about choice. It’s the way we think about decisions – making checklists, assigning scores, picking what seems best. It’s also an extraordinarily flexible model. It makes sense. It works in all sorts of situations. It’s surprising and almost hard to believe that it doesn’t apply to an important class of decision-making.
It’s more than fair, then, to ask whether it’s possible to find a way to accommodate transformative choice in the standard model of rational decision-making. In Transformative Experience, Paul looks at several approaches that might work, the most obvious of which is taking advantage of existing theory around decision with uncertainty. After all, every decision we make involves uncertainty. We don’t spend many decision cycles worrying about whether the sun will rise tomorrow, but it might not. Or we might not. That’s why a dollar today is always worth a little more than a dollar tomorrow. Every decision has some level of uncertainty, and many decisions are made in situations where there is a lot of uncertainty around the outcomes. If that uncertainty is handled by the standard model, why is personal transformation any different? Isn’t it just a case of high uncertainty about the outcomes?
It isn’t. In the standard model, uncertainty is about the probability of outcomes, not about preferences. We are always in situations where we are uncertain what outcomes an action will produce, but the model assumes that, given an outcome, we know what we’ll prefer. Yet that’s precisely what’s at issue here.
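In symbols – a standard textbook formulation, not Paul’s notation – the model scores an action a like this, and transformative choice breaks the assumption that the utility function is held fixed:

```latex
% Standard model: the utility function U is fixed across actions.
\mathrm{EV}(a) \;=\; \sum_{o} P(o \mid a)\, U(o)

% Transformative choice: the action changes the evaluator, so the
% utility function is indexed by the action itself.
\mathrm{EV}(a) \;=\; \sum_{o} P(o \mid a)\, U_{a}(o)
```

Once utility is indexed by the action, the two expected values are computed on different scales, and “maximize expected value” no longer picks out a well-defined quantity.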
When you don’t know how your preferences will change, no amount of probability assessment about outcomes will help. One way to highlight how different the two problems are is to consider a rational strategy for addressing uncertainty. If we aren’t sure how likely outcomes are, the solution is to learn more about the situation. If we’re uncertain about whether to vacation in Paris or London, the natural action is to find out more about each. What will the weather be like? What attractions might you visit? How interesting do they seem? How good is the food? How expensive is the trip? What kind of hotel can you get?
The more you learn about the experience, the better you can assign probabilities to the various factors and the better sense you’ll have of which spot is better for you. This strategy of learning more works in almost every case of uncertainty about outcomes. The knowledge may be easy or hard to come by, it may not always be worth the effort to attain, but it’s always a reasonable strategy for tackling uncertainty.
But it’s not a good strategy when transformative choice is involved. When it comes to personal transformation, the outcomes can be known with extraordinary accuracy. As we’ve seen, most people know exactly what having a child will involve. The social sciences have studied almost every conceivable aspect of the outcomes – from expressed happiness to financial impacts to civic and social investment. Fuller, more complete information could hardly be imagined and yet, because the choice involves personal transformation, none of that information helps very much.
Nor do any of the short-cut strategies that rational decision-makers often use to make decisions under uncertainty help. Picking the least risky alternative. Picking the first alternative that satisfies your most important outcomes. None of these strategies work. Risk assessment is dependent both on outcome assessment and evaluative function. What you consider your most important outcome may change.
The standard model just can’t handle this unique kind of ignorance. There is no additional knowledge (prior to having the experience) that could remove our ignorance. And none of the traditional strategies for choice within ignorance will help solve the problem.
Patching the Standard Model: Abandoning Subjectivity
The problem that transformational choice presents to the standard model of rationality is the seeming impossibility of knowing how you will feel about different outcomes. If that really is impossible, then perhaps a rational decision-maker should abandon any attempt to figure out how they might feel about something and look for an objective, third-person answer about how other people feel about the experience. When it comes to having a child, why not look at whether having a child makes other people happy? If we know that most people who have children are happy about it, then that objective information can be used to make a rational optimization choice. Odds are, we’ll be happy about it too.
There are a lot of problems with this approach. First, for any real decision, it’s implausible to think that social science will yield a definitive answer about happiness. If having children made everyone happier (or less happy), we’d probably have figured that out by now. So in any real-world decision, the answer is always going to be probabilistic. That seems fine; we deal in probabilities all the time. But if the research suggests that people who choose not to have children are, on average, slightly happier, then it would seem that the rational choice for everyone would be not to have children!
That can’t be right.
Of course, good social scientists don’t deal in universals any more than do good decision-makers. There must be some markers in the data that would identify a population subset more like us – and whose resultant happiness would be more predictive. Applying segmentation removes the unfortunate universality of the solution and, if the segmentation is fine-grained enough, it might help people figure out whether having a child is likely to make them happy.
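A sketch of what segmentation amounts to in practice – conditioning the outcome data on markers that match the decision-maker. The fields, records, and numbers are all invented:

```python
# Condition self-reported happiness on markers that match the
# decision-maker, then compare the segment averages by choice made.

respondents = [
    {"income": "mid", "career": "teacher", "had_child": True,  "happy": 7},
    {"income": "mid", "career": "teacher", "had_child": False, "happy": 6},
    {"income": "high", "career": "lawyer", "had_child": True,  "happy": 4},
    # ... a real dataset would need thousands of rows per segment
]

def segment_average(data, markers, had_child):
    matched = [
        r["happy"]
        for r in data
        if all(r[k] == v for k, v in markers.items())
        and r["had_child"] == had_child
    ]
    return sum(matched) / len(matched) if matched else None

me = {"income": "mid", "career": "teacher"}
with_child = segment_average(respondents, me, had_child=True)
without_child = segment_average(respondents, me, had_child=False)
```

The finer the segmentation, the more predictive it can be – and the thinner the data gets. And, as the next paragraph argues, nothing guarantees the best markers will have anything to do with your preferences.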
If those markers are unrelated to our own preferences, however, using objective data still feels problematic. Suppose that really wanting a child is not correlated with being happy after having a child. Or vice versa – that hating the idea of being a parent is not strongly correlated with happiness after choosing to remain childless. Perhaps the relevant correlates are income level, specific career choice, and type of college degree. Does it make sense to ignore your own preferences in this kind of life decision?
Paul argues that making even well-supported and highly segmented decisions based on objective data is problematic – raising issues of authenticity. “If you ignore your own inclinations and choose to become a doctor because society values doctors, because it’s a job that involves helping people, because your parents wanted you to become a doctor…intuitively, it feels like you are making a mistake.”
But though most of us have a strong disinclination to this kind of decision-making, that doesn’t prove it’s wrong. You may choose not to become a doctor because you hate the sight of blood (personal subjective inclination). But perhaps a few days in medical school would completely remove this arbitrary preference and you’d find that being a doctor suits you very well. Objective evidence that most people who became doctors despite a strong initial revulsion to blood ended up satisfied with their career choice is an important data point. The same might be true even for less arbitrary initial preferences.
We rarely have sufficient social science data to make good objective decisions, but if the available data is sufficiently detailed and compelling, perhaps concerns about authenticity shouldn’t dominate. And unlike the case of uncertainty and ignorance, this seems to suggest that there is a path to making better decisions – get more detailed, fine-grained objective data about other people’s initial preferences and satisfaction with outcomes.
Unfortunately, this too has problems.
Social science data about outcomes cannot use objective standards of measurement like wealth or status or accomplishment (to whatever degree accomplishment can be objectively measured). Those objective standards don’t necessarily correlate to either initial or final outcome preferences or to psychological states like happiness. It may be that having a child tends to reduce your net wealth, and in your initial preference set that’s very important. But having a child may re-shape preferences in ways that make family more important than wealth. No objective measure of outcomes is meaningful in assessing the subjective value to the people in question.
So the only means of measuring outcomes will be self-reported subjective value – asking people about their level of happiness or satisfaction. This is interesting but seriously problematic data. If a transformative experience tended to make people less honest, more vain, less ambitious or any of a host of other qualities related to the reporting of happiness/satisfaction but not necessarily to the thing itself, then we’d be chasing will-o’-the-wisps.
Even more fundamental to the question, though, is whether, even if it were 100% guaranteed that making a choice would make us objectively happier, we should do it.
It may seem like that’s the definition of rationality we’ve been working with, but it isn’t. The definition of rationality that we’ve been working with is one that maximizes our preference outcomes. This may certainly involve happiness or pleasure, but there is no reason to think it must or even mostly does. There is nothing in the standard model of rationality that insists that our preferences must be tied to any psychological state. Not pleasure. Not happiness.
And as a matter of observable fact, neither pleasure nor happiness are good stand-ins for preference satisfaction or value. As humans, we regularly make decisions that optimize to values other than happiness or pleasure. That we do this seems indisputable and that we do it quite intentionally is equally inarguable. Most people accept happiness and pleasure as goods. But most people also accept making the world better, creating great art, securing the safety of our country, and meeting our obligations and commitments as goods too. Neither happiness nor pleasure are the inevitable byproducts of any of these things – and certainly not to the degree we value them. There is no satisfactory reason to be a monist when it comes to value.
You can’t reduce preference satisfaction to pleasure or happiness, and the fact of personal transformation changes our understanding of outcomes in ways that we are not used to dealing with. The fact that an experience may change us and that different experiences may change us differently alters the way we should and do think about future preferences and their satisfaction.
To see why this is so, consider some examples. Suppose that people who choose not to have children almost universally report higher levels of happiness or life satisfaction than people who have children. It might seem rational to use this as objective evidence that you should remain childless. This is especially true if you discover that the correlation is particularly strong for people who matched you in terms of stated preferences and key variables prior to having children.
Now suppose that you go and interview 100 people who strongly matched you in initial (pre-child) preferences and life circumstances. Fifty of them chose not to have children and fifty chose to have children. Of those, nearly all the ones who have not had children report higher levels of happiness and satisfaction.
The objective evidence seems overwhelming and is, in fact, convincing. If you want to maximize your chance of happiness, you should not have children.
Now add one fact into the mix. You disliked every one of the 50 people you interviewed who chose not to have children and you liked most of the people you interviewed who had children.
If you think happiness is the gold standard of rational decision-making, this shouldn’t bother you at all.
Which is why happiness isn’t the gold standard.
Consider another example. Suppose that people who report the highest level of life satisfaction given an initial set of characteristics and preferences similar to your current twenty-year-old self are a bunch of greedy old shitheads who assure you that the only way to live is to grab what you can get and screw everyone else. You have every reason to believe that the way such people lived has warped their preferences such that they are satisfied with outcomes that repel and disgust you. So why should you care that they have been able to achieve a high level of preference satisfaction?
One last example. Suppose it were proven that people who suffered slight brain damage and lost 10-20 points in IQ from any level above 110 were highly likely to be happier after the loss in terms of self-reported life satisfaction. Given that your IQ is above 110, do you think this is a good reason to opt for a surgery that will replicate this damage?
In every case of transformational choice, we care about who we will be, not just how happy we will be. And we are right to do so. People want to be happy, but people want a lot of things that are not about happiness at all.
So, even near certainty about objective happiness doesn’t solve the problem of transformational choice. And if objective happiness doesn’t solve the problem, decisive objective knowledge about choices that involve personal transformation is a chimera. The fact of personal change and the nature of our cognitive architecture make plain that we cannot hope ever to have objective evidence that one form of personal transformation is better than another. There is no standpoint from which such an objective measurement could be made, and there is no objective unit of measurement that we are forced to accept.
Neither science nor social science can or will solve our problem.
Patching the Standard Model: Revelation
Paul advances her own patch to the standard model of rationality – revelation. Revelation is the idea that we might choose a transformative experience based not on the expected outcomes but because we want to have the experience. She first describes revelation in the context of the durian:
“So you might revise your choice, changing the outcomes you base your decision to act upon, so that the relevant outcome of choosing durian is not the experience of what it’s like to taste a durian, but the experience of tasting a new fruit…you might decide to try eating durian just for the sake of having had the experience of eating durian (See, I tried it!)”
Paul doesn’t explicitly borrow ideas from reinforcement learning, and she oscillates between revelation based on acquiring knowledge of your unexplored preferences (which would be a reinforcement learning view) and simply wanting to have the experience to understand what it’s like (revelation). The durian example fits either model well, but Paul generally leans toward the revelatory aspect as most important.
“When we choose to have a transformative experience, we choose to discover its intrinsic experiential nature, whether that discovery involves joy, fear, peacefulness, happiness, fulfillment, sadness, anxiety, suffering, or pleasure, or some complex mixture thereof…On the other hand, if we reject revelation, we choose the status quo, affirming our current life and lived experience.”
Just as with reinforcement learning, however, revelation strategies that seem appealing in a low-stakes world look pretty terrible as the stakes go up. People have sometimes enlisted so they could tell their friends “See, I tried it!”, but this is no model of rationality.
In a decision-space with big, irrevocable decisions that play out over many years, trying things just to see what they are like or to have the experience can’t be a viable model. That means reinforcement learning methods won’t apply and it’s hard to see how revelation is much better.
Even if you think that people might rationally make big, one-time, life-changing decisions on the basis of “wanting to know what the experience is like”, there is a deeper problem with Paul’s idea of revelation. Her strategy of casting decisions about transformative choice as a binary doesn’t really work. Treating the choice as binary is critical because we don’t know the quality of the revelation (that’s an outcome) – so we have to compare revelation vs. non-revelation. We have to want the knowledge of the experience, regardless of what the experience is like.
This strategy of casting the decision as a binary (transformation vs. status quo) works with decisions about a durian, but very few transformative choices are binary and even ones that appear to be binary often aren’t. Paul’s model of revelation gives us no way to think about this kind of choice. She consistently structures the decision as one between revelation or status quo:
“When faced with each of life’s transformative choices, you must ask yourself: do I plunge into the unknown jungle of a new self? Or do I stay on the ship?”
And
“This takes us back to the position…where decisions are framed, not in terms of comparisons of the details of different ways of experiencing the world, but in terms of the value and cost of revelation. When you make a transformative decision, what you assess is the value of revelation, that is, you choose between the alternatives of discovering what it is like to have the new preferences and experiences involved in the transformative change, or keeping the life that you know, and you may only be able to make this assessment by relying on very general and abstract facts.”
And this is no oversight. It’s the only way revelation can work.
But keeping the life you have usually isn’t even the question; when it comes to big decisions, staying on the ship often involves transformation too. We cannot stay the same person as we age between 20 and 30 (much less 20 and 60) no matter how consistently we pick the status quo. Staying on the ship isn’t the same thing as staying the same. The voyage to thirty will be revelatory no matter what you do. So even choices that seem binary turn into comparisons of alternative transformations without any possible choice of a status quo. It’s no surprise that for an 18-year-old, nothing could look worse than being thirty and living at home. Our voyage across time demands transformation. And since we have no way to model the value of alternative revelations, we have no way to compare them in the standard model and no way to choose rationally.
At the most meta-level, Paul argues that “If you have had epistemically transformative experiences in the past, you can draw on what you know from such experiences to determine, for you, the subjective value of having epistemically transformative experiences.” If you liked being transformed in the past, you’ll like being transformed in the future.
This isn’t convincing. Not only does it fail to help with cases of choice between transformations, it’s hard to believe that anyone will find a comparison with one personal transformation useful in judging another of a fundamentally different type. Imagine a traumatized soldier, having learned from their time in Iraq that personal transformation decisions are bad, deciding not to get married or have children since the subjective value of personal transformation is clearly quite low for them!
Surely, this is a deep confusion based on an induction from a sample-size of one in a clearly non-comparable case. Such reasoning wraps up almost everything we are taught to consider irrational in one nice little package.
Paul even takes a whack at extending this idea by incorporating Bayesian concepts of prior probability into the mix – an effort that just cloaks a terrible idea in a cloud of abstraction.
“If you have the right sort of evidence about the safety level of your environment (that’s a big if), the place to start when making a transformative choice might involve the generation and assessment of very general overhypotheses, such as the hypothesis that I like transformative experiences, or the hypothesis that I dislike transformative experiences. You might be able to assess these hypotheses by consulting your previous experience of particular transformative experiences (trying radically new activities, going to college, and so on) together with an assessment of the safety and the stability of your local environment to determine which hypothesis is the right one to employ. In this way, you might be able to draw from your previous experience to partly guide your transformative choices on the basis of the desirability of revelation.”
This idea is, to put it mildly, unhelpful. Paul seems to have mislaid her own main point. Since previous transformational experiences may well have changed your preferences, how is past experience with transformation a guide to future transformational experiences? As stock market advisors are fond of saying – past performance is no guarantee of future returns. And since the whole point of transformational choice is that our experience changes who we are and what we prefer, there is no way that prior transformations could be action-guiding for future choices about the value of revelation.
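To see where the strategy breaks, it helps to write it down. Here is a sketch of the overhypothesis idea as a simple Beta-Bernoulli update (the counts are invented), with the objection above noted in the final comment:

```python
# Treat each past transformation as a Bernoulli observation of
# "I like being transformed" and update a Beta prior.

alpha, beta = 1, 1  # uniform prior on p = P(I like transformation)

past_transformations = [True, True, False]  # liked, liked, disliked
for liked in past_transformations:
    if liked:
        alpha += 1
    else:
        beta += 1

p_estimate = alpha / (alpha + beta)  # posterior mean, here 3/5 = 0.6

# The objection, in code terms: each observation was scored by a
# *different* self with different preferences, and the next
# transformation will change the evaluator again. The parameter being
# estimated is not stationary, so the posterior isn't action-guiding.
```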
Paul’s concept of revelation supplements and complements ideas in reinforcement learning when it comes to low-stakes binary choices; it doesn’t help when it comes to high-stakes decisions between transformational choices. In short, Paul has given us a problem but not a solution. Revelation does little to bridge the gap between our standard model of rationality and the problem of transformational choice.
Summary
Cognitive science makes clear that people can and will change how they think, what they value, and who they are. Such change is unavoidable given how our minds work. The fact of cognitive change creates a class of decisions – transformational choices – that are not well handled by the standard model of rational decision-making. That model uses a stable preference set and a fixed set of evaluative tools to assess the likely value of the outcomes from alternative choices to pick the one with the highest expected value to us. Though the model is robust in handling all sorts of ambiguity and complications (uncertainty, ignorance, etc.), transformational choice is more radically problematic; it makes any assessment of the subjective outcomes almost impossible.
In Transformative Experience, L.A. Paul examined a host of mechanisms for folding transformational choice into the standard model of rationality. After all, we’d be disappointed in a model of rationality that was no help in our biggest life decisions. And, in any case, we need a good way to think about these decisions. None of the mechanisms (including Paul’s favorite – revelation) work very well. Revelation is plausible in certain limited cases, but it’s hard to accept when it comes to high-stakes decisions, and it fails to give a plausible account of how a decision-maker can choose between multiple revelatory alternatives.
[1] The cases of the durian, cochlear implants and the language and core idea of epistemic newness are all from Paul’s Transformative Experience.