The Scientific “Debate” Over Free Will

Debates about free will never seem to go out of style, and two recent books addressing free will from a scientific perspective suggest that we are far from achieving any kind of consensus. And while most metaphysical questions don’t carry much practical import, there is a general sense that, when it comes to free will, how you think about it does matter. Issues like responsibility, blame, and punishment may hinge on whether people have free will. And if science shows that no such thing exists, it could have a profound impact on our ethics and social thinking.

Yet there’s good reason to be skeptical when science comes calling in philosophy’s house. Science is science. An understanding of the natural world does not generally underwrite either metaphysics or ethics. And while modern developments in cognitive science may have a lot to say about decision theory and rationality, it’s considerably less likely that any science will have much to tell us about ethics or values.

Materialism and Determinism – Framing the Problem of Free Will

There is a long historical argument between those who believe that everything is just stuff (materialism) and those who believe it isn’t (dualism, idealism, etc.). There is no settling this debate by logic, facts, or science, but these days almost everyone is a materialist. This is more a matter of zeitgeist than anything else. The success of the scientific program does not in any way “prove” materialism, but it does lend it a certain psychological plausibility. No scientific evidence is going to disturb idealism or dualism. That means if you are one of those weird dualists who believe in things like soul or spirit, whatever arguments scientists are making about free will aren’t going to carry much water.

Most of us, though, ARE materialists. We just assume that all there is in the world is stuff (even if it’s weird quantum stuff or forces or probability distributions). We materialists almost universally believe that the mind exists in the brain. That the brain is a physical thing. And that our decisions are driven by physical changes in the brain. Indeed, most of us can’t imagine any other position being remotely plausible. We’ve given up all the rich machinery of Plato and Augustine: the division of the soul, the role of will, the struggle between rationality and passion. That’s a good thing. That machinery is wildly wrong as a description of how people think and decide (something science can tell us), and it’s something we should have recognized all along, because it never fit the facts on the ground very well.

But does materialism lead to determinism, and does determinism undermine ethics? To the first question, the answer is unequivocally yes. This is not to say that materialism leads to certainty or inevitability. Events may still be (in fact, certainly are) highly chaotic and essentially unpredictable. Nor does materialism preclude some elements of absolute randomness – either at the quantum level or in cases where randomness at finer-grained levels can trickle up to the macro world of things like neurons (it might be, for instance, that some kinds of decisions are like presidential elections – so close that tiny perturbations can change outcomes). But neither randomness nor unpredictability suggests that there is any force at work except the material stuff and forces studied by science.

If all of our decisions are made in the physical instantiation of the brain, two things determine that instantiation. The first is the large set of initial factors (everything from genes to the hormonal bath surrounding the fetus) that determine the state of the brain at birth. After that, everything is and must be determined by the interaction of that initial state with the environment in an iterative process (each interaction changes the brain, which changes how the next set of interactions is processed). That interaction is incredibly complex and massively chaotic. But if you are a materialist, then there’s really nothing else that could be involved.
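To make the shape of that claim concrete, here is a minimal sketch of the iterative picture (the names – `resulting_state`, `experiences`, `update` – are illustrative placeholders, not a model of actual neuroscience):

```python
# Toy sketch of the determinist picture: the brain's state at any moment
# is just a function of its initial state plus every interaction since.
# (Purely illustrative; not a claim about how brains are implemented.)
def resulting_state(initial_state, experiences, update):
    state = initial_state              # genes, prenatal environment, etc.
    for experience in experiences:     # each interaction changes the brain...
        state = update(state, experience)  # ...which shapes the next one
    return state
```

Run twice with the same initial state, the same experiences, and the same update rule, this returns the same final state every time. That repeatability, and nothing more, is what deep determinism asserts.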

Given this sort of determinism, many people – including many philosophers – are strongly inclined to say that free will must not exist. There is no point in your existence at which you could have done anything other than what you did, given the exact environmental situations you encountered. And if free will does not exist, then many people – including some philosophers – may conclude that some of our ordinary moral intuitions about responsibility and punishment don’t apply either.

Decisions are First on the Chopping Block

It’s important to realize that taking the determinist standpoint all the way down doesn’t just undermine notions of moral responsibility; it undermines a conception of decision-making as a process wherein we might have done something different. At the most basic physical level, the thing that did occur is the only thing that could have occurred. It’s true that at almost every other level, it looks like there’s a chance that something else might be decided. This is as true internally as externally. But unless the process is truly random, it should in principle be possible to fully simulate the process and predict the eventual decision with 100% accuracy.

As many philosophers who accept materialism have admitted, this is a difficult sell. It runs contrary to our experience of making countless decisions on an everyday basis. Nor is it a matter of our folk intuitions being misled. When we decide to order something from a menu, we are undeniably making a decision. This act requires intellectual energy and, in fact, can be clearly delineated from cases where we do not make a decision (“you order – I’m too tired”). What’s at stake isn’t the description of a decision as an agent choosing some course of action. We do this, we undeniably do it, and nothing in the physical determinist description undermines that at all.

What’s at stake is whether, if a decision can be simulated with 100% accuracy (meaning it couldn’t have come out any other way), it is still a decision. It all comes down to the word “choose”. If by “choose” we mean an agent consciously or unconsciously weighing alternatives, deliberating, and then selecting one, then we do choose. But if by “choose” we mean that no matter how much is known about the system and how it makes choices, the option it selects cannot be predicted, then we never choose anything at all.

But giving up this second type of “choosing” may not be as hard as it seems.

Suppose we build a sophisticated neural processor designed to support autonomous driving. It is given an initial set of connections and then trained on a specific set of circumstances. These include balancing a myriad of complex factors: what objects are in the field of view, the classification of those objects, their current movements, the movements of the car, the surrounding environment, the weather, and so on. The unit is mounted inside a car, and while driving on a rainy night it is simultaneously notified by the vehicle sensors that what’s identified as a dog is running into the road directly ahead and that there is a small child on the opposite sidewalk. The only achievable motion paths that will not cause a huge accident endangering the lives of the passengers are those that hit either the child or the dog. The unit slams on the brakes and hits the dog.
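A minimal sketch of what the unit’s final selection step might look like – entirely hypothetical, with the function names and scoring invented for illustration:

```python
# Hypothetical sketch of the unit's final selection step. Given identical
# sensor inputs and identical trained weights, it returns the same path
# every time: fully deterministic, yet still recognizably a decision.
def select_path(candidate_paths, estimated_harm):
    # estimated_harm() is where the training lives: predicted injuries,
    # passenger risk, object classifications, road conditions, and so on.
    return min(candidate_paths, key=estimated_harm)

# e.g. select_path(["brake, hit dog", "swerve, hit child"], estimated_harm)
# -> "brake, hit dog", deterministically, on every identical run.
```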

Now I assume that we can all agree that the neural unit does not have free will and that it is entirely determined. But do we think a decision was made? And do we think that the decision was the morally correct one? I think nearly everyone would say yes on both counts (and would be right to do so).

In other words, we prefer a description of decisions as an agent choosing a course of action to one on which a decision only counts as such if, at the deepest possible level, the agent might have chosen something other than what it chose.

Of course, we don’t ascribe moral praise or blame to the neural unit (though we do to the decision) but to its creators – the people who designed and trained it. That’s surely right, but what would we say if the neural unit had neither creators nor designers?

That’s really the question, because we materialists just are undesigned, uncreated neural units. And it seems fair to ask why the impossibility of simulation somehow creates moral value. What does anyone gain if the system isn’t predictable?

Suppose that somewhere in our brains there were a randomizer that generates the equivalent of a number between 1 and a million. It’s truly random and cannot be predicted or simulated. Now suppose that every time we make a decision, we generate a strength-of-conviction number and then ask the randomizer for a value. If we are strongly inclined to make a particular decision, we’ll only change the decision if the randomizer produces a very unusual number (say 999,999). But if we are torn 50-50 between two outcomes, we’ll take Outcome 1 if the number generated is 500,000 or less and Outcome 2 if it is greater than 500,000.
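Here is a minimal sketch of that mechanism (the function name, the `secrets` source of randomness, and the menu example are illustrative assumptions, not anything drawn from cognitive science):

```python
import secrets  # stands in for the "truly random, unsimulatable" source

def decide(conviction, preferred, alternative):
    """conviction: how strongly we favor `preferred`, from 0.5 (torn) to ~1.0.
    A toy model of the thought experiment, not of any actual mind."""
    roll = secrets.randbelow(1_000_000) + 1   # random number from 1 to 1,000,000
    # Stick with the preferred option unless the roll lands in the small
    # window whose size reflects our doubt.
    if roll <= conviction * 1_000_000:
        return preferred
    return alternative

decide(0.999999, "order the soup", "order the salad")  # flips ~1 time in a million
decide(0.5, "Outcome 1", "Outcome 2")                  # a coin flip
```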

Now we’ve created a process that cannot be replicated (provided our random number generator can’t be simulated). Most decisions we make will be entirely predictable, but sometimes they won’t be. Here’s my question – why would anyone think this is better than just making a decision based on what we actually believe and value? What do we think we gain by unpredictability? And would a being with a randomizer be moral when a being without one isn’t?

The Idea of Free Will is Incomprehensible

Many people start with a presumption that our ethical judgements are deeply bound up with freedom of will. Yet it’s clear from the neural-unit example that we are willing to ascribe ethical value to decisions even when the decision-maker has no freedom in any sense of the word. Nor is there any evidence that a society must have a concept of free will – or of will at all – to have ethical notions.

What’s more, this notion of freedom is a slippery one. The idea of will was introduced into Western thinking because Christian scholars needed a way to explain why rational agents (humans) so often acted irrationally. They borrowed Plato’s idea of a tripartite soul and co-opted spirit into the role of will. I take it that none of us feel the need to postulate a soul, much less a tripartite one, and since we do not believe that humans were created as preeminently rational beings, we don’t have much to explain when it comes to our continuing irrationality. As I’ve said before, humans are not rational animals. We are animals that must work to be rational.

We do not always do the work.

But tying notions of responsibility to freedom of will was always a tricky endeavor. Theologians had to deal with the obvious objection: if God created us, why isn’t God responsible for the bad (and good) we do? This slippery “will” thing was the attempt at an answer. But it was never very clear what “will” could be or how it could work. Did this non-physical thing just override what the brain was going to do at the critical moment? Did it change the brain? How exactly did our “will” make decisions?

This last question is particularly important. We see that decision-making isn’t random. People’s genetics and experiences do (at least) influence how they behave and who they seem to be. No one can deny this. Yet will cannot be fully shaped by these things or else we’re right back in the determinist boat.

Perhaps our will is influenced by our physical experience but not determined by it? Or perhaps our physical experience impacts our brain in ways that make the will’s job easier or harder? Either way, our decisions will be underdetermined. But if we like this sort of explanation, then it’s going to be very hard to sustain the role of free will in responsibility that we wanted. If someone’s bad genetics or experiences made them 70% likely to act badly, are they still blamable if they do? What if that percentage is 90%? Or 99%? Or 99.9%? Where does responsibility tail off, and how can we possibly guess at what those numbers look like for any given person? And, of course, this works both ways. If someone had a very good upbringing or genes, perhaps there is a 99% chance that they will act well – in which case they surely don’t deserve much praise!

A notion of freedom that suggests that experience and genetics underdetermine decisions puts us in a strange landscape indeed – one where we judge people not by who they are or how they act, but by our interpretation of how well they act versus their genetic and experiential inheritance. Oddly, in this world, the more help, good parenting and good experience we give someone, the harder it is for them to be judged “good”.

Assuming a radical freedom of the will is even worse. If our will isn’t influenced or limited by experience, then we have literally no influence on each other at an ethical level. Whatever kindness you offer, whatever assistance you provide, whatever parenting you give – it can’t make any moral difference to anyone else. The books you read can’t change who you are. Teachers can’t make you a better person. Role models can’t help you. Good culture is meaningless. Because if any of these things influence the will, we are right back in the probabilistic or deterministic world.

Radical free will upends nearly every ethical idea we have. It makes solipsists of us all, mocking our shared ethical life and value to each other.

In short, it’s quite hard to see how will, if it existed, could be free. And if it were free, it would be disastrous to many notions that are foundational not just to ethics but to society.

This Doesn’t Mean We Can’t Have Values and Make Decisions

So, is the materialist doomed to a world devoid of value and judgement? Not at all. Countless cultures exist that don’t posit some uncaused decision engine pulling the moral strings, yet somehow manage to have both ethical values and judgements of responsibility. Nor should we think that deep determinism undermines our working ethical notions. The status of deep determinism says nothing about what we value. If we value kindness, then we can continue to praise people who exhibit kindness and blame people who don’t. Nor, in making that assessment, need we make any claim about the degree to which they are responsible for those characteristics. In fact, as ought to be obvious from the discussion of probabilities above, our day-to-day ascriptions of praise or blame are typically assessments of current state, not of the experiential and genetic path to that state.

It’s true that some very specific notions of morality require an uninfluenced decider as part of their attribution of responsibility. That was never a good thing, and those theories aren’t persuasive on other grounds. Modern virtue theories are an excellent example of a far more compelling way to think about morality. Virtue theories provide an ethical framework that doesn’t require us to defy materialism, posit some uninfluenced decider, or deny the importance of education and experience. In fact, virtue theories have always embraced the role of education and experience in shaping character, so they fit quite naturally alongside modern cognitive science.

The type of freedom most of our foundational notions of responsibility require is an absence of constraint, not an indeterminate free will. And nothing about deep determinism forces us to abandon the idea of freedom as an absence of constraint.

Indeed, cognitive science, by making clear that we are adaptive learners (who we are changes with experience), has opened a pathway for us to choose not just what we do but who we are and what we value. That’s a kind of freedom worth having even in a fully determined world. Like every other choice, these decisions are deterministic at the deepest level. And, like every other choice, that says nothing about the effort we should make to think about them, the influence for good we may have on others, and the value of thinking more clearly about the choices we have.

Determinism or no, you still get exactly what you put into a decision. The more work you put into it, the better it will be.

Why Science has Nothing to Say about Notions of Praise/Blame/Responsibility

Nothing of interest follows from a scientific determination that we do not have free will. In fact, it’s hard to understand what a scientific description of free will could look like. Nor do notions of praise, blame, or responsibility rest on an abstract conception of undetermined choice. If we want to say that a moral claim can only be made if such freedom underlies it, then all we are really saying is that no moral claims can be made. Yet this presumes far too much. Good moral systems exist that are perfectly compatible with materialism and the truth of deep determinism. Indeed, almost all of our existing folk notions of praise, blame and responsibility are compatible with deep determinism.

A scientist is free (insofar as any of us are “free”) to argue that because people are shaped by their environment, we should change our justice system to reform rather than punish offenders. But this is not a scientific conclusion; it is an ethical conclusion for which there is no scientific justification (because no such justification is possible). We might just as easily argue that because people are shaped by their environment and experience, we should punish them (Plato argued that a man should welcome punishment for misdeeds). Or we may argue that we should punish people because their acts are bad. And that, too, is a perfectly reasonable ethical argument that need not be supported by some superstructure of uncaused or underdetermined action.

It’s easy to confuse notions of constraint, notions of uncaused freedom, and deep determinism. But they are fundamentally different things. Deep determinism does not replace or eliminate the distinction between constrained and unconstrained decisions. When someone puts a gun to your head, you lack freedom to choose. But this lack is utterly different from whatever it is you lack under deep determinism, and ethical theories that recognize constraints as validly excusing need make no similar concessions to deep determinism.

In short, science isn’t going to solve problems about punishment and responsibility, because those problems require judgements of value, not fact, and the questions of value remain regardless of the status of free will. Cognitive science may eventually help us understand the most effective strategies of reinforcement learning to change a criminal’s behavior. But science cannot tell us whether we should use those strategies. We may value a society in which people are punished. We may even value the personal deterrent that punishment provides. We are, in fact, free to have any opinion about punishment we choose.

