Transformational experiences and self-altering decisions exist because of the way we think. Our minds change. They change in the shallow sense of changing what we remember, the facts we know or the beliefs we hold…but they also change in a much deeper sense. Experience changes, in fundamental ways, how we think and who we are. To understand why this is and must be so requires understanding the basics of modern cognitive and neural science.
Brains are a miracle. Not just human brains. Human brains do amazing things of course. But one of the lessons of modern computer science is how hard the everyday tasks that most animal brains can accomplish truly are. Computers that can process billions of math problems in the time it takes us to add two numbers struggle with problems around perception, categorization, and movement. The same computer that solves math problems that would take a mathematician years, or analyzes millions of tax returns in a day, may grind to a halt trying to move a machine-controlled lathe in a complex pattern or drive a car on a busy street. Problems we think simple are often quite hard – it’s just that the architecture of our brain is optimized for solving them. On the other hand, many of the problems we find hard or even impossible are solvable by very efficient brute force calculation. Computers run at much faster clock-speeds than our brains. That means that any problem amenable to a brute force solution is one that computers will excel at. The human brain is exceptional for its architecture not its speed.
The problems we find hard or easy turn out to be important clues to how our brains work. But those clues didn’t mean much until computer and cognitive scientists began to unravel the architecture of the brain and translate that architecture into digital form for experimentation. The way people (and other animals) think turns out to be completely unexpected and unintuitive. No amount of reflection about how we think could ever have stumbled on the truth – nor was any pre-scientific model remotely close to the truth. But while neuroscience is fascinating and worth deep study, you don’t need to be a neuroscientist or AI researcher to understand the essential architecture of the brain and how that architecture impacts decision-making.
While the basic facts of human cognition are completely unexpected, nothing they tell us about how people learn or change is unavailable to a careful observer of human behavior. This is the not-so-unusual case where the most egregious mistakes about how we think involved conforming observation to theory. The cleverer the theorist, the more mistaken their ideas of cognition tended to be. But it isn’t simply a matter of people falling in love with theory – the unusual aspect of cognition is how opaque and deceptive it is from the INSIDE. Unlike many areas where observation and theory part company, the main driver of misperceptions about cognition isn’t observation of other people, but introspection by the thinker. We have no access at all to the mechanisms of our own thought, and attempts to introspect on those mechanisms have proved universally misleading.
Understanding how brains work may illuminate important challenges in human decision-making; however, like any scientific discipline, neuroscience can be easily misunderstood or misused. There are no neuroscientific imperatives that prove one way of thinking is better than another, nor could there ever be such imperatives. Science will not give us answers about what we should decide or what kind of life is best, but it can provide clues about what’s involved in cognitive change. Keep in mind, too, that neuroscience is not a fully mature discipline. There is a huge amount about human cognition – especially higher functions – that is poorly understood or unexplained by current theory. Because of that immaturity, if neuroscience findings conflicted with everyday behavioral observation, it would be genuine cause for both doubt and concern about the science in a way that isn’t true, for example, of quantum mechanics.
The truth, though, is that the implications of the neuroscientific view of the brain make perfect sense to an objective viewer of human behavior and, far from contradicting everyday observations of actual decision-making, cast new and revealing light on human styles of thinking and decision-making that were very hard to understand with any traditional folk model of thought. Those folk models tended toward idealized theories that glossed over the shortcomings in observed thinking behaviors and so rarely explained the non-obvious. Cognitive scientists, on the other hand, have created a model that sheds real light on how we think.
This matters. The errors in models of thought prior to modern neuroscience encouraged numerous misconceptions about decision-making including a comprehensive failure to recognize the existence of transformative experience and self-altering decisions. The basic architecture of human cognition leads directly to the problem of transformational experience. It does more. It illuminates many of the key limitations we face in making self-altering decisions and helps explain why some pathways to self-change are more successful than others.
The architecture of thought gives us insight into how we learn, what drives cognitive development and change, what constitutes identity, what grounds we have for trusting (and doubting) our thinking processes, and why certain aspects of transformational choice are easy and others almost impossible. For a preference optimizer, none of these things matter very much. Preference optimization decisions don’t involve personal change, needn’t be concerned about the origin or justification of values, and – except in a purely practical sense – do not have to account for challenges in thinking or communication. But the possibility of cognitive change driven by transformative experience makes a profound difference. A decision-maker thinking about transformational choice had better understand how minds change or their decisions aren’t likely to be very effective.
Cognition
Starting at the most basic and obvious level, we think with our brains. It’s too much to say that we are our brains – since we are our bodies too and the brain is a part of, and embedded within, a body. But that the brain and its attendant sensory apparatus is the engine, repository and substance of consciousness, thought and decision-making is non-controversial.
Most of us today tend to think of the brain as like a (very) personal computer. There is a CPU that does the thinking, a bunch of RAM for short-term memory, an SSD (that turns into a thumb drive as we age) for long-term memory, an Operating System that manages all the input/output from 5 sensory devices and a nice suite of programs for natural language processing, math, logical reasoning, and a bunch of other stuff.
While this view may not be as naïve or misleading as traditional folk accounts of the mind, it’s fundamentally wrong. Though a digital computer can mimic the workings of our minds (a fast enough digital computer can mimic any process, cognitive or otherwise), our minds do not work like digital computers. Our minds are not engineered in any traditional sense of the word. There is no clear and distinct separation of systems. Components are not purpose built to support specific functions. There is no algorithmic code, carefully crafted by a team of programmers, to handle specific tasks. Brains are not digital processors at any level.
It’s all fundamentally different.
Not all of this matters at the level we usually think at. Wetware is nothing like hardware, but that doesn’t really matter to a decision-maker. It’s like the difference between using Word on a Mac or a PC. The underlying hardware is very different but using the program is almost identical. On the other hand, differences in the way the systems work to solve problems can be profound and important.
The computer software you use is written in programming languages that resemble a kind of Pidgin English. These languages are formal and precise and typically combine a simple vocabulary with a fixed grammar and a fair amount of mathematical notation. The programming language C# has 79 keywords. A program like Microsoft Excel might have 10-20 million lines of programming code (think of them as sentences) written in this Pidgin vocabulary of C#. The code for programs like Microsoft Excel doesn’t run directly on the computer. It’s compiled by another program into a lower-level language which is then translated into a set of processor-specific instructions. These really, really low-level instructions are what drive actual changes in the contents of the various hardware components of the computer that “do” things.
There are roughly 1,500 of these low-level instructions in the Intel processor instruction set. It’s a tiny vocabulary and most of it is highly specific and limited. Digital computers achieve an immense amount working on top of a very simple architecture. They do it by making that simple architecture insanely fast.
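You can see this layering concretely in Python, which exposes its own compiled form. The bytecode below is not Intel machine code, but the relationship is analogous: a readable line of source compiles down to a handful of instructions drawn from a small, fixed vocabulary. (A sketch for illustration; the exact instruction count varies by Python version.)

```python
import dis

def add(a, b):
    return a + b

# Even a one-line function compiles into several low-level
# instructions drawn from a small, fixed vocabulary.
dis.dis(add)

# The bytecode instruction set is tiny compared with the source language.
print(len(dis.opmap), "distinct bytecode instructions")
```

The source language offers near-unlimited expressiveness; everything it says is ultimately spelled out in this much smaller instruction vocabulary.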
Almost all computer programs in everyday use (until the recent advent of programs like ChatGPT and MidJourney) are “algorithmic” in nature. They are designed by programmers to do specific things in a specific order and the code, once written and published, is never changed. Programmers often write the initial version of a program in a kind of pseudo-English designed to capture the logical structure of the process. Here, for example, is a pseudo-code description of a spell checker. In plain English: check every word in the document against a list of correct spellings and flag the ones that aren’t in the list:
Pseudo-Code
Start at Top of Document
Do This
    Read the next Word in the Document
    If there are no more Words
        Finished, so Exit this block of code
    Look up the Word in the Dictionary
    If the Word is Present in the Dictionary
        Go back to Do This
    If the Word is Not Present
        Add a redline under the Word
    Go back to Do This and repeat
These aren’t commands a computer could execute, but it’s relatively easy for a programmer to translate this into a real programming language. Put 10-20 million of these lines of code together and you have a program like Word that can support almost any conceivable word-processing task.
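As a rough illustration of that translation step, here is the same logic as a short, runnable Python sketch. The five-word dictionary is a toy stand-in; a real spell checker would load a full word list:

```python
def spell_check(document, dictionary):
    """Return the words in `document` that are not in `dictionary` --
    the ones a word processor would underline in red."""
    flagged = []
    for word in document.split():          # "read the next word"
        if word.lower() not in dictionary:  # "lookup word in dictionary"
            flagged.append(word)            # "add a redline"
    return flagged

dictionary = {"the", "cat", "sat", "on", "mat"}
print(spell_check("The cat saat on the mat", dictionary))  # flags "saat"
```

Note how directly the pseudo-code maps onto real code: every line of the plan becomes a line or two of the program, executed in a fixed order that never changes.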
So if it takes 10-20 million lines of code to create Word or Excel, how many lines of code are in our brains?
None.
Because our brains don’t work this way at all. There is no CPU in the brain. No compiler. No tiny little instruction set and no static algorithmic programs. Evolution doesn’t write code.
Instead, the architecture of the brain is built around something called a neural network or, more generically, a connection system. The biological structures of the brain – neurons and synapses – form a complex and inter-connected array of connection systems.[1]
Connection systems are one of those artifacts that seem, in their function, to be almost magical. It’s hard to believe that they can do ANYTHING at all. It’s much easier for us to understand how a huge, complex program with hundreds of thousands of lines of programming code can do something useful (like provide word-processing functionality). Intuitively, the way most computer programs work makes sense to us. It feels like you could “think” that way. A non-programmer might not understand the exact code behind a program like Word, but if someone walked them through the code by function, they could understand it just as the pseudo-code for doing a spell-check above should make sense.
Connection systems aren’t like that. Connection systems don’t use commands or sentences or logic or algorithmic processes. Even more incredible, connection systems aren’t designed. To build a complex computer program, a team of programmers must architect an extraordinarily complex structure and then craft every single line of code inside that structure. It’s a huge and precise undertaking and – as you’ve probably noticed – prone to error. The more code you have, the harder it is to change. Changing one thing often breaks something else. These problems in a huge codebase would be significant drawbacks for an organism that needs to adapt to complex and changing environments. When you write a program, it must be right from the get-go, it doesn’t learn, and it doesn’t change itself.
It’s unlikely that a random process like natural selection could ever produce good working code. But even if it was possible, there wouldn’t be much point. Because algorithmic code can’t efficiently solve most problems fundamental to living: things like moving a body in complex and precise ways, processing detailed visual images of the surroundings to isolate key components (like threats), or talking in a spoken language. No one has EVER been able to write algorithmic code that was effective at any of these tasks, though they have been able to build connection systems that were quite good at each.
A connection system replaces design with experience. Connection networks start as a very basic and somewhat generic structure. They have a series of interconnected layers. Each layer contains a bunch of nodes (or neurons). Each node in a layer is connected to all or many of the nodes in the next layer.
When a neural network gets a signal (the first layer is given a set of values), each neuron reacts to that signal (at first rather randomly) and passes the result to all the other neurons it’s connected to. However, the connection between nodes can be strong or weak. A lot of the signal may be passed on or only a little. This weighting of the connections between nodes is the part that does the real work.
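A minimal sketch of that structure, in plain Python, might look like this. The layer sizes, the random starting weights, and the squashing function are all arbitrary choices for illustration; the point is only that a signal flows through weighted connections between layers of nodes:

```python
import random

random.seed(0)

def make_layer(n_inputs, n_nodes):
    # Each node holds one weight per incoming connection. Untrained
    # weights start out random, so the output is meaningless at first.
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_nodes)]

def forward(layer, signal):
    # Each node sums its weighted inputs and passes the result on.
    # The max() clamp keeps the passed-on signal non-negative.
    return [max(0.0, sum(w * x for w, x in zip(node, signal)))
            for node in layer]

# Three layers: 4 input values -> 3 hidden nodes -> 1 output node.
hidden = make_layer(4, 3)
output = make_layer(3, 1)

signal = [0.5, 0.1, 0.9, 0.3]   # the values given to the first layer
print(forward(output, forward(hidden, signal)))
```

There are no commands or rules anywhere in this structure: just numbers (the weights) deciding how much of each signal gets passed along.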
In their initial state, the weights in a connection system aren’t set to any meaningful values, so when it starts, the output from a connection system is essentially random. Nobody has to pre-tune anything. No conscious design or configuration is necessary. The first time you drop an input into a connection system, it’s like dropping a coin into one of those cascading boxes – it could end up anywhere. Unlike a computer program, which can do its stuff as soon as it’s written and will never get either better or worse, a connection system isn’t useful until it’s been trained. Training means giving the connection system lots and lots of data and – this is critical – correct answers. Each item of data given to a connection system must come with the right answer attached.
To build a connection network that can decide if a photograph contains an image of a dog, the network is trained by giving it tens of thousands of pictures some of which contain a dog and some of which don’t. Along with each picture is a label that gives the right answer: dog or no dog.
When an untrained connection system gets the first photograph, it’s no better at finding dogs than it is at finding umbrellas. It will give an answer, but it’s like a Twitter answer – random and meaningless. Then the magic kicks in. If the network guesses wrong (which it mostly will since it’s just guessing), it has a process for going back over all the elements in the network and tweaking their connection weights so that they would be more likely, given that particular input, to produce the right answer next time. This “tweaking” algorithm is the only “hard-coded” part of a connection system.
Everything else is learned.
If you keep giving the connection system pictures, that continuous tweaking eventually makes it pretty good at identifying pictures that have a dog. If the architecture of the system is robust and you give it a LOT of data, it can become as good or better at identifying which pictures have a dog as we are. What’s happened in the training process is that the connections in the network have created a system that is tuned to whatever variations in the original input tend to be significant for getting the right answer.
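The tweaking step can be sketched with the simplest possible “network”: a single node whose weights are nudged after every wrong answer. The two-number “photos” and their dog/no-dog labels below are invented toy data, and real systems use far richer inputs and subtler update rules, but the shape of the training loop is the same:

```python
import random

random.seed(1)

# Toy "photos": invented feature numbers with a dog (1) / no-dog (0) label.
examples = [([1.0, 0.2], 1), ([0.1, 0.9], 0),
            ([0.9, 0.3], 1), ([0.2, 0.8], 0)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
rate = 0.1  # how hard each wrong answer tweaks the weights

def guess(w, features):
    return 1 if sum(wi * x for wi, x in zip(w, features)) > 0 else 0

# Training: show each labeled example many times; after every wrong
# guess, nudge each weight so that this input is more likely to come
# out right next time. This update rule is the only "hard-coded" part.
for _ in range(100):
    for features, label in examples:
        error = label - guess(weights, features)
        weights = [w + rate * error * x
                   for w, x in zip(weights, features)]

print([guess(weights, f) for f, _ in examples])  # guesses after training
```

After enough passes the weights settle into values that classify all four toy examples correctly, and nobody ever told the node what a “dog” looks like; everything except the update rule was learned from the data.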
What’s amazing about this is that no one tells the system how to recognize a dog. There are no rules (look for four legs or a muzzle or fur) and no explanations. Nothing but data (images in this case) and answers (a label that says “dog” or “no dog”). Not only does no one tell the system how to recognize a dog, when the system is trained, no one really knows how it works.
This network of weighted connections allows a connection system to model incredibly complicated systems without doing any kind of math, statistics, or algorithmic processing. That’s how connection systems work and believe it or not, that’s how you think. That’s why you (and your brain) can solve immensely complex problems in practical physics (like how to move toward the proper landing spot of a baseball in flight) even when you can’t do a lick of math. Your subconscious isn’t any better at math than you are. There isn’t a little Newton in the basement of every baseball player’s brain. Your brain isn’t doing math when it models problems in practical physics, it’s using a well-trained connection system.
Algorithmic processes are designed. Connection systems are trained. Our brains are connection systems because creating a learning system via evolution is easier, more powerful, and more flexible than hard-coding a set of capabilities.
If the way connection systems work seems obscure or impossible, don’t fret. If you aren’t already a student of neural networks and machine learning, there’s zero chance that based on this explanation you can picture how they work. I’ve used neural networks extensively for data science tasks. I’ve read countless explanations of how they work. They are all (as I’m sure mine is above) as clear as mud. Connection systems don’t work like anything else and the fact that a system of nodes and connections can be used to solve complex problems is, to say the least, surprising.
As a decision-maker, you don’t really need to understand that much about how connection systems function (and there’s a lot more to the story even about computer connection systems much less our brains), but it is important to understand some of the fundamental properties of a connection system.
And the first and most fundamental property of any connection system is that when it comes to what it does and how well it does it, the training it has is everything. The exact same connection system structure can be used to find dogs in a picture or cats in a picture or identify letters and numbers. But for it to do ANY of these things, it must be trained. And how well it will do these things is more a function of the training it has had than the unique properties of the system itself.
Experiences, not algorithms, are the stuff that shapes our mind.
Connection Systems Create Transformative Experience
Your brain is an elaborate set of connection systems that must learn how to do almost everything. Human babies are about as unprogrammed and helpless as it’s possible to be. A baby must learn how to use its body. It must learn how to use its sensory devices. It must learn how to recognize people and objects and sound. Out of the box, human babies can do very little and know almost nothing.
The really important point, though, is that when connection systems learn they don’t take in facts and store them on a wet drive. For a connection system to “learn”, its internal structure must change. A “trained” neural network is fundamentally different than its untrained version, and the same is true of your brain. It changes as you learn. That’s not the way computer programs work.
Imagine two computers, both running Excel. In one, you type the number 10 into the first cell, the number 20 into the second cell, and the formula “=A1+A2” into the third cell. The third cell will now show the value of 30. In the second machine, you type 25 into the first cell and 25 into the second cell and the formula “=A1+A2” into the third cell. It will show the value 50 in the third cell.
Nothing you’ve done has ever (or will ever) change the way Excel or the computer works. All you’ve done is enter data. Each spreadsheet displays three different numbers, and those numbers are saved in the computer’s memory. For traditional computer programs, there is an absolute divide between code and data, between algorithm and input.
With a connection system, though, each input changes the system, causing the connections between neurons to strengthen or weaken. The elements of the connection system are slightly different each time it gets new data, because the whole point of a connection system is that it adjusts its elements to give better answers based on the data it has been given.
Go back to Excel and change the value of the first cell on the original computer to 25. Now do the same to the second cell. The third cell in the first computer will now be identical to the third cell in the second computer – both will show a value of 50.
Now suppose you start with two untrained and structurally identical connection systems. You give the first system two pictures of German Shepherds, and the second a picture of a Pekinese and a Labradoodle. Both are learning to recognize “dogs,” but they’ve learned about somewhat different kinds of dogs. Now give the second system the same two German Shepherd pictures you gave the first. When we did this kind of thing in Excel, the final result was identical – both versions of Excel ended up showing 50. Here, the second connection system remains quite different from the first even after seeing the two German Shepherd pictures. Why? Because the second system had already learned something about recognizing dogs from its first two pictures. It adds the German Shepherd pictures to what it already knows, but it ends up in a different state than the first system, which has only ever seen German Shepherds.
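The contrast can be sketched directly. In the toy code below, `excel_sum` stands in for the algorithmic side: its internals never change, no matter what you feed it. The one-weight “learner” stands in for a connection system: every labeled example shifts its internal state, so two learners with different histories end up different even after seeing the same final inputs. (The numbers and the update rule are invented purely for illustration.)

```python
def excel_sum(a, b):
    # Algorithmic: same inputs, same output, forever. Nothing inside
    # this function changes when you use it.
    return a + b

class TinyLearner:
    # A one-weight stand-in for a connection system: every labeled
    # example nudges its internal state.
    def __init__(self):
        self.weight = 0.0
    def learn(self, x, answer):
        self.weight += 0.1 * (answer - self.weight * x) * x

first, second = TinyLearner(), TinyLearner()
first.learn(1.0, 1.0)        # "German Shepherd" examples only
first.learn(1.0, 1.0)
second.learn(0.5, 1.0)       # "Pekinese" and "Labradoodle" first...
second.learn(0.8, 1.0)
second.learn(1.0, 1.0)       # ...then the same two Shepherd examples
second.learn(1.0, 1.0)

print(excel_sum(25, 25), excel_sum(25, 25))  # always identical: 50 50
print(first.weight, second.weight)           # the learners diverge
```

Feed `excel_sum` the same inputs a thousand times and you get the same answer a thousand times. Feed the learners overlapping data and their states never reconverge, because each one carries its whole history inside itself.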
Remember, a connection system is given an input and a correct answer. Each time this happens, it adjusts the way each element in the connection system responds so that the next time it gets that input it is more likely to give the right answer. When it comes to the brain, we call the process of getting data “experience” and the tuning of the connection systems in our brain, “learning.”
For connection systems, whether in our brain or on a computer, data and thinking are not separate. Data is still data. But now the data is changing the internals of the thinking system. Experience changes our brain. Experience must change our brain.
Start with two identical but untrained connection systems. Train them with different data. Then give them a new, identical piece of data (like a picture of a dog). They will produce different outputs even though they started out as identical and the only thing that’s different is the data they were given. Go back to identical, untrained systems and give them identical training data but arranged in a different order. They will still produce different outputs because connection systems are sensitive to the order of learning. Take a data point that has been previously processed by the network and give it to the SAME network again after more training and it will respond to it differently. Change the classification of that data point (change the “fact” about that data – from “dog” to “no dog” for example) and the connection system will adjust, but it will not end up in the same state it would have reached had it never been given the original (subsequently corrected) fact.
None of these statements are true for an algorithmic process. None of them even make sense for an algorithmic process. All of them are true for a connection system.
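The order-sensitivity claim in particular is easy to demonstrate with a one-weight toy learner (again, the data and the update rule are invented for illustration):

```python
def train(data, rate=0.1):
    # One-weight learner: each (input, answer) pair nudges the weight.
    w = 0.0
    for x, answer in data:
        w += rate * (answer - w * x) * x
    return w

data = [(1.0, 1.0), (0.5, 0.0), (0.8, 1.0)]

w_forward = train(data)                   # examples in one order
w_reversed = train(list(reversed(data)))  # same examples, reversed

# Same training data, different order, different final state.
print(w_forward, w_reversed)
```

Both runs see exactly the same three examples, yet they finish with slightly different weights – something no algorithmic process would do.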
This change in response to experience is called, in the brain, neural plasticity. It’s a term that gets a lot of play in popular media and it’s an easy concept to misunderstand because it comes in two forms. If you’ve paid any attention to the big news in neuroscience in the past 10-15 years you’ve probably heard about the ability of the brain to change itself. It turns out that at least in some cases, the brain has some ability to re-purpose existing structures to new functions. This kind of plasticity (I’ll call it structural plasticity) was quite unexpected by neuroscientists.
Structural plasticity is fascinating and it may be medically important. However, it has almost nothing to do with the kind of plasticity that is involved in learning and the normal operation of the brain. What’s more, there are many structures of the brain that are not re-purposable and there are surprising ways in which brain structure can limit cognitive function.
People learn language seamlessly and easily at a young age. But if an infant is unexposed to language for too long, the parts of the brain designed to manage language are used elsewhere and are no longer available for that function. A child deprived of exposure to language at the appropriate age will never develop native language speaking capabilities. Similarly, there are connection systems in the brain that get wired to process the input from each eye. Cover one eye of a young child with a patch for an extended period, and the entire network gets wired to the uncovered eye. Once this has happened, the child will never use the patched eye even when the patch is removed.
In some cases, structural plasticity decreases with age. For sufferers of severe epilepsy, a corpus callosotomy severs the link between the brain hemispheres. For younger patients, the brain will often develop alternative communication structures between the two hemispheres in the months following the surgery. Older patients aren’t so lucky.
Structural plasticity is cool but when it comes to normal life, it isn’t important. A creature unable to take advantage of structural plasticity would be almost indistinguishable from us. Most of the time, we don’t need to fundamentally re-wire the structures of the brain to change our behavior or learn something. Unless our brain is damaged, we just need to use the connection systems we already have.
At the everyday level of learning and experience, plasticity is simply a fact because connectionist systems are inherently plastic. They work – and they can only work – by changing their state based on experience. Without this deep plasticity, we couldn’t learn a new word or a new text abbreviation (LOL). We couldn’t learn to play the piano. We couldn’t learn a new computer program. We couldn’t learn how to operate a new TV remote or play a new video game. We couldn’t learn how to drive to a new location. At the most basic level, we are adaptive learners – learning machines whose one great talent is the ability to change.

Why does it matter? Cognitive science tells us that we have deep plasticity in almost every aspect of our thought. All experience is transformative. Big, transformative experiences will fundamentally re-wire our brain. And re-wiring the brain changes who we are at a basic level. Not just memory. Experience can change our attitudes, beliefs, dispositions, values and even the way we think.

Paul’s theory of transformative experience suggests that in cases of large-scale transformation, our standard theory of rationality doesn’t hold because we have no way to assess the value of any outcomes after we’ve been changed. Cognitive science can’t prove that theory. But it does prove that transformative experiences will and must happen. The foundational problem of decision-making beyond preference optimization – the need to select between experiences that will change who we are – is a direct result of our cognitive architecture.
[1] It is ironic that although computer scientists modeled software neural networks on their understanding of brain function, it turns out that individual neurons in the brain are considerably more complicated than single nodes in a software connection system. You need a moderately complex connection system to model even a single neuron in the brain.