Do simulacra dream of digital sheep?

This is the third in a sequence of posts scrutinizing computational functionalism (CF). In a previous post, I defined a concrete claim that computational functionalists tend to make:

Theoretical CF: A simulation of a human brain on a computer, with physics perfectly simulated down to the atomic level, would cause the conscious experience of that brain.

I contrasted this with “practical CF”, the claim that a suitably low-fidelity simulation of a brain, like one that only captures functional properties, would be conscious. In the last post, I discussed practical CF. In this post, I’ll scrutinize theoretical CF.

To evaluate theoretical CF, I’m going to meet functionalists where they (usually) stand and adopt a materialist position about consciousness. That is to say that I’ll assume all details of a human’s conscious experience are ultimately encoded in the physics of their brain.

Two ways to live in a simulation

First of all, I want to pry apart two distinct meanings of “living in a simulation” that are sometimes conflated.

  1. Living in the matrix: Your brain exists in base reality, but you are hooked up to a bunch of sophisticated virtual reality hardware, such that all of the sensory signals entering your brain create a simulated world for you to live in. Consciousness lives in base reality.

  2. Living in Tron: Your brain is fully virtual. Not only are your surroundings simulated but so are all the details of your brain. Consciousness lives in the simulation.

Many intuitions about the feasibility of living in a simulation come from the matrix scenario. I’ve often heard arguments like “Look at the progress with VR—it won’t be long until we also have inputs for tactile sensations, taste, etc. There is no technological barrier stopping us from being hooked up to such hardware and living in a totally simulated world”. I agree: it seems very plausible that we could live in that kind of simulation quite soon.

But this is different to the Tron scenario, which requires consciousness to be instantiated within the simulation. This is a more metaphysically contentious claim. Let’s avoid using arguments for the matrix scenario in support of the Tron scenario. Only the Tron scenario pertains to theoretical CF.

What connects simulation and target?

At its heart, theoretical CF is a claim about a metaphysical similarity between two superficially different physical processes: a human brain, and a computer simulating that brain. To find the essence of this claim, we have to understand what these two systems really have in common.

An intuitive desideratum for such a common property is that it is an intrinsic property of the two systems. One should be able to, in principle, study both systems in isolation to find this common property. So let’s try and work out what this property is. This will be easiest if I flesh out a concrete example scenario.

A concrete setup for simulating your brain

I’m going to scan your brain in the state it is right now as you’re reading this. The scan measures a quantum amplitude for every possible strength of each standard model quantum field at each point in your brain, with a resolution at ~the electroweak scale. This scan is going to serve as an initial state for my simulation.

The simulation will be run on my top-secret cluster hidden in my basement. Compute governance has not caught up with me yet. The cluster consists of a large number of GPUs, hooked up to two compute nodes (CPUs), and some memory storage.

I input the readings from your brain as a big data structure in JSON format. On the first compute node I have an executable called physics.exe, compiled from a program I wrote in C. The program takes in the initial conditions and simulates the quantum fields forward in time on the GPUs. The state of the quantum fields at a series of later times is stored in memory.

I also have interpret.exe, for unpacking the computed quantum field information into something I can interpret, on the second compute node. This takes in the simulated quantum field data and shows me a video on my screen of the visual experience you are having.
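To make the setup concrete, here is a toy sketch of the pipeline in Python. Every name in it (the stub functions, the JSON file) is a stand-in I’ve invented for illustration; the real physics.exe and interpret.exe would of course be unimaginably more complicated:

```python
# A minimal sketch of the simulation pipeline, not a real implementation.
# step_quantum_fields, decode_visual_frame and brain_scan.json are
# hypothetical stand-ins for the components described in the text.
import json

def step_quantum_fields(state: dict) -> dict:
    """Stand-in for one timestep of physics.exe's field dynamics."""
    return state  # placeholder: the real update rule is the hard part

def decode_visual_frame(state: dict) -> bytes:
    """Stand-in for interpret.exe: decode field data into an image frame."""
    return b""  # placeholder frame

def simulate(initial_state: dict, n_steps: int) -> list[dict]:
    """Stand-in for physics.exe: evolve the scanned state forward in time."""
    states = [initial_state]
    for _ in range(n_steps):
        states.append(step_quantum_fields(states[-1]))  # stored in memory
    return states

if __name__ == "__main__":
    with open("brain_scan.json") as f:  # the scan serves as the initial state
        initial = json.load(f)
    trajectory = simulate(initial, n_steps=1000)            # physics.exe's job
    frames = [decode_visual_frame(s) for s in trajectory]   # interpret.exe's job
```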

Let’s carefully specify the two physical processes we’re comparing. The first is your brain; that’s easy enough. The second should be wherever “the simulation” is. Since the dynamics of the quantum fields are being simulated by the GPUs, we can take the second physical process to be the operations on those GPUs. We want to find an intrinsic property that these two systems have in common.

In what sense am I simulating your brain?

What connects your brain and my cluster? A natural answer is that the operations of the cluster represent the physical process of your brain. The cluster represents the brain in the sense that its operations produce data which, when fed into interpret.exe, gets turned into a video on my screen showing the visual experience you are having.

But the representative nature of the GPU operations is contingent on context. One piece of context is how the output will be used. The operations represent that process only insofar as interpret.exe is configured to process the simulation’s output in a certain way. What if interpret.exe were configured to take in quantum field information in a different format? Or what if I straight-up lost interpret.exe with no backup? Would the operations still represent that physical process?

If our property of “representation” is contingent on interpret.exe, then it is not an intrinsic property of the GPU operations, in which case it’s not a good candidate for the shared property. It would be quite unintuitive if the experience created by the cluster were contingent on the details of some other bit of software that could be implemented arbitrarily far away in space and time.

To find the intrinsic common property, we need to strip away all the context that might colour how we make sense of the operations of the GPUs. To do this, we need an impartial third party who can study the GPU operations for us.

Is simulation an intrinsic property?

An alien from a technologically and philosophically advanced civilization comes to town. They have a deep understanding of the laws of physics and the properties of computation, completely understand consciousness, and have brought with them an array of infinitely precise measuring tools.

But the alien is totally ignorant of humans and the technology we’ve built. They have no idea how our computers work. They don’t know the conventions that computers are built upon, like encoding schemes (floating-point, ASCII, endianness, …), protocols (IP, DNS, HTTP), file formats (jpeg, pdf, mp3, …), compression algorithms and hash functions (zip, SHA-256, …), device drivers, graphics protocols (OpenGL, RGB, …) and all the other countless arbitrarily defined abstractions.

The alien’s task

Let’s give this alien access to our GPUs, ask them to study the operations executed by them, and ask what, if any, experience is being created by these operations. If we believe the experience to be truly intrinsic to these operations, we shouldn’t need to explain any of our conventions to them. And we shouldn’t need to give the alien access to the compute nodes, interpret.exe, the monitors, or the tools we used to measure your brain in the first place.

Now let’s imagine we live in a world where theoretical CF is true. The alien knows this, and knows that to deduce the conscious experience created by the GPU operations, they must first deduce exactly what the GPUs are simulating. The big question is:

Could an alien deduce what the GPUs are simulating?

The alien cracks open the GPUs to study what’s going on inside. The first breakthrough is realising that the information processing is happening at the level of transistor charges. They measure the ‘logical state’ at each timestep as a binary vector, one component per transistor, 1 for charge and 0 for no charge.

Now the alien must work out what this raw data represents. Without knowledge of our conventions for encoding data, they would need to guess which of the countless possible mappings between physical states and computational abstractions correspond to meaningful operations. For instance, are these transistors storing numbers in floating-point or integer format? Is the data big-endian or little-endian?
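To get a feel for how underdetermined this is, here is a tiny Python illustration with a made-up four-byte readout. The same bits decode to wildly different values depending on which of our conventions you happen to assume:

```python
# The same physical bit pattern "means" very different things under different
# (entirely conventional) decoding schemes. The byte values are made up.
import struct

raw = bytes([0x42, 0x28, 0x00, 0x00])  # one possible readout of 32 transistors

print(struct.unpack(">f", raw)[0])  # big-endian float32     -> 42.0
print(struct.unpack("<f", raw)[0])  # little-endian float32  -> ~1.44e-41 (a denormal)
print(struct.unpack(">i", raw)[0])  # big-endian int32       -> 1109917696
print(struct.unpack("<i", raw)[0])  # little-endian int32    -> 10306
```

Nothing intrinsic to the charges picks out one of these readings over the others.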

Then there are higher-level abstractions that come with yet more conventions, like the format of the simulated quantum fields. There are also purely physical conventions (rather than CS conventions): frames of reference, the sign of the electron charge, gauge choices, renormalization schemes, metric signatures, choices of units.

One possibility could be to look for conventions that lead to simulated worlds obeying some sensible constraints, like logical consistency or following the laws of physics. But the problem is that there could be many equally valid interpretations based on different conventions. The alien doesn’t know they’re looking for a simulation of a brain, so they could end up deciding the simulation is of a weather system or a model of galactic collisions instead.

Considering all the layers of convention and interpretation between the physics of a processor and the process it represents, it seems unlikely to me that the alien would be able to describe the simulacrum. The alien is therefore unable to specify the experience being created by the cluster.

Beyond transistors: the true arbitrariness of computation

The situation might be worse than the story above. I was being generous when I imagined that the alien could work out that the action is in the transistors. Stepping back, it’s not obvious that the alien could make such an inference.

Firstly, the alien does not know in advance that this is a computer. They could instead think it’s something natural rather than designed. Secondly, the categories of computer, biology, inanimate objects etc. may not feature in the alien’s ontology. Thirdly, if the alien does work out the thing is a computer, computers on the alien’s planet could be very different.

All of these uncertainties mean the alien may instead choose to study the distribution of heat across the chips, the emitted electromagnetic fields, or any other mad combination of physical properties. In this case, the alien could end up with a completely different interpretation of computation than what we intended.

This gets to the heart of a common line of argument against CF: computation is arbitrary. There is a cluster of thought experiments that viscerally capture this issue. Three I’ve come across are Searle’s Wall, Putnam’s Rock, and Johnson’s Popcorn. They share a common thread, which I’ll explain.

Searle’s wall, Putnam’s rock, Johnson’s popcorn

Searle famously claimed that he could interpret the wall behind him as implementing any program he could dream of, including a simulation of a brain. Combining this with theoretical CF, it sounds like the wall is having every possible (human) conscious experience.

How can Searle claim that the wall is implementing any program he wants? With the knowledge of the physical state of the wall and the computational state he wants to create, he can always define a map between physical states and computational states such that the wall represents that program. Brian Tomasik gives a stylized version of how this works:

Consider a Turing machine that uses only three non-blank tape squares. We can represent its operation with five numbers: the values of each of the three non-blank tape squares, the machine’s internal state, and an index for the position of the head. Any physical process from which we can map onto the appropriate Turing-machine states will implement the Turing machine, according to a weak notion of what “implement” means.

In particular, suppose we consider 5 gas molecules that move around over time. We consider three time slices, corresponding to three configurations of the Turing machine. At each time slice, we define the meaning of each molecule being at its specific location. For instance, if molecule #3 is at position (2.402347, 4.12384, 0.283001) in space, this “means” that the third square of the Turing machine says “0″. And likewise for all other molecule positions at each time. The following picture illustrates, with yellow lines defining the mapping from a particular physical state to its “meaning” in terms of a Turing-machine variable.

(Tomasik 2015)

Given some set of Turing machine states (like, say, a simulation of your brain), Searle can always choose a gerrymandered map like the one above that sends the wall states to the computation.
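Here’s a toy Python version of this move. The “wall” microstates and the Turing-machine configurations are both made up; the point is only that a lookup table connecting them can always be written down after the fact:

```python
# Searle's move, in miniature: given ANY recorded sequence of physical
# microstates and ANY desired sequence of computational states, we can build a
# mapping that "interprets" the former as the latter. All values are made up.
wall_states = [
    (0.913, 0.204, 0.551),  # microstate of the wall at t=0
    (0.118, 0.667, 0.032),  # t=1
    (0.499, 0.871, 0.305),  # t=2
]

desired_computation = [
    {"tape": [1, 0, 1], "head": 0, "state": "A"},     # Turing-machine config at t=0
    {"tape": [1, 1, 1], "head": 1, "state": "B"},     # t=1
    {"tape": [0, 1, 1], "head": 2, "state": "HALT"},  # t=2
]

# The gerrymandered physics->computation map: a pure lookup table constructed
# after the fact, with no constraint tying it to the wall's actual dynamics.
interpretation = dict(zip(wall_states, desired_computation))

for t, s in enumerate(wall_states):
    print(f"t={t}: wall state {s} 'implements' {interpretation[s]}")
```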

If computation is this arbitrary, we have the flexibility to interpret any physical system, be it a wall, a rock, or a bag of popcorn, as implementing any program. And any program means any experience. All objects are experiencing everything everywhere all at once.

This is mental. To fix computational functionalism, a number of authors have put forward ways of constraining the allowed maps between physics and computation, such that only reasonable assignments are allowed. I’ve written about a couple of them in the appendix, along with why I’m not convinced by them. I think this is an unsolved problem. See Percy 2024 for the most up-to-date treatment of this issue.

This whole argument from arbitrariness hinges on my assumption that consciousness is an intrinsic property of a thing. Computational functionalists have the option of biting the bullet and accepting that consciousness is not intrinsic, but rather a property of our description of that system. Could that make sense?

Is phenomenal consciousness a natural kind?

A philosopher will ask me, what do I mean by reality? Am I talking about the physical world of nature, am I talking about a spiritual world, or what? And to that I have a very simple answer. When I talk about the material world, that is actually a philosophical concept. So in the same way, if I say that reality is spiritual, that’s also a philosophical concept. Reality itself is not a concept, reality is: <whacks bell and it slowly rings out> and we won’t give it a name. (Alan Watts)

The current underlying my argument has been:

  • Premise 1: Computation is not a natural kind: it is an abstraction, a concept, a map. It is fuzzy and/​or observer-dependent, down to interpretation. There is no objective fact-of-the-matter whether or not a physical system is doing a certain computation.

  • Premise 2: Phenomenal consciousness is a natural kind: There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system. It is the territory rather than a map.

  • Conclusion: Consciousness cannot be computation.

So far in this post I have argued for Premise 1. I like Premise 1 and I think it’s true. But what about Premise 2? I also agree with Premise 2, but I understand that this is a philosophically contentious claim (for example illusionists or eliminative materialists will disagree with it). I consider Premise 2 my biggest crux for CF. Below I’ll explain why I think Premise 2 is true.

Why I think consciousness is a natural kind

You’re having an experience right now. You’re probably having a visual experience of seeing some text on a screen. The presence and quality of this experience are there for you to see.

Imagine two philosophers, René and Daniel, approach you and ask if they can test their competing Phenomenal Experience Detectors™ on you. René places some electrodes on your head and hooks them up to his laptop. The data is analysed, and a description of your current experience is printed on the screen. Then Daniel folds out his own setup: a handy travel fMRI hooked up to a different laptop containing different software.

René and Daniel are both computational functionalists, so their setups both interpret the readings from your brain as the execution of certain computations. But René’s and Daniel’s maps from brain states to computational states are different. This means they come up with different predictions of the experience you’re having.

Could both of them be right? No—from your point of view, at least one of them must be wrong. There is one correct answer, the experience you are having.

But maybe you’re mistaken about your own experience? Maybe you have enough uncertainty about what you’re experiencing that both René and Daniel’s predictions are consistent with the truth. But phenomenal consciousness, by definition, is not something you can be confused about. Any confusion or fuzziness is part of the experience, not an obstruction to it. There is no appearance/​reality distinction for phenomenal consciousness.

You could be dreaming or tripping or in the matrix or whatever, so you could be wrong on the level of interpreting your experience. But phenomenal consciousness is not semantic content. It is the pre-theoretical, pre-analysed, raw experience. Take a look at this image.

Does this represent a rabbit or a duck? The answer to this question is up to interpretation. But are you having a raw experience of looking at this image? The answer to this question is not up to interpretation in the same way. You can’t be wrong about the claim “you are having a visual experience”.

While this is a confusing question, all things considered, I lean towards consciousness being an objective property of the world. And since computation is not an objective property of the world, consciousness cannot be computation.

Conclusion

I think theoretical CF, the claim that a perfect atom-level simulation of a brain would reproduce that brain’s consciousness, is sus.

Theoretical CF requires an intrinsic common property between a brain and a computer simulating that brain. But their only connection is that the computer is representing the brain, and representation is not intrinsic. An alien could not deduce the conscious experience of the computer. If consciousness is an intrinsic property, a natural kind, then it can’t be computation.

In the next post, I’ll address computational functionalism more generally, and scrutinize the most common arguments in favor of it.

Appendix: Constraining what counts as a computation

There have been a number of attempts to define a constraint on maps from physical to computational states, in order to make computation objective. I’ll discuss a couple of them here, to illustrate that this is quite a hard problem that (in my opinion) has not yet been solved.

Counterfactuals

When Searle builds the gerrymandered physics->computation map under which his wall is executing consciousness.exe, the wall is only guaranteed to correctly execute a single execution path of consciousness.exe. consciousness.exe contains a bunch of if statements (I assume), so there are many other possible paths through the program, many lines of code that Searle’s execution didn’t touch.

Typically when we say that something ran a program, implicit in that statement is the belief that if the inputs had been different, the implementation would have correctly executed a different branch of the program.

Searle’s wall does not have this property. Since consciousness.exe requires inputs, Searle would have to define some physical process in the wall as the input. Say he defines the inputs to be encoded in the pattern of air molecules hitting a certain section of the wall at the start of the execution. He defines the physics->computation map such that the pattern of molecules that actually hit the wall represents the input required to make the execution a legitimate run of consciousness.exe. But what if a different pattern of air molecules happens to hit the wall, representing different inputs?

For the wall to be truly implementing consciousness.exe, the wall must run a different execution path of consciousness.exe triggered by the different input. But because the gerrymandered abstraction was so highly tuned to the previous run, the new motion of molecules in the wall would be mapped to the execution of a nonsense program, not consciousness.exe.

This is the spirit of one of David Chalmers’ attempts to save functionalism. For a thing to implement a conscious program, it’s not enough for it to merely transit through a sequence of states matching a particular run of the program. Instead, the system must possess a causal structure that reliably mirrors the full state-transition structure of the program, including transitions that may not occur in a specific run.

This constraint implies that counterfactuals have an effect on conscious experience. Throughout your life, your conscious experience is generated by a particular run through the program that is your mind. There are inevitably some chunks of the mind program that your brain never executes, say, how you would respond to aliens invading or Yann LeCun becoming safety-pilled. Chalmers is saying that the details of those unexecuted chunks have an effect on your conscious experience. Counterfactual branches affect the presence and nature of consciousness.

This new conception of conscious computation comes with a new set of problems (see for example “counterfactuals cannot count”). Here is a little thought experiment that makes it hard for me to go along with this fix. Imagine an experiment including two robots: Alice-bot running program p and Bob-bot running program p’. Alice-bot and Bob-bot are put in identical environments: identical rooms in which identical events happen, such that they receive identical sensory inputs throughout their lifetime.

p’ is a modification of p. To make p’, we determine exactly which execution pathway of p Alice-bot is going to execute given her sensory input. From this we can determine which sections of p will never be executed, and we delete all of the lines of code in those branches. p’ is a pruned version of p that only contains the code Alice-bot actually executes. This means that throughout the robots’ lifetimes, they take in identical inputs and execute identical operations. The only difference between them is that Alice-bot has a different program from Bob-bot loaded into memory.
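Here’s a toy Python sketch of the setup, with trivial stand-ins for the mind programs p and p’ (the real ones would presumably be somewhat larger):

```python
# Toy versions of the Alice-bot / Bob-bot programs. On the inputs the robots
# actually receive, they execute identical operations; they differ only in the
# unexecuted branch that p carries around in memory.

def p(sensory_input: str) -> str:
    """Alice-bot's program: contains a branch that is never taken."""
    if sensory_input == "aliens invading":
        return "panic"            # counterfactual branch: never executed
    return "carry on as normal"   # the branch executed on every actual input

def p_pruned(sensory_input: str) -> str:
    """Bob-bot's program p': the never-taken branch has been deleted."""
    return "carry on as normal"

# Identical environments -> identical inputs -> identical executed operations.
lifetime_inputs = ["reading a blog post", "drinking tea", "reading a blog post"]
assert [p(x) for x in lifetime_inputs] == [p_pruned(x) for x in lifetime_inputs]
```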

Imagine if p is a conscious program while p’ is not. The counterfactual lines of code we deleted to make p’ were required for consciousness. Alice-bot is conscious and Bob-bot is not. But the only physical difference between Alice-bot and Bob-bot is that Alice-bot has some extra lines of code sitting in her memory, so those extra lines of code in her memory must be the thing that is giving her consciousness. Weird, right?!

Simplicity & naturalness

Another suggestion from Tomasik for constraining the allowed maps from Marr’s level 3 (physical implementation) to level 2 (algorithm) is to only allow suitably simple or natural maps.

More contorted or data-heavy mapping schemes should have lower weight. For instance, I assume that personal computers typically map from voltage levels to 0s and 1s uniformly in every location. A mapping that gerrymanders the 0 or 1 interpretation of each voltage level individually sneaks the complexity of the algorithm into the interpretation and should be penalized accordingly.

What measure of complexity should we use? There are many possibilities, including raw intuition. Kolmogorov complexity is another common and flexible option. Maybe the complexity of the mapping from physical states to algorithms should be the length of the shortest program in some description language that, when given a complete serialised bitstring description of the physical system, outputs a corresponding serialised description of the algorithmic system, for each time step.

(Tomasik 2015)

Tomasik is imagining that we’re building a “consciousness classifier” that takes in a physical system and outputs what, if any, conscious experience it is having. The first part of the classifier takes in the physical description and outputs a computation. The second part translates that computational description into a description of the conscious experience. He is saying that the physics->computation part of our consciousness classifier should be a simple program.
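Here is a rough Python sketch of what the first stage might look like, using compressed length as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable). The candidate maps are invented purely for illustration:

```python
# Sketch of stage 1 of the "consciousness classifier": score candidate
# physics->computation maps and penalize the contorted ones. Compressed length
# is a rough proxy for description complexity; all maps here are made up.
import zlib

def mapping_complexity(map_description: str) -> int:
    """Proxy for the complexity of a physics->computation map."""
    return len(zlib.compress(map_description.encode()))

# A uniform rule applied identically to every transistor...
uniform_map = "bit = 1 if voltage > 0.7V else 0, applied identically to every transistor"

# ...versus a gerrymandered map with a separate hand-tuned rule per transistor.
gerrymandered_map = "; ".join(
    f"transistor {i}: 1 if voltage > {0.1 + 0.8 * (i % 7) / 7:.2f}V else 0"
    for i in range(1000)
)

print(mapping_complexity(uniform_map))        # small
print(mapping_complexity(gerrymandered_map))  # much larger, so penalized
# Stage 2 (not sketched here) would translate the chosen computation into a
# description of the experience, if any.
```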

But would this constraint allow human minds to exist? Consider what we learned in my previous post about how the human brain implements a mind. The mind seems to be governed by much more than what the neuron doctrine says: ATP waves, mitochondria, glial cells, etc. The simplest map from physics to computation would ignore mitochondria; it would look more like the neuron doctrine. But neuron spiking alone, as we’ve established, probably wouldn’t actually capture the human mind in full detail. This constraint would classify you as either having a less rich experience than you’re actually having, or no experience at all.

Even if the counterfactual or naturalness constraints make sense, it remains pretty unclear whether they can constrain the allowed abstractions enough to shrink the number of possible experiences of a thing down to the one true experience the thing is actually having.