Two flavors of computational functionalism
This is intended to be the first in a sequence of posts where I scrutinize the claims of computational functionalism (CF). I used to subscribe to it, but after more reading, I’m pretty confused about whether or not it’s true. All things considered, I would tentatively bet that computational functionalism is wrong. Wrong in the same way Newtonian mechanics is wrong: a very useful framework for making sense of consciousness, but not the end of the story.
Roughly speaking, CF claims that computation is the essence of phenomenal consciousness. A thing is conscious iff it is implementing a particular kind of program, and its experience is fully encoded in that program. A famous corollary of CF is substrate independence: since many different substrates (e.g. a computer or a brain) can run the same program, different substrates can create the same conscious experience.
CF is quite abstract, but we can cash it out into concrete claims about the world. I’ve noticed two distinct flavors[1] of functionalism-y beliefs that are useful to disentangle. Here are two exemplar claims corresponding to the two flavors:
Theoretical CF: A simulation of a human brain on a computer, with physics perfectly simulated down to the atomic level, would cause the conscious experience of that brain.
Practical CF: A simulation of a human brain, capturing its dynamics at some coarse-grained level of abstraction, running at the same speed as base reality[2] on a classical computer small and light enough to fit on the surface of the Earth, would cause the conscious experience of that brain.
In this sequence, I’ll address these two claims individually, and then use the insights from these discussions to assess the more abstract overarching belief of CF.
How are these different?
A perfect atomic-level brain simulation is too expensive to run on a classical computer on Earth at the same speed as real life (even in principle).
The human brain contains ~10^26 atoms. The complexity of exactly simulating an N-body quantum system on a classical computer is O(2^N).[3] Such a simulation would cost 2^(10^26) operations per timestep. Conservatively assume the simulation needs a temporal precision of 1 second; then we need 2^(10^26) FLOPS. A single timestep requires more operations than there are atoms in the observable universe (~10^80), so even a classical computer the size of the observable universe, devoting one operation per atom per second, would still be too slow.
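To make the scale concrete, here’s a minimal back-of-the-envelope sketch in Python using the figures quoted above (it works in log space, since 2^(10^26) overflows any numeric type; the variable names are mine):

```python
import math

N = 1e26                     # ~atoms in a human brain (figure from above)
UNIVERSE_ATOMS_LOG10 = 80    # ~10^80 atoms in the observable universe

# log10 of the 2^N operations needed per timestep
ops_per_step_log10 = N * math.log10(2)             # ~3.0e25

# A universe-sized computer doing 1 op per atom per second would take
# 10^(ops_per_step_log10 - 80) seconds to compute a single 1-second timestep.
shortfall_log10 = ops_per_step_log10 - UNIVERSE_ATOMS_LOG10

print(f"log10(ops per timestep) ~ {ops_per_step_log10:.2e}")
print(f"seconds per timestep on a universe-sized computer ~ 10^{shortfall_log10:.2e}")
```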
Putting in-principle possibility aside, an atom-level simulation may be astronomically more expensive than what is needed for many useful outputs. Predicting behavior or reproducing cognitive capabilities likely can be achieved with a much more coarse-grained description of the brain, so agents who simulate for these reasons will run simulations relevant to practical CF rather than theoretical CF.
Practical CF is more relevant to what we care about
In my view, there are three main questions for which CF is a crux: AI consciousness, mind uploading, and the simulation hypothesis. I think these questions mostly hinge on practical CF rather than theoretical CF. So when it comes to action-guiding, I’m more interested in the validity of practical CF than theoretical CF.
AI consciousness: For near-future AI systems to be conscious, it must be possible for consciousness to be created by programs simple enough to run on classical Earth-bound clusters. If practical CF is true, we know consciousness can be created by such simple programs, so the similarly simple programs of AI systems might also create consciousness.
If theoretical CF is true, that doesn’t tell us whether near-future AI consciousness is possible. AI systems (probably) won’t include simulations of biophysics any time soon, so theoretical CF does not apply to them.
Mind uploading: We hope one day to make a suitably precise scan of your brain and use it as the initial conditions of a simulation of your brain at some coarse-grained level of abstraction. If we hope for that uploaded mind to create a conscious experience, we need practical CF to be true.
If we only know theoretical CF to be true, then a program might need to simulate biophysics to recreate your consciousness. This would make it impractical to create a conscious mind upload on Earth.
The simulation hypothesis: Advanced civilizations might run simulations that include human brains. The fidelity of the simulation depends on both the available compute and what they want to learn. They might have access to enough compute to run atom-level simulations.
But would they have the incentive to include atoms? If they’re interested in high-level takeaways like human behavior, sociology, or culture, they probably don’t need atoms. They’ll run the coarsest-grained simulation possible while still capturing the dynamics they’re interested in.
Practical CF is closer to the spirit of functionalism
The original vision of functionalism was that there exists some useful level of abstraction of the mind, below behavior but above biology, that explains consciousness. Practical CF requires such a level of abstraction, so it stays close to that vision. Theoretical CF is a departure from it, since it concedes that consciousness requires the dynamics of biology to be present (in a sense).
The arguments in favor of CF are mostly in support of practical CF. For example, Chalmers’ fading qualia thought experiment only works in a practical CF setting. When replacing the neurons with silicon chips, theoretical CF alone would mean that each chip has to simulate all of the molecules in its neuron, which would be intractable if we hope to fit the chip in the brain.[4]
CF is often supported by observing AI progress: we are increasingly able to recreate the functions of the human mind on computers, so maybe we will be able to recreate consciousness on digital computers too. This is an argument that realistic classical computers will be able to instantiate consciousness, i.e. the practical CF claim. To say something about theoretical CF, we’d instead need to appeal to progress in techniques for running efficient simulations of many-body quantum systems or quantum fields.
CF is also sometimes supported by the success of the computational view of cognition. It has proven useful to model the brain as hardware running the software of the mind, encoded in e.g. neuron spiking. On this view, the mind is a program simple enough to be encoded in neuron spiking (possibly plus some extra details, e.g. glial cells). Such a suitably simple abstraction of the brain could then run on a computer to create consciousness—the practical CF claim.
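As a toy illustration of what a spiking-level abstraction looks like, here’s a minimal leaky integrate-and-fire neuron in Python (a standard textbook model with illustrative parameters, not a claim about real neurons). Practical CF bets that some model at roughly this level of description, scaled up, suffices to create consciousness:

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: keeps spike timing while
    discarding all molecular detail below that abstraction."""
    v, spike_times = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leak toward rest, integrate input
        if v >= v_thresh:             # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant drive above threshold produces a regular spike train.
print(simulate_lif([60.0] * 1000))
```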
So on the whole, I’m more interested in scrutinizing practical CF than theoretical CF. In the next post, I’ll do exactly that.
[1] These flavors really fall on a spectrum: one can imagine claims in between the two (e.g. a “somewhat practical CF”).
[2] I.e. each second of simulated time is computed in at most one second of base reality.
[3] There could be a number of ways around this. We could use quantum Monte Carlo or density functional theory instead, both with complexity O(N^3), meaning a simulation would need ~10^78 operations per timestep, once again requiring a computer roughly the size of the observable universe. We could also use quantum computers, reducing the complexity to possibly O(N), but this would be a departure from the practical CF claim. Such a simulation on Earth with quantum computers looks possible in principle at a glance, but there could easily be engineering roadblocks that make it impossible in practice.
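A quick sanity check of how these scalings compare, using the N = 10^26 figure from the main text (illustrative only):

```python
import math

N = 1e26  # atoms in the brain, from the main text

print(N * math.log10(2))   # O(2^N): log10(ops/timestep) ~ 3.0e25
print(3 * math.log10(N))   # O(N^3): log10(ops/timestep) = 78
print(math.log10(N))       # O(N), quantum: log10(ops/timestep) = 26
```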
[4] Could the chips instead interface with, say, a Dyson sphere? The speed of light would get in the way there: it would take minutes to send and receive messages, while the details of neuron firing matter on timescales well below a second.
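The latency arithmetic, assuming the off-brain compute sits at roughly 1 AU (e.g. on a Dyson sphere around the Sun):

```python
AU_M = 1.496e11          # Earth-Sun distance in meters
C = 3.0e8                # speed of light in m/s

one_way_s = AU_M / C     # ~500 s, about 8 minutes each way
print(one_way_s, 2 * one_way_s)   # round trip ~17 minutes, vs ~1 ms spike timing
```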