Trying to Disambiguate Different Questions about Whether Humans are Turing Machines
I often hear the sentiment that humans are Turing machines, and that this sets humans apart from other pieces of matter.
I’ve always found those statements a bit strange and confusing, so it seems worth it to tease apart what they could mean.
The question “is a human a Turing machine” is probably meant to convey “can a human mind execute arbitrary programs?”, that is, “are the languages the human brain emits at least recursively enumerable?”, as opposed to e.g. merely context-free languages.
My first reaction is that humans are definitely not Turing machines, because we lack the infinite amount of memory a Turing machine has in the form of an (idealized) tape. Indeed, in the Chomsky hierarchy humans aren’t even at the level of pushdown automata; instead we are nothing more than finite-state automata. (I remember a professor pointing out to us that all physical instantiations of computers are merely finite-state automata.)
Depending on one’s interpretation of quantum mechanics, one might instead argue that we’re at least nondeterministic finite automata or even Markov chains. However, every nondeterministic finite automaton can be transformed into a deterministic finite automaton, albeit at an exponential increase in the number of states, and Markov chains aren’t more computationally powerful (e.g. they can’t recognize Dyck languages, just as DFAs can’t).
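To make the NFA-to-DFA claim concrete, here is a minimal sketch of the standard subset construction in Python (the example automaton is mine, not from the discussion): each DFA state is a frozenset of NFA states, so in the worst case the DFA has 2^n states for an n-state NFA.

```python
from collections import deque

def nfa_to_dfa(alphabet, delta, start, accepting):
    """Subset construction. `delta` maps (state, symbol) -> set of states."""
    start_set = frozenset([start])
    dfa_delta = {}
    dfa_accept = set()
    queue = deque([start_set])
    seen = {start_set}
    while queue:
        current = queue.popleft()
        if current & accepting:
            dfa_accept.add(current)
        for sym in alphabet:
            # Union of all NFA moves from the states in `current`
            nxt = frozenset(t for q in current for t in delta.get((q, sym), set()))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen, dfa_delta, start_set, dfa_accept

# Example NFA over {0,1}: accepts strings whose second-to-last symbol is 1
delta = {
    ('a', '0'): {'a'}, ('a', '1'): {'a', 'b'},
    ('b', '0'): {'c'}, ('b', '1'): {'c'},
}
states, dd, s0, acc = nfa_to_dfa({'0', '1'}, delta, 'a', {'c'})
```

Running the resulting deterministic table on a word is then a single pass with no guessing, which is exactly the sense in which determinization trades states for simplicity.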
It might be that quantum finite automata are of interest here, but I don’t know enough about quantum physics to make a judgment call.
The above argument only applies if we regard humans as closed systems with clearly defined inputs and outputs. When probed, many proponents of the statement “humans are Turing machines” indeed fall back to a motte that in principle a human could execute every algorithm, given enough time, pen and paper.
This seems true to me, assuming that the matter in the universe does not have a limited amount of computation it can perform.
In a finite universe we are logically isolated from almost all computable strings, which seems pretty relevant.
Another constraint is from computational complexity; should we treat things that are not polynomial-time computable as basically unknowable? Humans certainly can’t solve NP-complete problems efficiently.
I’m not sure this is a very useful notion.
On the one hand, I’d argue that, by orchestrating the exactly right circumstances, a tulip could receive specific stimuli to grow in the right directions, knock the correct things over, lift other things up with its roots, create offspring that perform subcomputations &c to execute arbitrary programs. Conway’s Game of Life certainly manages to! One might object that this is set up for the tulip to succeed, but we also put the human in a room with unlimited pens and papers.
On the other hand, those circumstances would have to be very exact, much more so than with humans. But that again is a difference in degree, not in kind.
In the end, I come down to the following conclusion: humans are certainly not Turing machines; however, there might be a (much weaker) notion of generality that humans fulfill and other physical systems don’t (or don’t as much). But this notion of generality is purported to be stronger than the one of life:
Turing-completeness ⇒ Context-sensitive ⇒ Context-free? ⇒ Proposed-generality? ⇒ Life? ⇒ Finite-state automata
I don’t know of any formulation of such a criterion of generality, but would be interested in seeing it fleshed out.
The claim that humans are at least TMs is quite different from the claim that humans are at most TMs. Only the second is computationalism.
Yes, I was interested in the first statement, and not thinking about the second statement.
It’s not a topic I’ve heard debated for quite some time, but I generally saw it going the other direction. Not “humans are a general turing-complete processing system”, that’s clearly false, and kind of irrelevant. But rather “humans are fully implementable on a turing machine”, or “humans are logically identical to a specific tape on a turing machine”.
This was really just another way of asserting that consciousness is computation.
Critical rationalists often argue that this (or something very related) is true. I was not talking about whether humans are fully implementable on a Turing machine; that seems true to me, but it was not the question I was interested in.
Can you point to one or two of the claims that the human brain is a general-purpose turing machine that can run any program? I don’t think I’d seen that, and it seems trivially disprovable by a single example. Most humans cannot perform even fairly simple arithmetic in their heads, let alone computations that would require a longer (but still finite) tape.
Of course, using tools, humans can construct turing-machine-equivalent mechanisms of large (but not infinite) size, but that seems like a much weaker claim than humans BEING such machines.
I’m on a Twitter Lent at the moment, but I remember this thread. There’s also a short section in an interview with David Deutsch:

So all hardware limitations on us boil down to speed and memory capacity. And both of those can be augmented to the level of any other entity that is in the universe. Because if somebody builds a computer that can think faster than the brain, then we can use that very computer or that very technology to make our thinking go just as fast as that. So that’s the hardware.

[…]

So if we take the hardware, we know that our brains are Turing-complete bits of hardware, and therefore can exhibit the functionality of running any computable program and function.

and:

So the more memory and time you give it, the more closely it could simulate the whole universe. But it couldn’t ever simulate the whole universe or anything near the whole universe because it is hard for it to simulate itself. Also, the sheer size of the universe is large.
I think this happens when people encounter Deutsch’s claim that humans are universal explainers, and then misgeneralize the claim to Turing machines.
So the more interesting question is: Is there a computational class somewhere between FSAs and PDAs that is able to, given enough “resources”, execute arbitrary programs? What physical systems do these correspond to?
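One candidate for such an intermediate class (my example, not from the thread): one-counter automata, which sit strictly between finite automata and pushdown automata. A minimal sketch in Python, recognizing the Dyck language of balanced parentheses, which no finite automaton can recognize because it would need unboundedly many states to track the nesting depth:

```python
def balanced(word):
    """A one-counter automaton for Dyck-1: a single unbounded counter
    stands in for the PDA's stack. Strictly weaker than a PDA (it cannot
    handle two bracket types), strictly stronger than any DFA."""
    counter = 0
    for ch in word:
        if ch == '(':
            counter += 1
        elif ch == ')':
            counter -= 1
            if counter < 0:  # unmatched closing parenthesis
                return False
        else:
            return False  # symbol outside the alphabet
    return counter == 0
```

The counter is the only unbounded resource, so this machine illustrates how a class can exceed finite-state power while falling short of a full stack.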
Related: Are there cognitive realms? (Tsvi Benson-Tilsen, 2022)
That seems an odd motte-and-bailey style explanation (and likely belief; as you say, misgeneralized).
I will agree that humans can execute TINY arbitrary Turing calculations, and slightly less tiny (but still very small) ones with some external storage. And quite a bit larger ones with external storage and computation. At what point the brain stops doing the computation is perhaps an important crux in that claim, as is whether the ability to emulate a Turing machine at the conscious/intentional layer is the same as being Turing-complete in the meatware substrate.
And the bailey of “if we can expand storage and speed up computation, then it would be truly general” is kind of tautological, and kind of unjustified without figuring out HOW to expand storage and computation while remaining human.
From my side or theirs?
From their side. Your explanation and arguments against that seem reasonable to me.
Generalized chess is EXPTIME-complete, and while an exact solution of chess may be unavailable, we are pretty good at constructing chess engines.