The Conscious River: Conscious Turing machines negate materialism
Many computer scientists and mathematically inclined people subscribe to the idea that consciousness is a phenomenon that emerges from matter. They also believe that any Turing machine can generate consciousness if provided with the correct program. This seems obvious due to the universality of Turing machines. If a Turing machine can simulate a human brain and consciousness arises from the matter in the brain, then the Turing machine would be conscious as well.
In this post, I propose a thought experiment that starts with the assumption that Turing machines can generate consciousness. I will show that from such a Turing machine, it is possible to build canals and pipes that constrain a river so that it performs the same algorithm, but violates our common-sense understanding of what it means to be conscious. From this, I infer that either consciousness is an illusion and nothing is truly conscious (materialism is wrong), Turing machines cannot be conscious, or common-sense properties of consciousness are not real.
While I don’t think that what is written here is anything particularly new for illusionists or materialists who do not believe that Turing machines are automatically conscious if provided with the right program, I think that this article may help computationalists understand materialist and illusionist positions.
Definitions
Consciousness: The collection of experiences that appear moment by moment in the minds of living beings, distinct from unconscious processes.
Illusionism: The view that phenomenal consciousness is an illusion; that is, living beings merely have the impression of being conscious, but they are not. (It’s not that Turing machines are conscious, but rather that humans are not conscious either.)
Materialism: The view that phenomenal consciousness emerges from matter.
Computationalism: The view that phenomenal consciousness emerges from certain kinds of computation and thus can arise from Turing machines, regardless of the material substrate.
Philosophical Zombie: A human-like entity that behaves exactly like a human but is not conscious.
If you already believe a river can run a program and don’t care about seeing how that may be the case, skip to the analysis section and carry on from there.
Building a Conscious River
The first part of this article will focus on something that should be immediately obvious to computer scientists but may be unknown to those who do not work directly in the field. I will discuss how a program can be translated into a program that never modifies the same memory cell twice and never executes the same instruction twice. The reason for doing so is to show that if there exists a program that is conscious when executed, independent of the substrate, then it will be possible to run such a program using water flowing down a mountain, making the river conscious.
Lemma 1: Given any non-self-modifying program and a maximal number of operations to be executed, the program can be rewritten into a program that never executes the same operation twice.
Sketch of the proof: Unroll every loop. If a loop is not bounded, unroll it up to the maximal allowed number of operations. Inline the code that follows an if statement into both its true and false branches. Inline every call into its caller. If there are recursive calls, inline them a number of times equal to the maximal number of operations. In other words, write out the tree of all possible execution paths of the program.
This cannot be done in the general case when there is self-modifying code or when the program never terminates, but we do not care about those two cases. Machines can be Turing complete even without self-modifying code, so there is no loss of generality here. The maximal number of instructions to be executed is not an issue either since we can simply pick a number so large that it will never be reached before the end of the universe.
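To make the construction concrete, here is a minimal Python sketch of Lemma 1 (the function names are mine, and the sketch is simplified: it merges control flow back together after each if instead of duplicating the trailing code into both branches). A bounded loop is rewritten into straight-line code in which no instruction is ever executed twice.

```python
MAX_OPS = 4  # stands in for the maximal number of operations allowed

# Original program: a loop whose body may run up to MAX_OPS times.
def original(x):
    total = 0
    for _ in range(MAX_OPS):
        if x > 0:
            total += x
            x -= 1
    return total

# Unrolled program: every iteration is written out explicitly, so each
# instruction below is executed at most once per run.
def unrolled(x):
    total = 0
    if x > 0:          # iteration 1
        total = total + x
        x = x - 1
    if x > 0:          # iteration 2
        total = total + x
        x = x - 1
    if x > 0:          # iteration 3
        total = total + x
        x = x - 1
    if x > 0:          # iteration 4
        total = total + x
        x = x - 1
    return total

assert original(3) == unrolled(3)
```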
Lemma 2: Given any program, it is possible to rewrite the program so that no memory cell is ever written twice, except for pointers or “pointer-like” structures.
Sketch of the proof: Use copy-on-write. Every time a memory cell is to be written, stop the computation, update the pointers to point to a new memory zone, resume the computation, and let the write operation be performed there.
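As an illustration (the class and method names below are hypothetical), a write-once memory can be sketched like this: data cells are only ever appended, and the only thing that is updated in place is the pointer table, matching the “pointer-like” exception in the lemma.

```python
class WriteOnceMemory:
    def __init__(self):
        self.cells = []      # append-only storage: each cell is written exactly once
        self.pointers = {}   # variable name -> index of the cell holding its current value

    def write(self, name, value):
        # Copy-on-write: never overwrite an existing cell; allocate a fresh
        # cell and redirect the pointer to it.
        self.cells.append(value)
        self.pointers[name] = len(self.cells) - 1

    def read(self, name):
        return self.cells[self.pointers[name]]

mem = WriteOnceMemory()
mem.write("total", 0)
mem.write("total", mem.read("total") + 5)  # the old cell stays untouched
assert mem.read("total") == 5
assert mem.cells == [0, 5]                 # no cell was ever overwritten
```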
Lemma 3: A program that never executes the same instruction twice and never writes to the same memory cell twice can be laid out in space so that if one operation is located before another in the program, the substrate that performs that operation can be placed in space before the substrate of the subsequent operation. For example, the program can be computed by a river that flows through pipes.
Sketch of the proof: Since no instruction is ever executed twice and no memory cell is ever written twice, data only flows from earlier operations to later ones, so the operations can be arranged along a single downhill direction. While the river has been chosen for intuitive purposes, many other substrates could have been picked—a custom ASIC that implements the program, or a set of marbles rolling through pipes. To convince yourself of this, just pick any YouTube video that shows how to perform computation with marbles or water.
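To see why such a program can be evaluated in a single downstream pass, here is a small sketch (the run_downhill helper and its layout format are mine, not part of any standard construction): since every value is produced exactly once and is only read by later operations, each “station” depends only on stations placed upstream of it.

```python
from typing import Callable

# Each station is (operation, indices of the upstream stations it reads from).
Station = tuple[Callable[..., int], list[int]]

def run_downhill(inputs: list[int], stations: list[Station]) -> list[int]:
    values = list(inputs)  # the first stations are the inputs at the top
    for op, upstream in stations:
        # Every dependency lies upstream, so it has already been computed
        # by the time the "water" reaches this station.
        values.append(op(*(values[i] for i in upstream)))
    return values

# Example layout: compute (a + b) * a with a = 3, b = 4.
layout = [
    (lambda a, b: a + b, [0, 1]),  # station 2 reads stations 0 and 1
    (lambda s, a: s * a, [2, 0]),  # station 3 reads stations 2 and 0
]
assert run_downhill([3, 4], layout)[-1] == 21
```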
Result 1: Given a program that is not self-modifying and a maximal number of operations to execute, it is possible to build a set of pipes with mechanical switches that, when filled with water, computes that program up to the allowed number of operations without ever flowing uphill.
Proof: This follows immediately from the three lemmas.
Analysis: We have shown that for every “relevant” program, we can build a river that computes the same program flowing downward, without ever moving the water upstream. While this is not surprising when we think of it in terms of water flowing into pipes to perform additions, it is very surprising to think of a river flowing down pipes as being conscious, answering questions posed by humans by filling and emptying particular pipes on the river’s edge, while the “mind” of the river keeps moving downward.
Since we are operating under the assumption that Turing machines can be conscious, it follows that the river computing the same program would be conscious as well. The only way it would not be is if either:
Consciousness is tied to the property of self-modifying code, which I doubt anyone would claim.
Consciousness is tied to the computation substrate not being bound to an upper limit of operations, which seems absurd since it would imply that a computer can be conscious, but a computer with a time bomb is not.
The regular program is conscious, but the river is a philosophical zombie, which would suggest that humans are conscious but computers cannot be, and more generally that consciousness is a property of the substrate, not of the computation.
The Bizarre Properties of the Conscious River
As we discussed in the previous section, if we assume that Turing machines can be conscious, then we can modify the river components to achieve various strange properties that violate our assumptions about consciousness. Some of these properties exist in currently existing computers as well; others illustrate more clearly the relationship between the substrate and consciousness, making it more intuitive that something strange is happening.
Stream of Consciousness
Materialistic non-computationalist views assert that consciousness arises from matter. If the conscious river can be built, such a position would be untenable. If consciousness arises from matter, then for a stream of consciousness to exist, at the very least the same atoms should, for some stretch of time, be involved in the flowing of the river. This is not the case for the river. The water may be segregated into sections, with switches that prevent water from flowing downstream, and new water may be released in the next section depending on how much water trickled down from above. In that case, no atom of water ever flows from top to bottom. Each atom is confined to its own section and never escapes it.
Multiple Consciousnesses
Imagine now that instead of the water being segregated into sections or flowing continuously down the river, a finite amount of water is emitted from the top. After the water has moved through part of the pipes, all switches are reset and more water is emitted. The various waves move down the pipes separated by some time, but they are all in the pipes at the same time.
Each emission of water would be an independent conscious being. Each would emit a different output when it reaches one of the output-emitting pipes, each performing a different computation. This modification to the river seems to suggest that there is no such thing as a “stream of consciousness,” but rather only “moments of consciousness” that have the illusion of being a stream because they can recall memories of previous moments.
The River Flowing in a Chinese Room
Imagine now we let the river, segregated into sections, flow once. We register the water levels at each output switch of a given section. Then we replace that entire section with an instrument that, when water flows from upstream, releases the same amount of water that was recorded from the previous execution. Then we reset the river and let the same amount of water flow from the top, just like the previous run. That is, the river flows the same way twice, except a section has been eliminated and replaced with a tool that emits the same amount of water as before.
Does this mean that there is an experience before the Chinese Room section and one after, but no experience in between? Does it mean that the water-releasing mechanism experiences all the “conscious moments” it replaced? The first alternative seems more compelling to me, and supports the idea that there is no such thing as a stream of consciousness, but only “moments of consciousness.”
Illusionism or What Is the River Length That Generates Consciousness?
Since we presuppose that the river is conscious, what is the minimal number of operations in Lemma 1 that generates a conscious river? If we build one that performs a single addition, is it conscious? Is it conscious after 1,000 additions? If we stop the water when only half of it has activated the switch of the next section, does the river experience consciousness with half intensity? Is the intensity related to the number of operations? Can we quantify the amount of consciousness?
Even if we assume that there are only “conscious moments,” when does the moment actually get finalized?
Before I started devising this thought experiment, I was a computationalist, but it now seems to me that if we want to save the premise that Turing machines can be conscious, the most likely candidate is the idea that experience is an illusion. Whatever is happening in our minds is identical to the action of recovering information from the senses or from memory, which can be done by any version of the conscious river. In that case, the river length would not be an issue—there was no consciousness to begin with.
Negating the Assumption
If illusionism is wrong and humans do have consciousness, then it seems to me that we can only negate the assumption that Turing machines can be conscious. If that is the case, then the only way out seems to be a kind of materialism akin to the one proposed by John Searle.
In that vision of the world, consciousness is unrelated to computation. Consciousness is a physical property of the real world, generated by certain substrates. Some conscious beings can perform symbolic computations within their consciousness, and through those symbolic computations their consciousness can refer to their whole being by using a symbol as a proxy. That is, when another human is thinking about you or about a memory of you, they are not thinking about you or the memory itself; their consciousness is filled with a symbol of you, or a symbol that describes a memory of you. The same is true when they are thinking about a past experience: it is merely a symbol that represents the past experience. An animal unable to perform symbolic computation would be conscious but unable to reference itself in its thoughts or understand its mortality. It could remember past thoughts, but it could only respond to them instinctively; it could not use them to plan.
A computer simulating a brain would simulate the consciousness too, but just as the simulated brain does not exist, the simulated consciousness would not exist either. What would exist would be the symbolic computation performed by both the real brain and the simulated brain, which in both cases may perform self-reference by thinking about the symbols they associate with the human being they simulate, or their experiences.
This vision of the world solves all the problems. Humans remain conscious and Turing complete. Rivers remain non-sentient, and computers remain Turing complete, self-referential, potentially world-ending, but not conscious. They would be philosophical zombies, self-aware but without creating a physical object in the real world that holds their experience.
Why? I see no problem with a consciousness that constantly changes which atoms it is built on.
Well, OK? Doesn’t seem weird to me.
Yes, those are computationalist views. Computationalism is pretty much self-consistent, since it says that any materialized computation can be conscious, and it is very similar to illusionism.
A few remarks that don’t add up to either agreement or disagreement with any point here:
Considering rivers conscious hasn’t been a difficulty for humans, as animism is a baseline impulse that develops even in the absence of theism, and it takes effort, at either the individual or the cultural level, for people to learn not to anthropomorphize the world. As such, I’d suggest that a thought experiment that allows for the possibility of a conscious river, even if composed of atomic moments of consciousness arising from strange flows through an extremely complex network of pipes, taps back into that underlying animistic impulse, and so will only seem weird to those who’ve previously managed to suppress it, either through effort or nurture.
Conversely, as one can learn to suppress their animistic impulse towards the world, one can also suppress their animistic impulse towards themselves. Buddhism is the paradigmatic example of that effort. Most Buddhist schools of thought deny the reality of any kind of permanent self, asserting that the perception of an “I” emerges from atomistic moments as an effect of their interactions, not as their cause or as a parallel process to them. From this perspective we may have a river that is “non-conscious in itself” but whose pipe flows, interrupted or otherwise, cause the emergence of consciousness, exactly as human minds do and in no way differently.
But even those Buddhist schools that do admit a “something extra” at the root of the experience of consciousness consider it a form of matter that binds to ordinary matter and, operating as a single organic mixture, gives rise to those moments of consciousness. This might correspond, or be analogous on some level, to Searle’s symbols, at least going by the summarized view presented in this post. Now, irrespective of whether such symbols are reducible to ordinary matter, if they can “attach” to the human brain’s matter to form, er, “carbon-based neuro-symbolic aggregates”, nothing in principle (that I can imagine, at least) prevents them from attaching to any other substrate, such as water pipes, at which point we’d have “water-based pipe-symbolic” ones. Such an aggregate might develop a mind of its own, and even a human-like mind, complete with a self-delusion that similarly believes that emergent self to be essential.
As such, it’d seem to me that, without a fully developed “physics of symbols”, such speculations may go either way and don’t really help solve the issue. A full treatment of the topic would need to expand on all such possibilities, and then analyse them from perspectives such as the ones above, before properly contrasting them.
Thanks, the consideration about the river is interesting. The reason I picked it is that I am trying to provide a non-computer medium to explain the implications of computers being conscious, in particular the fact that the whole mechanism can be laid down in a single direction in space. I could have picked a set of marbles running down pipes instead, but that would be less intuitive to those who have never seen a computer implemented with marbles. I am not sure which alternative would be best.
Then just a clarification on symbols. Symbols would not be the source of moments of consciousness. Symbols would just be syntactical constructs independent from consciousness, which can be manipulated both by some conscious beings, such as humans, and by computers. For example, a sheep is very clearly conscious, but if it does use symbols, they are very simple symbols used to keep track of geography, other sheep, and other things important for its survival; it is not a Turing-complete machine. In that view it is not an issue that symbols attach to any substrate, because they are unrelated to consciousness and simply muddy the waters by introducing the ability of self-referencing. The substrate independence of symbols does not extend to consciousness, because in that view it is the conscious mind that generates symbols, not the other way around.
I lack the knowledge to express the following idea with the right words, so forgive the ugly way of saying this: it is my understanding that to some degree one could even claim that the objective of Buddhism (or at least Zen Buddhism) is to break the self-referencing loop arising from symbols, since symbols are a prerequisite for self-awareness and thus for negative emotions. Without symbols one would be conscious but unable to worry about oneself.