“What if our brains just happen to be Turing machines?” Another way to ask this might be, “Is consciousness computable?” Might I suggest another question? “Does it matter that Turing machines are self-contained; in other words, that they do not sense or interact with their environment?” Our brains not only sense and interact with our environment, they also sense and control our own bodies. And our bodies, at numerous levels, down to our individual cells, sense and control their own status. Does it matter that the canonical Turing machine has no sensory apparatus other than that which detects ones and zeroes on the program tape? What if it instead also sensed the texture of the tape, the current through the motors and relays, the battery or mains voltage powering it, the feel of the tape advancing through the reader, whether or not it was in good repair, clean or dusty, oiled or unoiled? What if it detected tape jams and other malfunctions, and could take measures to correct the problem? What if it could sense the temperature of the room it was in, and control it, to the degree that temperature affected its function? What if it developed a taste for tapes of a particular brand? What if it found some programs more interesting than others? What if it detected the presence of other Turing machines? What if it had to compete with other Turing machines for access to electrical current, or to tapes of better quality? What if it found another Turing machine that shared some of its tastes in tapes, programs, and electrical sources, but differed in others, and they discussed their preferences, each expanding the other’s appreciation for the resources they needed? Or found that they disagreed so profoundly that one had to destroy the other before still other machines became infected with such bizarre programs?
Could all this sensing, all these preferences, and all the control mechanisms, operate off that one tape, threading back and forth through the reader?
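For contrast, the self-contained model these questions push against fits in a few lines. This is a minimal sketch (the particular state table is a standard two-state busy beaver, chosen only as an illustration): note that the single call to `tape.get` is the machine's entire sensory world.

```python
# A minimal Turing machine simulator. The machine's only "sense" is the
# one symbol under the head; it has no access to temperature, wear,
# voltage, tape texture, or other machines.

def run_turing_machine(delta, state, tape, halt_state, max_steps=1000):
    """delta maps (state, symbol) -> (new_state, write_symbol, move)."""
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, 0)   # the ONLY input the machine ever has
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
    return tape

# A two-state busy beaver: halts after writing four 1s on a blank tape.
delta = {
    ("A", 0): ("B", 1, "R"),
    ("A", 1): ("B", 1, "L"),
    ("B", 0): ("A", 1, "L"),
    ("B", 1): ("H", 1, "R"),
}
tape = run_turing_machine(delta, "A", {}, "H")
print(sum(tape.values()))  # four 1s written
```

Everything the thought experiment adds — sensing the room, tasting the tape brand, negotiating with rival machines — lies outside the `delta` table, which is the whole of the formal model.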
Just phrasing these questions convinces me that the Turing machine model of consciousness fails, that consciousness is not an algorithm, and is not remotely computable. It also convinces me that consciousness is not programmable. It must always self-develop not just in a brain, but in a body that it can control and use to affect the world it lives in.
“I don’t consciously control most of what’s going on in my body” Indeed not—and yet your brain controls many of these low-level functions—and is affected by them in turn. Your brain cells, for instance, are powered by mitochondria.
“I don’t find just phrasing these questions very convincing.” I admit, they weren’t really meant to be—to another person. But what went on in my head as I asked myself those questions persuaded me, and that is my only claim.
“With these assumptions, operating from a single tape is not very limiting” Note that in my model, the operation of the tape reader itself is controlled by the tape, even to such things as clearing malfunctions. How deep does that rabbit hole go?
Randall’s cartoon is amusing, but how well do rocks handle non-deterministic quantum processes? Is it really the rocks doing the computations, or are they just recording the computations that the desert dweller performs in his head? Can he really keep the current machine state in his head? Can machines like that adequately simulate the interactions between different particles and systems within our universe? I think there’s good reason to believe that conscious beings do not arise in the universe the rock machine simulates.
One more point: another way of thinking about what I wrote is that I’m essentially asserting that an embodied consciousness is not, by itself, a finite state machine. It is merely one part of the entire universe in which it is embedded, which it senses, and is affected by in other ways, and which it affects.
I also recommend considering the implications of Adrian Thompson’s evolved tone discrimination circuit, implemented in an FPGA. Lacking a designed clock, the circuit instead used multiple feedback loops, which took a very long time to untangle. Moreover, the final version of the circuit employed elements that were not in the designed signal paths of the array. Though the array was intended to be digital, the evolved circuit used analog effects to communicate with the isolated elements. How would a finite state machine represent these effects?
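Thompson’s procedure can be caricatured as an ordinary genetic-algorithm loop. The sketch below is schematic: the genome length, population size, rates, and especially the toy `fitness` function are illustrative stand-ins. In the real experiment each genome configured a physical FPGA and fitness was measured from the chip’s actual behavior discriminating tones — which is precisely where unmodeled analog effects could be, and were, exploited.

```python
import random

random.seed(0)

# Schematic of an evolutionary hardware loop (all parameters illustrative).
GENOME_BITS = 64          # stand-in for an FPGA configuration bitstream
POP, GENS, MUT = 30, 40, 0.02

def fitness(genome):
    # Toy stand-in: reward an alternating bit pattern. In Thompson's
    # experiment this was a physical measurement of how well the
    # configured chip separated two input tones.
    return sum(b == (i % 2) for i, b in enumerate(genome))

def mutate(genome):
    return [b ^ (random.random() < MUT) for b in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_BITS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]                 # keep the better half unchanged
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(fitness(best))
```

The loop itself is indifferent to *how* a genome earns its score — it selects on measured behavior alone, which is why evolution on real silicon was free to recruit effects no digital model of the array contained.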