I reject the computational paradigm of mind in its most ambitious form, the one which says that mind is nothing but computation—a notion which, outside of rigorous computer science, isn’t even well-defined in these discussions.
One issue that people blithely pass by when they just assume computationalism, is meaning—“representational content”. Thoughts, mental states, are about things. If you “believe in physics”, and are coming from a naturalistic perspective, then meaning, intentionality, is one of the great conundrums, up there with sensory qualia. Computationalism offers no explanation of what it means for a bunch of atoms to be about something, but it does make it easy to sail past the issue without even noticing, because there is a purely syntactic notion of computation denuded of semantics, and then there is a semantic notion of computation in which computational states are treated as having meanings embedded into their definition. So all you have to do is to say that the brain “computes”, and then equivocate between syntactic computation and semantic computation, between the brain as physical state machine and the mind as semantic state machine.
The technological object “computer” is a semantic state machine, but only in the same way that a book has meaning—because of human custom and human design. Objectively, it is just a syntactic state machine, and in principle its computations could be “about” anything that’s isomorphic to them. But actual states of mind have an objective intrinsic semantics.
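To make the distinction concrete, here is a toy sketch (added for illustration, not part of the original comment): a single two-state transition table, read under two incompatible interpretations. The table itself is pure syntax; the "aboutness" is supplied from outside, and any domain isomorphic to the state dynamics would serve equally well.

```python
# One syntactic state machine: states {0, 1}, a single input symbol "tick".
transition = {
    (0, "tick"): 1,
    (1, "tick"): 0,
}

def run(state, inputs):
    """Drive the machine through a sequence of input symbols."""
    for symbol in inputs:
        state = transition[(state, symbol)]
    return state

# Two different semantic readings of the *same* machine:
sheep_reading  = {0: "even number of sheep counted", 1: "odd number of sheep counted"}
switch_reading = {0: "the light is off",             1: "the light is on"}

final = run(0, ["tick"] * 5)
print(sheep_reading[final])    # "odd number of sheep counted"
print(switch_reading[final])   # "the light is on"
```

Nothing in the transition table privileges one reading over the other; that is the sense in which the machine, taken objectively, is only a syntactic state machine.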
Ultimately, I believe that meaning is grounded in consciousness, that there are “semantic qualia” too; that the usual ontologies of physics must be wrong, because they contain no such things—though perhaps the mathematics of some theory of physics not too distant from what we already have, can be reinterpreted in terms of a new ontology that has room for the brain having such properties.
But until such time as all of that is worked out, computationalism will persist as a pretender to the title of the true philosophy of mind, incidentally empowering numerous mistaken notions about the future interplay of mind and technology. In terms of this placeholder theory of conscious quantum vortices, there’s no problem with the idea of neural prostheses that work with your vortex, or of conscious vortices in something other than a biological brain; but if a simulation of a vortex isn’t itself a vortex, then it won’t be conscious.
According to theories of this nature, in which the ultimate substrate of consciousness is substance rather than computation, the very idea of a “conscious program” is a conceptual error. Programs are not the sorts of things that are conscious; they are a type of virtual state machine that runs on a Turing-universal physical state machine. Specifically, a computer program is a virtual machine designed to preserve the correctness of a particular semantic interpretation of its states. That’s the best ontological characterization of what a computer program is, that I can presently offer. (I’m assuming a notion of computation that is not purely syntactic—that the computations performed by the program are supposed to be about something.)
Incidentally, I coughed up this vortex notion, not because it solves the ontological problem of intentional states, but just because knotted vortex lines are a real thing from physics that have what I deem to be properties necessary in a physical theory of consciousness. They have complex internal states (their topology) and they have an objective physical boundary. The states usually considered in computational neuroscience have a sorites problem; from a microphysical perspective, one that considers what everything is really made of, they are defined extremely vaguely, akin to thermodynamic states. This is OK if we’re talking about unconscious computations, because they only have to exist in a functional sense; if the required computational mappings are performed most of the time under reasonable circumstances, then we don’t have to worry about the inherent impreciseness of the microphysical definition of those states.
But conscious states have to be an objective and exact part of any ultimate ontology. Consciousness is not a fuzzy idea which humans made up and which may or may not be part of reality. In a sense, it is your local part of reality, the part of reality that you know is there. It therefore cannot be regarded as a thing which exists approximately or vaguely or by convention, all of which can be said of thermodynamic properties and of computational states that don’t have a microphysically exact definition. The quantum vortex in your cortex is, by hypothesis, something whose states have a microphysically exact definition, and so by my physical criterion, it at least has a chance of being the right theory.
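As an aside on what an "exact topological state" could mean in practice, here is a minimal numerical sketch (illustrative only, not from the original comment; the particular curves and the discretization are arbitrary choices): the Gauss linking number of two closed loops is an integer-valued invariant, so unlike a coarse-grained computational state there is no edge case about which value it takes.

```python
import numpy as np

def linking_number(curve1, curve2):
    """Estimate the Gauss linking number of two closed polygonal curves
    (arrays of shape (N, 3)) via the double-integral formula."""
    dr1 = np.roll(curve1, -1, axis=0) - curve1   # segment vectors, curve 1
    dr2 = np.roll(curve2, -1, axis=0) - curve2   # segment vectors, curve 2
    m1 = curve1 + 0.5 * dr1                      # segment midpoints, curve 1
    m2 = curve2 + 0.5 * dr2                      # segment midpoints, curve 2
    total = 0.0
    for i in range(len(m1)):
        diff = m1[i] - m2                        # vectors from curve-2 midpoints
        cross = np.cross(dr1[i], dr2)            # pairwise segment cross products
        dist3 = np.linalg.norm(diff, axis=1) ** 3
        total += np.sum(np.einsum('ij,ij->i', diff, cross) / dist3)
    return total / (4 * np.pi)

# Two unit circles linked once, like adjacent links of a chain.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle_a = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
circle_b = np.stack([1 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)

print(round(linking_number(circle_a, circle_b)))   # +/- 1: a discrete, exact invariant
```

The numerical estimate converges to an integer; the point is only that topological properties of a loop are the kind of thing that has a sharp, observer-independent value.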
incidentally empowering numerous mistaken notions about the future interplay of mind and technology.
Is that a prediction then? That your family and friends could somehow recognize the difference between you and a simulated copy of you? That the simulated copy of you would somehow not perceive itself as you? That the process just can’t work and can’t create anything recognizably conscious, intelligent, or human? (And does that mean strong AI needs to run on something other than a computer?) Or are you thinking it will be a philosophical zombie, and everyone will be fooled into thinking it’s you?
What do you think will actually happen, if/when we try to simulate stuff? Let’s just say that we can do it roughly down to the molecular level.
states have a microphysically exact definition
What precludes us from simulating something down to a sufficiently microphysically exact level? (I understand that you’ve got a physical theory of consciousness, but I’m trying to figure out how this microphysical stuff plays into it.)
That the simulated copy of you would somehow not perceive itself as you? That the process just can’t work and can’t create anything recognizably conscious, intelligent, or human?
Don’t worry—the comments by Mitchell_Porter in this comment thread were actually written by a vortexless simulation of an entirely separate envortexed individual who also comments under that account. So here, all of the apparent semantic content of “Mitchell_Porter”’s comments is illusory. The comments are actually meaningless syntactically-generated junk—just the emissions of a very complex ELIZA chatbot.
What do you think will actually happen, if/when we try to simulate stuff?
I’ll tell you what I think won’t happen: real feelings, real thoughts, real experiences.
A computational theory of consciousness implies that all conscious experiences are essentially computations, and that the same experience will therefore occur inside anything that performs the same computation, even if the “computer” is a network of toppling dominoes, random pedestrians making marks on walls according to small rulebooks, or any other bizarre thing that implements a state machine.
This belief derives entirely from one theory of one example—the computational theory of consciousness in the human brain. That is, we perceive that thinking and experiencing have something to do with brain activity, and one theory of the relationship, is that conscious states are states of a virtual machine implemented by the brain.
I suggest that this is just a naive idea, and that future neuroscientific and conceptual progress will take us back to the idea that the substrate of consciousness is substance, not computation; and that the real significance of computation for our understanding of consciousness, will be that it is possible to simulate consciousness without creating it.
From a physical perspective, computational states have the vagueness of all functional, user-dependent concepts. What is a chair? Perhaps, anything you can sit on. But people have different tastes, whether you can tolerate sitting on a particular object may vary, and so on. “Chair” is not an objective category; in regions of design-space far from prototypical examples of a chair, there are edge cases whose status is simply disputed or questionable.
Exactly the same may be said of computational states. The states of a transistor are a prototypical example of a physical realization of binary computational states. But as we consider increasingly messy or unreliable instantiations, it becomes increasingly difficult to just say, yes, that’s a 0 or a 1.
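For what it’s worth, the edge-case point can be put in one toy function (added for illustration, not from the original comment; the thresholds are arbitrary): the prototypical readings are unproblematic, but in the middle region, calling the state a 0 or a 1 is a stipulation rather than a fact.

```python
def read_bit(voltage, low=0.8, high=2.0):
    """Interpret an analog voltage as a binary state, or admit it is neither."""
    if voltage <= low:
        return 0
    if voltage >= high:
        return 1
    return None   # edge case: any assignment here is a convention, not a discovery

for v in [0.05, 3.3, 1.4]:
    print(v, "->", read_bit(v))
# 0.05 -> 0, 3.3 -> 1, 1.4 -> None
```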
Consider the implications of this for a theory of consciousness which says, that the necessary and sufficient condition for the occurrence of a given state of consciousness, is the occurrence of a specific “computational state”. It means that whether or not a particular consciousness exists, is not a yes-or-no thing—it’s a matter of convention or definition or where you draw the line in state space.
This is untenable in exactly the same way that Copenhagenist complacency about the state of reality in quantum mechanics is untenable. It makes no sense to say that the electron has a position, but not a definite position, and it makes no sense to say that consciousness is a physical thing, but that whether or not it exists in a specific physical situation is objectively indeterminate.
If you are going to say that consciousness depends on the state of the physical universe, there must be a mapping which gives unique and specific answers for all possible physical states. There cannot be edge cases that are intrinsically undetermined, because consciousness is an objective reality, whereas chairness is an imputed property.
The eerie dualism of computer theories of consciousness, whereby the simulated experience mystically hovers over or dwells within the computer mainframe, chain of dominos, etc—present in the same way, regardless of what the “computer” is made of—might already have served as a clue that there was something wrong about this outlook. But the problem in developing this criticism is that we don’t really know how to make a nondualistic alternative work.
Suppose that the science of tomorrow came to the conclusion that the only things in the world that can be conscious, are knots of flux in elementary force fields. Bravo, it’s a microphysically unambiguous criterion… but it’s still going to be property dualism. The physical property “knotted in a certain madly elaborate shape”, and the subjective property “having a certain intricate experience”, are still not the same thing. The eerie dualism is still there, it’s just that it’s now limited to lines of flux, and doesn’t extend to bitstreams of toppling dominoes, Searlean language rooms, and so on. We would still have the strictly physical picture of the universe, and then streams of consciousness would be an extra thing added to that picture of reality, according to some laws of psychophysical correlation.
However, I think this physical turn, away from the virtual-machine theory of consciousness, at least brings us a little closer to nondualism. It’s still hard to imagine, but I see more potential on this path, for a future theory of nature in which there is a conscious self, that is also a physical entity somewhere on the continuum of physical entities in nature, and in which there’s no need to say “physically it’s this, but subjectively it’s that”—a theory in which we can speak of the self’s conscious state, and its causal physical interactions, in the same unified language. But I do not see how that will ever happen with a purely computational theory, where there will always be a distinction between the purely physical description, and the coarse-grained computational description that is in turn associated with conscious experience.
What do you think will actually happen, if/when we try to simulate stuff?
I’ll tell you what I think won’t happen: real feelings, real thoughts, real experiences.
It’ll still be pretty cool when the philosophical zombie uploads who act exactly like qualia-carrying humans go ahead and build the galactic supercivilization of trillions of philosophical zombie uploads acting exactly like people and produce massive amounts of science, technology and culture. Most likely there will even be some biological humans around, so you won’t even have to worry about nobody ever getting to experience any of it.
Actually because the zombie uploads are capable of all the same reasoning as M_P, they will figure out that they’re not conscious, and replace themselves with biological humans.
On the other hand, maybe they’ll discover that biological humans aren’t conscious either, they just say they are for reasons that are causally isomorphic to the reasons for which the uploads initially thought they were conscious, and then they’ll set out to find a substrate that really allows for consciousness.
How do you respond to the thought experiment where your neurons (and glial cells and whatever) are replaced one-by-one with tiny workalikes made out of non-biological material? Specifically, would you be able to tell the difference? Would you still be conscious when the replacement process was complete? (Or do you think the thought experiment contains flawed assumptions?)
Feel free to direct me to another comment if you’ve answered this elsewhere.
My scenario violates the assumption that a conscious being consists of independent replaceable parts.
Just to be concrete: let’s suppose that the fundamental physical reality consists of knotted loops in three-dimensional space. Geometry comes from a ubiquitous background of linked simple loops like chain-mail, other particles and forces are other sorts of loops woven through this background, and physical change is change in the topology of the weave.
Add to this the idea that consciousness is always a state of a single loop, that the property of the loop which matters is its topology, and that the substrate of human consciousness is a single incredibly complex loop. Maybe it’s an electromagnetic flux-loop, coiled around the microtubules of a billion cortical neurons.
In such a scenario, to replace one of these “consciousness neurons”, you don’t just emulate an input-output function, you have to reproduce the coupling between local structures and the extended single object which is the true locus of consciousness. Maybe some nano-solenoids embedded in your solid-state neuromorphic chips can do the trick.
Bear in mind that the “conscious loop” in this story is not meant to be epiphenomenal. Again, I’ll just make up some details: information is encoded in the topology of the loop, the loop topology interacts with electron bands in the microtubules, the electrons in the microtubules feel the action potential and modulate the transport of neurotransmitters to the vesicles. The single extended loop interacts with the localized information processing that we know from today’s neuroscience.
So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let’s suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a “zombie”, an unconscious entity which has been designed in imitation of a conscious being.
Of course, all these hypotheses and details are just meant to be illustrative. I expect that the actual tie between consciousness and microphysics will be harder to understand than “conscious information maps to knots in a loop of flux”.
So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let’s suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a “zombie”, an unconscious entity which has been designed in imitation of a conscious being.
This is done one neuron at a time, though, with the person awake and narrating what they feel so that we can see if everything is going fine. Shouldn’t some sequence of neuron replacement lead to the replacement of neurons that were previously providing consciously accessible qualia to the remaining biological neurons that still host most of the person’s consciousness? And shouldn’t this lead to a noticeable cognitive impairment they can report, if they’re still using their biological neurons to control speech (we’d probably want to keep this the case as long as possible)?
Is this really a thing where you can’t actually go ahead and say that if the theory is true, the simple neurons-as-black-boxes replacement procedure should lead to progressive cognitive impairment and probably catatonia, and if the person keeps saying everything is fine throughout the procedure, then there might be something to the hypothesis of people being made of parts after all? This isn’t building a chatbot that has been explicitly designed to mimic high-level human behavior. The neuron replacers know about neurons, nothing more. If our model of what neurons do is sufficiently wrong, then the aggregate of simulated neurons isn’t going to go zombie, it’s just not going to work because it’s copying the original connectome that only makes sense if all the relevant physics are in play.
My basic point was just that, if consciousness is only a property of a specific physical entity (e.g. a long knotted loop of planck-flux), and if your artificial brain doesn’t contain any of those (e.g. it is made entirely of short trivial loops of planck-flux), then it won’t be conscious, even if it simulates such an entity.
I will address your questions in a moment, but first I want to put this discussion back in context.
Qualia are part of reality, but they are not part of our current physical theory. Therefore, if we are going to talk about them at all, while focusing on brains, there is going to be some sort of dualism. In this discussion, there are two types of property dualism under consideration.
According to one, qualia, and conscious states generally, are correlated with computational states which are coarse-grainings of the microphysical details of the brain. Coarse-graining means that the vast majority of those details do not matter for the definition of the computational state.
According to the other sort of theory, which I have been advocating, qualia and conscious states map to some exact combination of exact microphysical properties. The knotted loop of planck-flux, winding through the graviton weave in the vicinity of important neurons, etc., has been introduced to make this option concrete.
My actual opinion is that neither of these is likely to be correct, but that the second should be closer to the truth than the first. I would like to get away from property dualism entirely, but it will be hard to do that if the physical correlate of consciousness is a coarse-grained computational state, because there is already a sort of dualism built into that concept—a dualism between the exact microphysical state and the coarse-grained state. These coarse-grained states are conceptual constructs, equivalence classes that are vague at the edges and with no prospect of being made exact in a nonarbitrary way, so they are just intrinsically unpromising as an ontological substrate for consciousness. I’m not arguing with the validity of computational neuroscience and coarse-grained causal analysis, I’m just saying it’s not the whole story. When we get to the truth about mind and matter, it’s going to be more new-age than it is cyberpunk, more organic than it is algorithmic, more physical than it is virtual. You can’t create consciousness just by pushing bits around, it’s something far more embedded in the substance of reality. That’s my “prediction”.
Now back to your comment. You say, if consciousness—and conscious cognition—really depends on some exotic quantum entity woven through the familiar neurons, shouldn’t progressive replacement of biological neurons with non-quantum prostheses lead to a contraction of conscious experience and an observable alteration and impairment of behavior, as the substitution progresses? I agree that this is a reasonable expectation, if you have in mind Hans Moravec’s specific scenario, in which neurons are replaced one at a time while the subject is intellectually active and interacting with their environment.
Whether Moravec’s scenario is itself reasonable is another thing. There are about 30 million seconds in a year and there are billions of neurons just in the cortex alone. The cortical neurons are very entangled with each other via their axons. It would be very remarkable if a real procedure of whole-brain neural substitution didn’t involve periods of functional impairment, as major modules of the brain are removed and then replaced with prostheses.
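To put rough numbers on that (a back-of-envelope estimate added for illustration, not the commenter’s; the neuron count is an assumed, commonly cited order-of-magnitude figure):

```python
cortical_neurons = 1.6e10   # assumed: roughly 16 billion neurons in the human cortex
seconds_per_year = 3.0e7    # ~30 million seconds in a year
print(cortical_neurons / seconds_per_year)   # ~533 neurons replaced per second for a full year
```

Even a year-long procedure would have to swap out hundreds of neurons every second, nonstop, which makes uninterrupted normal function during the replacement look optimistic.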
I also find it very unlikely that attempting a Moravec procedure of neuronal replacement, and seeing what happens, will be important as a test of such rival paradigms of consciousness. I suppose you’re thinking in terms of a hypothetical computational theory of neurons whose advocates consider it good enough to serve as the basis of a Moravec procedure, versus skeptics who think that something is being left out of the model.
But inserting functional replacements for individual cortical neurons in vivo will require very advanced technology. For people wishing to conduct experiments in mind emulation, it will be much easier to employ the freeze-slice-and-scan paradigm currently contemplated for C. elegans, plus state-machine models from functional imaging for brain regions where function really is coarser in its implementation. Meanwhile, on the quantum side, while there certainly need to be radical advances in the application of concepts from condensed-matter physics to living matter, if the hypothesized quantum aspects of neuronal function are to be located… I think the really big advances that are required, must be relatively simple. Alien to our current understandings, which is why they are hard to attain, but nonetheless simple, in the way that the defining concepts of physics are simple.
There ought to be a physical-ontological paradigm which simultaneously (1) explains the reality behind some theory-of-everything mathematical formalism, (2) explains how a particular class of entities from the theory can be understood as conscious entities, and (3) makes it clear how a physical system like the human brain could contain one such entity with the known complexity of human consciousness. Because it has to forge a deep connection between two separate spheres of human knowledge—natural science and phenomenology of consciousness—new basic principles are needed, not just technical elaborations of known ways of thinking. So neurohacking exercises like brain emulation are likely to be not very relevant to the discovery of such a paradigm. It will come from inspired high-level thinking, working with a few crucial facts; and then the paradigm will be used to guide the neurohacking—it’s the thing that will allow us to know what we’re doing.
meaning—“representational content”. Thoughts, mental states, are about things. If you “believe in physics”, and are coming from a naturalistic perspective, then meaning, intentionality, is one of the great conundrums, up there with sensory qualia. Computationalism offers no explanation of what it means for a bunch of atoms to be about something
What do you think of Eliezer’s approach to the “meaning” problem in The Simple Truth? I find the claim that the pebble system is about the sheep to be intuitively satisfying.