My basic point was just that, if consciousness is only a property of a specific physical entity (e.g. a long knotted loop of planck-flux), and if your artificial brain doesn’t contain any of those (e.g. it is made entirely of short trivial loops of planck-flux), then it won’t be conscious, even if it simulates such an entity.
I will address your questions in a moment, but first I want to put this discussion back in context.
Qualia are part of reality, but they are not part of our current physical theory. Therefore, if we are going to talk about them at all, while focusing on brains, there is going to be some sort of dualism. In this discussion, there are two types of property dualism under consideration.
According to one, qualia, and conscious states generally, are correlated with computational states which are coarse-grainings of the microphysical details of the brain. Coarse-graining means that the vast majority of those details do not matter for the definition of the computational state.
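The coarse-graining idea can be made concrete with a toy sketch (entirely my own illustration, not anything from the discussion above): uncountably many distinct microstates collapse onto one computational state, so almost all microphysical detail is thrown away.

```python
# Toy illustration of coarse-graining: a neuron's membrane potential
# (a real number, standing in for vast microphysical detail) is reduced
# to a single computational bit: firing or not firing.
# The -55 mV threshold is a conventional textbook figure, nothing more.

def coarse_grain(membrane_potential_mv: float, threshold_mv: float = -55.0) -> int:
    """Map an exact microstate to a coarse-grained computational state."""
    return 1 if membrane_potential_mv >= threshold_mv else 0

# Wildly different microstates land in the same equivalence class:
assert coarse_grain(-54.9) == coarse_grain(-10.0) == 1
assert coarse_grain(-70.0) == coarse_grain(-55.1) == 0
```

The point of the sketch is the many-to-one mapping: the computational state is an equivalence class, and the exact microstate underneath it is invisible at that level of description.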
According to the other sort of theory, which I have been advocating, qualia and conscious states map to some exact combination of exact microphysical properties. The knotted loop of planck-flux, winding through the graviton weave in the vicinity of important neurons, etc., has been introduced to make this option concrete.
My actual opinion is that neither of these is likely to be correct, but that the second should be closer to the truth than the first. I would like to get away from property dualism entirely, but it will be hard to do that if the physical correlate of consciousness is a coarse-grained computational state, because there is already a sort of dualism built into that concept—a dualism between the exact microphysical state and the coarse-grained state. These coarse-grained states are conceptual constructs, equivalence classes that are vague at the edges, with no prospect of being made exact in a nonarbitrary way; they are just intrinsically unpromising as an ontological substrate for consciousness. I’m not disputing the validity of computational neuroscience and coarse-grained causal analysis; I’m just saying it’s not the whole story. When we get to the truth about mind and matter, it’s going to be more new-age than cyberpunk, more organic than algorithmic, more physical than virtual. You can’t create consciousness just by pushing bits around; it’s something far more embedded in the substance of reality. That’s my “prediction”.
Now back to your comment. You say: if consciousness—and conscious cognition—really depends on some exotic quantum entity woven through the familiar neurons, shouldn’t progressive replacement of biological neurons with non-quantum prostheses lead to a contraction of conscious experience and an observable alteration and impairment of behavior, as the substitution progresses? I agree that this is a reasonable expectation, if you have in mind Hans Moravec’s specific scenario, in which neurons are replaced one at a time while the subject is intellectually active and interacting with their environment.
Whether Moravec’s scenario is itself reasonable is another thing. There are about 30 million seconds in a year, and there are billions of neurons in the cortex alone. The cortical neurons are densely interconnected with each other via their axons. It would be very remarkable if a real procedure of whole-brain neural substitution didn’t involve periods of functional impairment, as major modules of the brain are removed and then replaced with prostheses.
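The rough arithmetic behind that timescale worry can be spelled out (illustrative numbers only; the cortical neuron count is a commonly cited estimate, not something established above):

```python
# Back-of-the-envelope: how fast would one-at-a-time replacement have to go?
SECONDS_PER_YEAR = 30_000_000        # ~30 million, as stated above
CORTICAL_NEURONS = 16_000_000_000    # ~16 billion, a commonly cited estimate

# To replace every cortical neuron within a single year:
neurons_per_second = CORTICAL_NEURONS / SECONDS_PER_YEAR
print(f"~{neurons_per_second:.0f} replacements per second")  # ~533
```

Even granting continuous, error-free surgery, that rate is hundreds of neurons per second sustained for a year, which is why a literal Moravec procedure without functional interruptions looks so implausible.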
I also find it very unlikely that attempting a Moravec procedure of neuronal replacement, and seeing what happens, will be important as a test of such rival paradigms of consciousness. I suppose you’re thinking in terms of a hypothetical computational theory of neurons whose advocates consider it good enough to serve as the basis of a Moravec procedure, versus skeptics who think that something is being left out of the model.
But inserting functional replacements for individual cortical neurons in vivo will require very advanced technology. For people wishing to conduct experiments in mind emulation, it will be much easier to employ the freeze-slice-and-scan paradigm currently contemplated for C. elegans, plus state-machine models from functional imaging for brain regions where function really is coarser in its implementation. Meanwhile, on the quantum side, while there certainly need to be radical advances in the application of concepts from condensed-matter physics to living matter, if the hypothesized quantum aspects of neuronal function are to be located… I think the really big advances that are required must be relatively simple. Alien to our current understanding, which is why they are hard to attain, but nonetheless simple, in the way that the defining concepts of physics are simple.
There ought to be a physical-ontological paradigm which simultaneously (1) explains the reality behind some theory-of-everything mathematical formalism, (2) explains how a particular class of entities from the theory can be understood as conscious entities, and (3) makes it clear how a physical system like the human brain could contain one such entity with the known complexity of human consciousness. Because it has to forge a deep connection between two separate spheres of human knowledge—natural science and the phenomenology of consciousness—new basic principles are needed, not just technical elaborations of known ways of thinking. So neurohacking exercises like brain emulation are unlikely to be very relevant to the discovery of such a paradigm. It will come from inspired high-level thinking, working with a few crucial facts; and then the paradigm will be used to guide the neurohacking—it’s the thing that will allow us to know what we’re doing.