The main reason is the fuzzy physical ontology of standard computational states, and how that makes them unsuitable as the mereological base for consciousness. When we ascribe a computational state to something like a transistor, we’re not talking about a crisply objective property. The physical criterion for standard computational ontology is functional: if the device performs a certain role reliably enough, then we say it’s in a 0 state, or a 1 state, or whatever. But physically, there are always possible edge states, in which the performance of the computational role is less and less reliable. It’s a kind of sorites problem.
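To make the sorites point concrete, here is a toy sketch of the functional criterion (the voltage thresholds are made up for illustration, not taken from any real logic-family datasheet). The in-between region is exactly where ascribing a computational state becomes vague:

```python
# Toy model of reading a logic level from a voltage. Thresholds are
# illustrative only, not from any real datasheet.

def logical_state(voltage: float) -> str:
    """Ascribe a computational state to a physical voltage."""
    if voltage <= 0.8:   # reliably read as logic 0
        return "0"
    if voltage >= 2.0:   # reliably read as logic 1
        return "1"
    # Edge states: the device's performance of its computational role
    # is increasingly unreliable here, and there is no principled,
    # observer-independent place to draw the 0/1 boundary.
    return "undefined"

for v in [0.0, 0.79, 1.2, 1.4, 1.6, 2.5]:
    print(f"{v:.2f} V -> {logical_state(v)}")
```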
For engineering, the vagueness of edge states doesn’t matter, so long as you prevent them from occurring. Ontology is different. If something has an observer-independent existence, then for all possible states, either it’s there or it’s not. Consciousness must satisfy this criterion; standard computational states cannot; therefore consciousness cannot be founded on standard computational states.
For me, this provides a huge incentive to look for quantum effects in the brain that are functionally relevant to cognition and consciousness, because the quantum world introduces different kinds of ontological possibilities. Basically, one might look for reservoirs of entanglement that are coupled to the classical computational processes which make up the whole of present-day cognitive neuroscience. Candidates would include various collective modes of photons, electrons, or phonons, in cytoplasmic water or in polymeric structures like microtubules. I feel like the biggest challenge is to get entanglement on a scale larger than the individual cell; I should look at Michael Levin’s stuff from that perspective sometime.
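To be concrete about what “entanglement” means quantitatively, here is a minimal numpy sketch, a textbook two-qubit toy rather than any claim about neural tissue: the von Neumann entropy of a subsystem is nonzero exactly when the whole pure state is entangled.

```python
# Minimal sketch: quantifying entanglement in a two-qubit pure state
# via the entropy of the reduced density matrix.

import numpy as np

def entanglement_entropy(psi: np.ndarray) -> float:
    """Von Neumann entropy (in bits) of subsystem A of a 2-qubit pure state."""
    m = psi.reshape(2, 2)          # amplitudes psi_{ab} as a matrix
    rho_a = m @ m.conj().T         # reduced density matrix, tracing out B
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]   # drop zeros; 0*log(0) -> 0
    return float(-(evals * np.log2(evals)).sum())

bell    = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
product = np.array([1, 0, 0, 0], dtype=float)   # |00>

print(entanglement_entropy(bell))     # 1.0 bit: maximally entangled
print(entanglement_entropy(product))  # 0.0 bits: no entanglement
```

For two-qubit pure states this entropy ranges from 0 (a product state, effectively classical) to 1 bit (a Bell state).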
Just showing that entanglement matters at some stage of cognition doesn’t solve my vagueness problem, but it does lead to new mereological possibilities that appear to be badly needed.
Should I infer that you don’t believe in many worlds?
Many worlds is an ontological possibility. I don’t regard it as favored over one-world ontologies. I’m not aware of a fully satisfactory, rigorous, realist ontology, even just for relativistic QFT.
Is there a clash between many worlds and what you quoted?
I was thinking that “either it’s there or it’s not” as applied to a conscious state would imply you don’t think consciousness can be in an entangled state, or something along those lines.
But reading it again, it seems like you are saying consciousness is discontinuous? As in, there are no partially-conscious states? Is that right?
I’m also unaware of a fully satisfactory ontology for relativistic QFT, sadly.
Gradations of consciousness, and the possibility of a continuum between consciousness and non-consciousness, are subtle topics, especially when considered in conjunction with concepts whose physical grounding is vague.
Some of the kinds of vagueness that show up:
Many-worlders who are vague about how many worlds there are. This can lead to vagueness about how many minds there are too.
Sorites-style vagueness about the boundary in physical state space between different computational states, and about exactly which microphysical entities count as part of the relevant physical state.
(An example of a microphysically vague state being used to define boundaries is the adaptation of the “Markov blanket” by fans of Friston and the free energy principle.)
I think a properly critical discussion of vagueness and continuity, in the context of the mind-brain relationship, would need to figure out which kinds of vagueness can be tolerated and which cannot; and would also caution against hiding bad vagueness behind good vagueness.
Here I mean that sometimes, if one objects to basing mental ontology on microphysically vague concepts like “Everett branch” or “computational state”, one is told that this is OK because there’s vagueness in the mental realm too, e.g. the vagueness of a color concept, or of the boundary between being conscious and being unconscious.
Alternatively, one also hears mystical ideas like “all minds are One” being justified on the grounds that the physical world is supposedly a continuum without objective boundaries.
Sometimes, one ends up having to appeal to very basic facts about the experienced world, like: my experience always has a particular form. I am always having a specific experience, in a way that is unaffected by the referential vagueness of the words or concepts I might use to describe it. Or: I am not having your experience, and you are not having mine, the implication being that there is some kind of objective difference or boundary between us.
To me, those are the considerations that can ultimately decide whether a particular proposed psychophysical vagueness is true, possible, or impossible.