That would amount to convincing me that the experience which is currently happening is not currently happening; or that an experience which previously happened did not actually happen.
Why? What’s wrong with an experience happening in another way than you imagine?
This more than anything cries “crackpot” to me: the uncompromising attitude that your opponents’ view must lead to absurdities. Like Christians arguing that without souls, atheists should go on killing sprees all the time.
What’s wrong with an experience happening in another way than you imagine?
You could be talking about ontology here, or you could be talking about phenomenology (and then there is the small overlap where we talk about phenomenological ontology, the ontology of appearances).
An example of an experience happening ontologically in a different way than you imagine might be a schizophrenic who thinks voices are being beamed into their head by the CIA, when in fact the voices are an endogenously created hallucination.
An example of an experience happening phenomenologically in a different way than you imagine might be a court witness who insists, quite honestly, that they saw the defendant driving the stolen car, but who in fact never really had that experience.
We are talking here about the nature of color experience. I interpret WrongBot to be making a phenomenological claim: that there aren’t actually colors even at the level of experience. Possibly you think the argument is about the causes or “underlying nature” of color experience, e.g. the idea that a color perception is really a neural firing pattern.
If the argument is solely at the level of phenomenology, then there is no need to take seriously the idea that the colors aren’t there. This isn’t a judgment call about an elusive distant event. Colors are right in front of me, every second of my waking life; it would be a sort of madness to deny that.
If the argument is at the level of ontology, then I presume that color perception does indeed have something to do with neural activity. But the colors themselves cannot be identified with movements of ions through neural membranes, or whatever the neurophysical correlate of color is supposed to be, because we already have a physical ontology and it doesn’t contain any such extra property. So either we head in the direction of functionalist dualism, like David Chalmers, or we look for an alternative. My alternative is a monism in which the “Cartesian theater” does exist and can be identified with a single large quantum tensor factor somewhere in the brain. I am not dogmatic about this (there are surely other possibilities), but I do insist that colors exist and that they cannot be monistically identified with collective motions of ions or averages of neural firing rates.
(ETA: Incidentally, I don’t deny the existence and relevance of classical encodings of color properties in the nervous system. It’s just that, on a monistic quantum theory of mind, this isn’t the physical correlate of consciousness; it’s just part of the causal path leading towards the Cartesian theater, where the experience itself is located.)
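To give a schematic sense of what identification with “a single large quantum tensor factor” would even assert (the decomposition and the labels below are purely illustrative on my part, not a worked-out proposal): the relevant Hilbert space would have to factorize as

$$\mathcal{H}_{\text{brain}} \;\cong\; \mathcal{H}_{\text{theater}} \otimes \mathcal{H}_{\text{rest}},$$

with the experience being the state of the $\mathcal{H}_{\text{theater}}$ factor itself, not a coarse-grained classical summary of activity distributed across the whole.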
Do we need a separate understanding for the feeling you get when you see a loved one? Is there a thing separate from the neurons and from the particles of scent that constitutes the true REAL smell? What about the effects of caffeine? There is nothing inherent to that molecule that equates to “alertness” any more than there are “green” atoms. Do you think there is a separate “alertness” mind-object that interacts with a nonphysical part of coffee? Do you think these things are also unable to be explained by neurons, or do you think colors are different?
Colors are just the most vivid example. Smells and feelings are definitely part of consciousness—that is, part of the same phenomenal gestalt as color—so they are definitely on the same ontological level. A few comments up the thread, I talked about color as a three-dimensional property associated with visual regions. Smell is similarly a sensory quale embedded in a certain way in the overall multimodal sensory gestalt. Feelings are even harder to pin down; they seem to be a complex of bodily sensation, sensations called “moods” that aren’t phenomenally associated with a body region, and even some element of willed intentionality. Alertness itself isn’t a quale; it’s a condition of hyperattentiveness. But it is possible to notice that you are attending intently to things, so alertness is a possible predicate of a reflective judgment made about oneself on the basis of phenomenal evidence. In other words, it’s a conceptual posit made as part of a higher-order intentional state.
These discussions are bringing back to me the days when I made a serious attempt to develop a phenomenological ontology. All the zeroth-order objects of an experience were supposed to be part of a “total instantaneous phenomenal state of affairs”, and then you had higher-order reflective judgments made on top of that, which themselves could become parts of still higher-order judgments. Cognitive scientists and AI theorists do talk about intentionality, but only functionally, not phenomenologically. Even philosophers of consciousness sometimes hesitate to say that intentional states are part of consciousness—they’re happier to focus on sensation, because it’s so obvious, not just that it’s there, but that you know it’s there.
However, it’s also clear, not only that we think, but that we know we are thinking—even if this awareness is partly mediated by a perceptual presentation to oneself of a stream of symbols encoding the thought, such as a subvocalization—and so I definitely say intentionality is part of consciousness, not just sensation. Another way to see this is to notice that we see things as something. There’s a “semantics” to perception, the conceptual ingredient in the phenomenal gestalt. Therefore, it’s not enough to characterize conscious states as simply a blob of sensory qualia—colors varying across the visual field, other sense-data varying across the other sensory modalities. The whole thing is infused, even at the level of consciousness, with interpretation and conceptual content. How to express this properly—how to state accurately the ontology of this conceptual infusion into the phenomenal—is another delicate issue, though plenty has been written about it, for example by Kant and Husserl.
So everything that is a part of experience is part of the problem. Experiences have structure (for example, the planar structure of a depthless visual field); concepts have logical structure and conditions of application; thoughts also have a combinatorial structure. The key to computational materialism is a structural and causal isomorphism between the structure of conscious and cognitive states and the structure of physical and computational states. The problem is that the isomorphism can’t be an identity if we use ordinary physical ontology, or even physically coarse-grained computational states, as our ontology.
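To spell out the distinction this last point turns on (the notation is mine and purely schematic): write the conscious and cognitive states with their transitions as one structure, and the physical or computational states with theirs as another,

$$(C, \rightarrow_C) \quad\text{and}\quad (P, \rightarrow_P).$$

An isomorphism is a bijection $f : C \to P$ such that $c \rightarrow_C c'$ exactly when $f(c) \rightarrow_P f(c')$; an identity is the much stronger claim that the $C$-states and the $P$-states are the very same entities, i.e. that $f$ is just the identity map on one set of things. The first is a correspondence between two ontologies; only the second collapses them into one, and it is the second that computational materialism needs.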
Empirically, we do not know in any very precise way what the brain locus of consciousness is. It’s sort of spread around; the brain contains multiple copies of data… One of the main reasons people presume that speculations about the physical correlate of consciousness being an “exact quantum-tensor-factor state machine” rather than a “coarse-grained synapse-and-ion-gate state machine” are bogus and irrelevant is the presumption that the physical locus of consciousness is already known to be something like the latter. But it isn’t; that is just a level of analysis that we happen to be comfortable with. The question is still empirically open, which is one reason why I hold out hope for a quantum monism, rather than a functionalist dualism, as the answer.
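And to be concrete about the contrast between those two candidate descriptions (again, illustrative notation only): a coarse-grained state machine is what you get by projecting microphysical states $x$ onto macrostates $X = \pi(x)$ (e.g. “this synapse is potentiated”, “this neuron fired within this time window”) and keeping only the induced dynamics on the macrostates; the macrostates are equivalence classes of microstates, descriptive conveniences rather than extra physical entities. An exact quantum-tensor-factor state machine would instead be the actual quantum state of one factor of the brain’s Hilbert space, evolving under the fundamental dynamics, with nothing averaged away.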