you are coming off (to me) as evasive on this subject, which is why I’m trying so hard to nail down your actual position.
My actual position is that green exists and that it does not exist in standard physical ontology. Particles in standard physical ontology are not green, and nothing made out of them would be green. Is that clear?
One of your propositions talks about explaining “whatever it is that causes people to talk about qualia”. The other talks about explaining “the experience of the color green”. In both cases we talk around the actual existence of colors in an unacceptable way—the first one focuses on words, the second one focuses on “experience of green”, which can provide an opportunity to deny that green per se exists, as seen in the quote produced by Automaton.
First requested clarification: is it your belief that green is an ontologically basic primitive? Or is green composed of other ontologically basic primitives that are outside the standard model?
Second requested clarification: Since my propositions are unacceptable (and that strikes me as a fair criticism, given your arguments), is there any experiment or argument or experience or anything that could convince you that green is not real in the way that you currently believe that it is?
is there any experiment or argument or experience or anything that could convince you that green is not real in the way that you currently believe that it is?
I’ll start with the second question. That would amount to convincing me that the experience which is currently happening is not currently happening, or that an experience which previously happened did not actually happen.
is it your belief that green is an ontologically basic primitive? Or is green composed of other ontologically basic primitives that are outside the standard model?
The analysis of color as abstractly three-dimensional (e.g. hue, saturation, brightness) seems phenomenologically accurate to me. So as a first approximation to a phenomenological ontology of color, I say that there are regions of color, present to a conscious subject, within which these attributes vary.
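Just to fix ideas, here is a minimal sketch of that first approximation in code. It is purely illustrative: the class names and the particular pattern of variation are my own inventions, and nothing phenomenological hangs on this encoding. It only shows the structure being claimed, a region within which the three color attributes vary from point to point:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ColorValue:
    """A point in the abstract three-dimensional color space."""
    hue: float         # position on the hue circle, in [0, 1)
    saturation: float  # in [0, 1]
    brightness: float  # in [0, 1]

@dataclass(frozen=True)
class ColoredRegion:
    """A region across which the three color attributes vary.

    The region is the bearer; the attributes are what varies within it.
    """
    width: int
    height: int

    def color_at(self, x: int, y: int) -> ColorValue:
        # Toy variation: hue sweeps horizontally, brightness vertically.
        return ColorValue(
            hue=x / self.width,
            saturation=1.0,
            brightness=y / self.height,
        )

region = ColoredRegion(width=100, height=100)
print(region.color_at(25, 50))  # ColorValue(hue=0.25, saturation=1.0, brightness=0.5)
```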
If we want to go further, we have to tackle further questions, like what ontological category color belongs to. Is it a property of the conscious subject? Is it a property of a part of the conscious subject? Should we regard sensory continua as part of the perceiving subject, or as separate entities to which the subject has a relation? Should we regard color as a property of a visual region, or should we look at some of the philosophical attempts to collapse the distinction between object and property, and view “colored region” as the basic entity?
What I believe is that consciousness is not a collection of spatial parts. It has parts, but the binding relations are some other sort of relation, like “co-presence to the perceiving subject” or “proximity in the sensory manifolds”. Since color only occurs within the context of some conscious experience, deciding whether or not it’s “ontologically basic” is going to require a lot more clarity about what’s foundational and what’s derivative in ontology, and about the ontological nature of the overall complex or unity (or whatever it is) that is the experience as a whole (which in turn is still only going to be part of, or an aspect of, the total state of a self).
Clearly this isn’t in the standard model of physics in any standard sense. But if the standard model can be expressed—for example, and only as an example, as an evolving tensor network of a particular sort—then it may be possible to specify an ontology beneath the tensor formalism, in which this complex ontological object, the conscious being, can be identified with one of the very high-dimensional tensor factors appearing in the theory.
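As an illustration of the bare linear-algebra notion of a “tensor factor” (only that; the library, the dimensions, and the labels below are my own choices, not part of the proposal): a composite state space factorizes as a tensor product, a product state can be recovered exactly from the composite, and factorizability is a special structural property that a generic vector in the same space lacks.

```python
import numpy as np

# A toy composite state: the tensor (Kronecker) product of a small
# 4-dimensional factor and a 1024-dimensional factor. The latter stands
# in, purely illustratively, for a "very high-dimensional tensor factor".
small = np.random.randn(4)
large = np.random.randn(1024)
composite = np.kron(small, large)   # lives in a 4 * 1024 = 4096-dim space
assert composite.shape == (4096,)

# Reshaping a product state into a 4 x 1024 matrix gives rank 1,
# whereas a generic ("entangled") vector in the same space does not.
print(np.linalg.matrix_rank(composite.reshape(4, 1024)))              # 1
print(np.linalg.matrix_rank(np.random.randn(4096).reshape(4, 1024)))  # 4
```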
The correct way to state the nature of color and its relationship to its context in consciousness is a very delicate question. We may need entirely new categories—by categories I mean classic ontological categories like substance, property, relation. Though there has already been, in the history of human thought, a lot of underrated conceptual invention which might turn out to be relevant.
That would amount to convincing me that the experience which is currently happening is not currently happening, or that an experience which previously happened did not actually happen.
Why? What’s wrong with an experience happening in another way than you imagine?
This more than anything cries “crackpot” to me: the uncompromising attitude that your opponents’ view must lead to absurdities. Like Christians arguing that without souls, atheists should go on killing sprees all the time.
What’s wrong with an experience happening in another way than you imagine?
You could be talking about ontology here, or you could be talking about phenomenology (and then there is the small overlap where we talk about phenomenological ontology, the ontology of appearances).
An example of an experience happening ontologically in a different way than you imagine, might be a schizophrenic who thinks voices are being beamed into their head by the CIA, when in fact they are an endogenously created hallucination.
An example of an experience happening phenomenologically in a different way than you imagine, might be a court witness who insists quite honestly that they saw the defendant driving the stolen car, but in fact they never really had that experience.
We are talking here about the nature of color experience. I interpret WrongBot to be making a phenomenological claim, that there aren’t actually colors even at the level of experience. Possibly you think the argument is about the causes or “underlying nature” of color experience, e.g. the idea that a color perception is really a neural firing pattern.
If the argument is solely at the level of phenomenology, then there is no need to take seriously the idea that the colors aren’t there. This isn’t a judgment call about an elusive distant event. Colors are right in front of me, every second of my waking life; it would be a sort of madness to deny that.
If the argument is at the level of ontology, then I presume that color perception does indeed have something to do with neural activity. But the colors themselves cannot be identified with movements of ions through neural membranes, or whatever the neurophysical correlate of color is supposed to be, because we already have a physical ontology and it doesn’t contain any such extra property. So either we head in the direction of functionalist dualism, like David Chalmers, or we look for an alternative. My alternative is a monism in which the “Cartesian theater” does exist and can be identified with a single large quantum tensor factor somewhere in the brain. I am not dogmatic about this, and there are surely other possibilities, but I do insist that colors exist and that they cannot be monistically identified with collective motions of ions or averages of neural firing rates.
(ETA: Incidentally, I don’t deny the existence and relevance of classical encodings of color properties in the nervous system. It’s just that, on a monistic quantum theory of mind, this isn’t the physical correlate of consciousness; it’s just part of the causal path leading towards the Cartesian theater, where the experience itself is located.)
Do we need a separate understanding for the feeling you get when you see a loved one? Is there a thing separate from the neurons and from the particles of scent that constitutes the true REAL smell? What about the effects of caffeine? There is nothing inherent to that molecule that equates to “alertness” any more than there are “green” atoms. Do you think there is a separate “alertness” mind-object that interacts with a nonphysical part of coffee? Do you think these things are also unable to be explained by neurons, or do you think colors are different?
Colors are just the most vivid example. Smells and feelings are definitely part of consciousness—that is, part of the same phenomenal gestalt as color—so they are definitely on the same ontological level. A few comments up the thread, I talked about color as a three-dimensional property associated with visual regions. Smell is similarly a sensory quale embedded in a certain way in the overall multimodal sensory gestalt. Feelings are even harder to pin down; they seem to be a complex of bodily sensation, sensations called “moods” that aren’t phenomenally associated with a body region, and even some element of willed intentionality. Alertness itself isn’t a quale; it’s a condition of hyperattentiveness. But it is possible to notice that you are attending intently to things, so alertness is a possible predicate of a reflective judgment made about oneself on the basis of phenomenal evidence. In other words, it’s a conceptual posit made as part of a higher-order intentional state.
These discussions are bringing back to me the days when I made a serious attempt to develop a phenomenological ontology. All the zeroth-order objects of an experience were supposed to be part of a “total instantaneous phenomenal state of affairs”, and then you had higher-order reflective judgments made on top of that, which themselves could become parts of still higher-order judgments. Cognitive scientists and AI theorists do talk about intentionality, but only functionally, not phenomenologically. Even philosophers of consciousness sometimes hesitate to say that intentional states are part of consciousness—they’re happier to focus on sensation, because it’s so obvious, not just that it’s there, but that you know it’s there.
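A minimal sketch of that layering (the types and names below are mine, purely to fix ideas, and carry no theoretical weight): zeroth-order phenomenal items, plus judgments that can take either those items or other judgments as their objects, giving arbitrarily high orders.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class PhenomenalItem:
    """A zeroth-order object of experience, e.g. a colored region."""
    description: str

@dataclass(frozen=True)
class Judgment:
    """A reflective judgment whose object is either a zeroth-order
    item or another judgment, allowing arbitrarily high orders."""
    content: str
    about: Union[PhenomenalItem, "Judgment"]

    def order(self) -> int:
        # A judgment about a phenomenal item is first-order; a judgment
        # about an nth-order judgment is (n+1)th-order.
        if isinstance(self.about, PhenomenalItem):
            return 1
        return self.about.order() + 1

green_patch = PhenomenalItem("a green region in the visual field")
first = Judgment("I am seeing green", about=green_patch)
second = Judgment("I notice that I judged I was seeing green", about=first)
print(second.order())  # 2
```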
However, it’s also clear, not only that we think, but that we know we are thinking—even if this awareness is partly mediated by a perceptual presentation to oneself of a stream of symbols encoding the thought, such as a subvocalization—and so I definitely say intentionality is part of consciousness, not just sensation. Another way to see this is to notice that we see things as something. There’s a “semantics” to perception, the conceptual ingredient in the phenomenal gestalt. Therefore, it’s not enough to characterize conscious states as simply a blob of sensory qualia—colors varying across the visual field, other sense-data varying across the other sensory modalities. The whole thing is infused, even at the level of consciousness, with interpretation and conceptual content. How to express this properly—how to state accurately the ontology of this conceptual infusion into the phenomenal—is another delicate issue, though plenty has been written about it, for example in Kant and Husserl.
So everything that is a part of experience is part of the problem. Experiences have structure (for example, the planar structure of a depthless visual field); concepts have logical structure and conditions of application; thoughts also have a combinatorial structure. The key to computational materialism is a structural and causal isomorphism between the structure of conscious and cognitive states, and the structure of physical and computational states. The problem is that the isomorphism can’t be an identity if we use ordinary physical ontology or even physically coarse-grained computational states in any ontology.
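To pin down what that isomorphism claim amounts to, here is a deliberately crude finite sketch (both sides reduced to three-state machines of my own devising, far cruder than anything above): a bijection h between the two state spaces that commutes with both dynamics, i.e. h(f(s)) == g(h(s)) for every state s.

```python
# Two toy state machines: f steps "conscious/cognitive" states, g steps
# "physical/computational" states (all names invented for illustration).
f = {"seeing-green": "judging-green",
     "judging-green": "reporting-green",
     "reporting-green": "seeing-green"}

g = {"pattern-A": "pattern-B",
     "pattern-B": "pattern-C",
     "pattern-C": "pattern-A"}

# Candidate structure-preserving map from mental states to physical states.
h = {"seeing-green": "pattern-A",
     "judging-green": "pattern-B",
     "reporting-green": "pattern-C"}

def is_isomorphism(h, f, g):
    """h is an isomorphism if it is injective (here, onto g's state
    space) and commutes with the dynamics: h(f(s)) == g(h(s))."""
    injective = len(set(h.values())) == len(h)
    commutes = all(h[f[s]] == g[h[s]] for s in h)
    return injective and commutes

print(is_isomorphism(h, f, g))  # True: the two structures match exactly.
# Matching structure is weaker than identity, which is the point made
# above: the isomorphism by itself does not make the two sides identical.
```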
Empirically, we do not know in any very precise way what the brain locus of consciousness is. It’s sort of spread around; the brain contains multiple copies of data… One of the strong reasons speculations that the physical correlate of consciousness is an “exact quantum-tensor-factor state machine”, rather than a “coarse-grained synapse-and-ion-gate state machine”, get dismissed as bogus and irrelevant is the presumption that the physical locus of consciousness is already known to be something like the latter. But it isn’t; that is just a level of analysis that we happen to be comfortable with. The question is still empirically open, which is one reason I hold out hope for a quantum monism, rather than a functionalist dualism, being the answer.
Alright, I got my answer, I’m done. So far as I can tell, you’re smart, willing to put the work in, and your intentions are good. It’s really too bad to hear that you’ve accepted the crackpot offer. I wish it were otherwise.