I disagree that colors do not exist in standard physical ontology, and find the claim rather absurd on its face. (I’m not entirely sure what ontology is, but I think I’ve picked up the meaning from context.)

See:

Brain Breakthrough! It’s Made of Neurons!

Hand vs. Fingers

Angry Atoms

I don’t know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that’s where it comes from.
An ontology is a theory about what it is that exists. I have to speak of “physical ontology” and not just of physics, because so many physicists take an anti-ontological or positivistic attitude, and say that physical theory just has to produce numbers which match the numbers coming from experiment; it doesn’t have to be a theory about what it is that exists. And by standard physical ontology I mean one which is based on what Galileo called primary properties, possibly with some admixture of new concepts from contemporary mathematics, but definitely excluding the so-called secondary properties.
So a standard physical ontology may include time, space, and objects in space, and the objects will have size, shape, and location, and then they may have a variety of abstract quantitative properties on top of that, but they don’t have color, sound, or any of those “feels” which get filed under qualia.
I don’t know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that’s where it comes from.
Asking “where is the experienced color in the physical brain?” shows the hidden problem here. We know from experience that reality includes things that are actually green, namely certain parts of experiences. If we insist that everything is physical, then that means that experiences and their parts are also physical entities of some kind. If the actually green part of an experience is a physical entity, then there must be a physical entity which is actually green.
For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location—the property of always being at some point in space—and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn’t actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles—e.g. “location of center of mass” or “having a part at location x0 and another part at x1”. We can even extend to counterfactual properties, e.g. “the property of flying apart if a heavy third particle were to fly past on a certain trajectory”.
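To make the flavor of these constructed properties concrete, here they are in symbols (a sketch in my own notation; the thread itself gives none):

```latex
% Illustrative only (notation mine): two-particle properties assembled
% from single-particle primaries m_i (mass) and x_i (location).
\[
  x_{\mathrm{cm}} = \frac{m_1 x_1 + m_2 x_2}{m_1 + m_2}
  \quad \text{(a quantitative combination: location of center of mass)}
\]
\[
  P(x_0, x_1) \equiv \mathrm{loc}(p_1) = x_0 \,\wedge\, \mathrm{loc}(p_2) = x_1
  \quad \text{(a logical conjunction: a part at } x_0 \text{, another at } x_1\text{)}
\]
```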
To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that’s a little absurd. It is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue. The properties that are intrinsically available in standard physical ontology are much like arithmetic properties, but with a few additional “physical” predicates that can also enter into the definition.
I presume that most modern people don’t consider linguistic behaviorism an adequate account of anything to do with consciousness. Linguistic behaviorism is where you say there are no “minds” or “psychological states”, there are just bodies that speak. It’s the classic case of accounting for experience by only accounting for what people say about experience.
Cognitive theories of consciousness are considered an advance on this because they introduce a causal model with highly structured internal states which have a structural similarity to conscious states. We see the capacity of neurons to encode information e.g. in spiking rates, we see that there are regions of cortex to which visual input is mapped point by point, and so we say, maybe the visual experience of a field of color is the same thing as a sheet of visual neurons spiking at different rates.
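As a toy illustration of that picture (a minimal sketch; the array sizes, the rate code, and every name below are my stand-ins, not a model anyone in the thread proposed):

```python
import numpy as np

# Toy retinotopic rate code: visual input is mapped point by point onto
# a sheet of neurons, and each neuron's spiking rate encodes the local
# stimulus intensity.
rng = np.random.default_rng(0)
visual_field = rng.random((16, 16))       # "green intensity" at each point

MAX_RATE_HZ = 100.0                       # assumed peak firing rate
spike_rates = MAX_RATE_HZ * visual_field  # one rate-coded neuron per point

# The cognitive theory sketched above would identify the experience of
# this field of color with something like the spike_rates array.
print(spike_rates.shape)  # (16, 16)
```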
But I claim they can’t be the same thing, because of the ontological mismatch. A visual experience contains actual green; a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior whose structural and causal role is rather close to the one played by actual greenness, as inferred from psychology and phenomenology.
Here I say there are two choices. Either you say that on top of the primary properties out of which standard physical ontology is built, there are secondary properties, like actual green, which are the building blocks of conscious experiences, and you say that the experiences dualistically accompany the causally isomorphic physical processes. Or you say that somewhere there is a physical object which is genuinely identical to the conscious experience—it is the experience—and you say that these neuronal sheets which behave like the parts of an experience still aren’t the thing itself, they are just another stage in the processing of input (think of the many anatomical stages of the pathways that begin at the optic nerve and lead onward into the brain).
There are two peculiarities to this second option. First, haven’t we already argued that the base properties available in physical ontology, considered either singly or in conjunction, just can’t be identified with the constituent properties of conscious states? How does positing this new object help, if it is indeed a physical object? And second, doesn’t it sound like a soul—something that’s not a network of neurons, but a single thing; the single place where the whole experience is localized?
I propose to deal with the second peculiarity by employing a quantum ontology in which entanglement is seen as creating complex single objects (and not just correlated behaviors in several objects which remain ontologically distinct), and with the first peculiarity by saying that, yes, the properties which make up a conscious state are elementary physical properties, and noting that we know nothing about the intrinsic character of elementary physical properties, only their causal and structural relations to each other (so there’s no reason why the elementary internal properties of an entangled system can’t literally and directly be the qualia). I take the structure of a conscious state and say, that is the structure of some complex but elementary entity—not the structure of a collective behavior (as when we talk about the state of a neuron as “firing” or “not firing”, a description which passes over the intricate microscopic detail of the exact detailed state).
The rationale of this move is that identifying the conscious state machine with a state machine based on averaged collective behaviors is really what leads to dualism. If we are instead dealing with the states of an entity which is complex but “fundamental”, in the sense of being defined in terms of the bottom level of physical description (e.g. the Hilbert spaces of these entangled systems), then it’s not a virtual machine.
Maybe that’s the key concept for getting this across to computer scientists: the idea is that consciousness is not a virtual state machine, it’s a state machine at the “bottom level of implementation”. If consciousness is a virtual state machine—so I argue—then you have dualism, because the states of the state machine of consciousness have to have a reality which the states of a virtual machine don’t intrinsically have.
If you are just making a causal model of something, there’s no necessity for the states of your model to correspond to anything more than averaged behaviors and averaged properties of the real system you’re modeling. But consciousness isn’t just a model or a posited concept, it is a thing in itself, a definite reality. States of consciousness must exist in the true ontology, they can’t just be heuristic approximate concepts. So the choice comes down to: conscious states are dualistically correlated with the states of a virtual state machine, or conscious states are the physical states of some complex but elementary physical entity. I take the latter option and posit that it is some entangled subsystem of the brain with a large but finite number of elementary degrees of freedom. This would be the real physical locus of consciousness, the self, and you; it’s the “Cartesian theater” where diverse sensory information all shows up within the same conscious experience, and it is the locus of conscious agency, the internally generated aspect of its state transitions being what we experience as will.
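For the computer scientists just invoked, here is a minimal sketch of the coarse-graining point, under my own toy assumptions: a “virtual” state such as “firing” is an equivalence class of many distinct bottom-level states, which is exactly the gap the argument above is pointing at.

```python
from collections import Counter
from itertools import product

# A neuron caricatured as 4 binary micro-units: its bottom-level state
# is the exact 4-tuple; its "virtual" state is an averaged behavior.
micro_states = list(product([0, 1], repeat=4))  # all 16 bottom-level states

def virtual_state(micro):
    # Coarse-graining: call the neuron "firing" iff most units are on.
    return "firing" if sum(micro) > 2 else "not firing"

# Many distinct micro-states collapse into each virtual state, which is
# why a virtual state is a description of the system rather than a
# further entity at the bottom level of the ontology.
print(Counter(virtual_state(m) for m in micro_states))
# Counter({'not firing': 11, 'firing': 5})
```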
(That is, the experience of willing is awareness of a certain type of causality taking place. I’m not saying that the will is a quale; the will is just the self in its causal role, and there are “qualia of the will” which constitute the experience of having a will, and they result from reflective awareness of the self’s causal role and causal power… Or at least, these are my private speculations.)
I’ll guess that my prose got a little difficult again towards the end, but that’s how it will be when we try to discuss consciousness in itself as an ontological entity. But hopefully the road towards the dilemma between dualism and quantum monism is a little clearer now.
For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location—the property of always being at some point in space—and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn’t actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles—e.g. “location of center of mass” or “having a part at location x0 and another part at x1”. We can even extend to counterfactual properties, e.g. “the property of flying apart if a heavy third particle were to fly past on a certain trajectory”.
To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that’s a little absurd.
Well, it sounds quite reasonable to me to say that if you arrange elementary particles in a certain, complicated way, you get an instance of something that experiences greenness. To me, this is no different than saying that if you arrange particles in a certain, complicated way, you get a diamond. We just happen to know a lot more about what particle configurations create “diamondness” than “experience of green”ness. (As a matter of fact, we know exactly how to define “diamondness” as a function of particle type and arrangement.)
So, at this point I apply the Socratic method...
Are we in agreement that a “diamond” is a thing that exists? (My answer: Yes—we can recognize diamonds when we see them.)
Is the property “is a diamond” one that can be defined in terms of “quantitative and logical conjunctions of the properties of individual particles”? (My answer: Yes, because we know that diamonds are made of carbon atoms arranged in a specific pattern.)
Hopefully we agree on these answers! And if we do, can you tell me what the difference is between the predicate “is experiencing greenness” and “is a diamond” such that we can tell, in the real world, if something is a diamond by looking at the particles that make it up, and that it is impossible, in principle, to do the same for “is experiencing greenness”?
I think your mistake is that you underestimate the scope of what “quantitative and logical conjunctions of the properties of individual particles” can actually describe: literally, anything at all that can be described with mathematics, assuming you’re allowing all the standard operators of predicate logic and of arithmetic. And that would include the function that takes an “arrangement of particles” as input and returns “true” if the arrangement of particles included a brain that was experiencing green and “false” otherwise—even though we humans don’t actually know what that function is!
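For instance, here is a deliberately crude sketch of such a function for “diamondness” (my simplifications throughout; see the comments):

```python
import numpy as np

# A deliberately crude "diamondness" function of particle type and
# arrangement. The pure-carbon test, bond length, tolerance, and the
# exactly-four-neighbors criterion are my simplifications; a real test
# would handle surfaces, defects, and the actual lattice geometry.
BOND_ANGSTROM = 1.54   # nearest-neighbor C-C distance in diamond
TOLERANCE = 0.1

def is_diamond(elements, positions):
    positions = np.asarray(positions, dtype=float)
    if any(e != "C" for e in elements):
        return False                    # diamond is pure carbon
    for p in positions:
        d = np.linalg.norm(positions - p, axis=1)
        neighbors = np.sum(np.abs(d - BOND_ANGSTROM) < TOLERANCE)
        if neighbors != 4:              # tetrahedral coordination
            return False
    return True
```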
But I claim they can’t be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.
To sum up, I assert that you are mistaken when you say that there is an ontological mismatch—the sheet of neurons does indeed contain the experience of green. You are literally making the exact same error that Eliezer’s strawman makes in Angry Atoms.

And if you don’t know how to create greenness, it is an act of faith on your part that it is done by physics as you understand it at all.

Perhaps, but physics has had a pretty good run so far...

The key phrase is “as you understand it”. 19th century physics doesn’t explain whatever device you wrote that on.
By talking about “experience of green”, “experiencing greenness”, etc., you get to dodge the question of whether greenness itself is there or not. Do you agree that there is something in reality that is actually green, namely, certain parts of experiences? Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?
Do you agree that there is something in reality that is actually green, namely, certain parts of experiences?
No. Why do you believe there is? Because you seem to experience green? Since greenness is ontologically anomalous, what reason is there to think the experience isn’t illusion?
Well, I’m used to using the word “green” to describe objects that reflect certain wavelengths of light (which are interpreted in a certain way by the human visual system) and not experiences. As in, “This apple is green” or “I see something that looks green.” Which is why I used the expression “experience of greenness”, because that’s the best translation I can think of for what you’re saying into CronoDAS-English.
So the question
Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?
seems like a fallacy of equivocation to me, or possibly a fallacy of composition. It feels odd to me to say that a brain is green—after all, a brain doesn’t look green when you’re cutting open a skull to see what’s inside of it. If “green” in Mitchell-Porter-English means the same thing as “experiences the sensation of greenness” does in CronoDAS-English, then yes, I’ll definitely say that the set of particular physical entities in question possesses the property “green”, even though the same can’t be said of the individual point-particles which make up that collection.
(This kind of word-wrangling is another reason why I tried to stay out of this discussion in the past… trying to make sure we mean the same thing when we talk to each other can take a lot of effort.)
I’m used to using the word “green” to describe objects that reflect certain wavelengths of light (which are interpreted in a certain way by the human visual system)
But you would have been using the word “green” before you knew about wavelengths of light, or had the idea that your experiences were somehow the product of your brain. Green originally denotes a very basic phenomenon, a type of color. As a child you may have been a “naive realist”, thinking that what you see is the world itself. Now you think of your experience as something in your brain, with causes outside the brain. But the experience itself has not changed. In particular, green things are still actually green, even if they are now understood as “part of an experience that is inside one’s brain” rather than “part of the world outside one’s body”.
“Interpretation” is too abstract a word to describe something as concrete as color. It provides yet another way to dodge the reality of color itself. You don’t say that the act of falling over is an “interpretation” of being in the Earth’s gravitational field. The green experiences are green, they’re not just “interpreted as green”.
It feels odd to me to say that a brain is green—after all, a brain doesn’t look green when you’re cutting open a skull to see what’s inside of it.
Since we are assuming that our experiences are parts of our brains, this would be the wrong way to think about it anyway. Your experience of anything, including cutting open someone else’s skull, is supposed to be an object inside your own brain, and any properties of that experience are properties of part of your own brain. You won’t see the color in another brain by looking at it. But somehow, you see the color in your own brain by being it.
If “green” in Mitchell-Porter-English means the same thing as “experiences the sensation of greenness” does in CronoDAS-English
The latter expression again pushes away the real issue—is there such a thing as actual greenness or not. We earlier had some quotes from an Australian philosopher, JJC Smart, who would say there are “experiences of green”, but there’s no actual green. He says this because he’s a materialist, so he believes that all there is in reality is just neurons doing their thing, and he knows that standard physical ontology doesn’t contain anything like actual green. He has to deny the reality of one of the most obviously real things there is, but, at least he takes a stand.
On the other hand, someone else who talks about “experiences of green” might decide that what they mean is exactly the same thing as they would have meant by green, when they were a child and a direct realist. Talking about experience in this case is just a way to emphasize the adult understanding of what it is that one directly experiences—parts of your own brain, rather than objects outside it. But independent of this attitude, you still face a choice: will you say that yes, green is there in the same way it ever was, or will you say that it just can’t be, because physics is true and physics contains no such thing as “actual green”?
Lot of words there… I hope I’m understanding better.
But independent of this attitude, you still face a choice: will you say that yes, green is there in the same way it ever was, or will you say that it just can’t be, because physics is true and physics contains no such thing as “actual green”?
This is what I’ve been trying to say: “Green” exists, and “green” is also present (indirectly) in physics. (I think.)
What does “present indirectly” mean?

Not one of the fundamental properties, but definable in terms of them.
In other words, present in the same way “diamond” is—there’s no property “green” in the fundamental equations of physics, but it “emerges” from them, or can (in principle) be defined in terms of them. (I’m embarrassed to use the word “emergent”, but, well...)
To use an analogy, there’s no mention of “even numbers” in the axioms of Peano Arithmetic or in first-order logic, but S(S(0)) is still even; evenness is present indirectly within Peano Arithmetic. You can talk about even numbers within Peano Arithmetic by writing a formula that is true of all even numbers and false for all other numbers, and using that as your “definition” of even. (It would be something like “∃y (S(S(0)) · y = x)”.) If I understand correctly, “standard physical ontology” is also a formal system, so the exact same trick should work for talking about concepts such as “diamond” or “green”—we just don’t happen to know (yet) how to define “green” the same way we can define “diamond” or “even”, but I’m pretty sure that, in principle, there is a way to do it.
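In display form, with a concrete witness added (standard PA notation; the worked case is my addition):

```latex
% Evenness defined inside PA (S is successor, \cdot is multiplication).
\[
  \mathrm{Even}(x) \;\equiv\; \exists y \,\big( S(S(0)) \cdot y = x \big)
\]
% E.g. Even(S(S(0))) holds with witness y = S(0), since 2 * 1 = 2:
\[
  S(S(0)) \cdot S(0) = S(S(0))
\]
```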
(I hope that made sense...)

Here I fall back on my earlier statement that this

is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue.
Let’s compare the plausibility of getting colors out of combinations of the elementary properties in standard physical ontology, and the plausibility of getting colors out of Peano Arithmetic. I think the two cases are quite similar. In both cases you have an infinite tower of increasingly complex conjunctive (etc) properties that can be defined in terms of an ontological base, but getting to color just from arithmetic or just from points arranged in space is asking for magic. (Whereas getting a diamond from points arranged in space is not problematic.)
There are quantifiable things you can say about subjective color, for example its three-dimensionality (hue, saturation, brightness). The color state of a visual region can be represented by a mapping from the region (as a two-dimensional set of points) into three-dimensional color space. So there ought to be a sense in which the actually colored parts of experience are instances of certain maps which are roughly of the form R^2 → R^3. (To be more precise, the domain and range will be certain subsets of R^2 and R^3 respectively.) But this doesn’t mean that a color experience can be identified with this mathematical object, or with a structurally isomorphic computational state.
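In symbols, the representation just described might look like this (the particular sets V and Q are my placeholders):

```latex
% A color experience over a visual region V, represented pointwise
% in hue-saturation-brightness coordinates.
\[
  c : V \to Q, \qquad V \subseteq \mathbb{R}^2, \quad Q \subseteq \mathbb{R}^3,
  \qquad c(p) = \big(h(p),\, s(p),\, b(p)\big)
\]
```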
You could say that my “methodology”, in attempting to construct a physical ontology that contains consciousness, is to discover as much as I can about the structure and constituent relations of a conscious experience, and then to insist that these are realized in the states of a physically elementary “state machine” rather than a virtual machine, because that allows me to be a realist about the “parts” of consciousness, and their properties.
Let’s compare the plausibility of getting colors out of combinations of the elementary properties in standard physical ontology, and the plausibility of getting colors out of Peano Arithmetic.
In one sense, there already is a demonstration that you can get colors from the combinations of the elementary properties in standard physical ontology: you can specify a brain in standard physical ontology. And, heck, maybe you can get colors out of Peano Arithmetic, too! ;)
At this point we have at least identified what we disagree on. I suspect that there is nothing more we can say about the topic that will affect each other’s opinion, so I’m going to withdraw from the discussion.
Now it is.