The first, as I think Yudkowsky states, is that qualia are not very well defined. Human introspection is unreliable in many cases, and we’re only consciously aware of a subset of processes in our brains. This means that the fact that zombies are conceivable doesn’t mean they are logically possible. When we examine what consciousness entails in terms of attention to mental processes, zombies might be logically impossible.
Second, one of the false intuitions humans have about consciousness goes something like this:
“If I draw up a schematic or simulation of my brain seeing a red field, I, personally, don’t then see what it is like to see the color red. Therefore, my schematic cannot be the whole story.”
Of course, this intuition is completely silly. A model of my brain doing something isn’t going to produce qualia in my own mind. Nevertheless, I think this intuition drives the Mary thought experiment. In the Mary experiment, Mary is omniscient about color and human vision and cognition, but has lived in a black-and-white environment all her life. When she sees red for the first time, she knows something more than she did before. (Though Dennett would say she now simply knows she can see the color red.)
As Bayesian reasoners, we have to ask ourselves, what might we expect if qualia do (versus do not) reduce to mechanistic processes?
If qualia do reduce to physics, then we would still find ourselves in the same situation as Mary. We don’t expect models of brains to produce qualia in the brains of the modeler. At the same time, there are good reasons to expect physical brains to have qualia, as Antonio Damasio has described in Self Comes to Mind. On the other hand, if qualia could have had any conceivable value, why should they have happened to be the qualia consistent with reduction? Why couldn’t seeing a red field produce qualia consistent with seeing elephants on Thursdays?
Another way of putting this is to say that reductive inference isn’t expected to create qualia in the reasoner. When I model water as H2O, my model doesn’t feel moist! Rather, the inference works because the model predicts facts about water that didn’t have to be that way if water didn’t reduce. Similarly, reduction of minds to brains need not produce actual qualia in theorists. The theorists need only show that the alternatives get crushed in Bayesian fashion. The Mary experiment was supposed to show that reductionism was impossible, but it fails because the apparent qualia gap would exist whether or not we are mechanical.
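To put a rough Bayesian sketch on this (the likelihoods here are illustrative assumptions, not measured quantities): let $R$ be the hypothesis that qualia reduce to mechanistic processes, and let “gap” stand for the observation that modeling a brain does not produce the modeled qualia in the modeler. Then

$$ \frac{P(R \mid \text{gap})}{P(\neg R \mid \text{gap})} = \frac{P(\text{gap} \mid R)}{P(\text{gap} \mid \neg R)} \cdot \frac{P(R)}{P(\neg R)} \approx 1 \cdot \frac{P(R)}{P(\neg R)}, $$

because the gap is expected whether or not reduction is true, so it cannot move the odds. What does move the odds is evidence of the Damasio sort: physical facts about brains that are far more probable if qualia reduce than if they could have taken any conceivable value.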
The first, as I think Yudkowsky states, is that qualia are not very well defined. Human introspection is unreliable in many cases, and we’re only consciously aware of a subset of processes in our brains. This means that the fact that zombies are conceivable doesn’t mean they are logically possible. When we examine what consciousness entails in terms of attention to mental processes, zombies might be logically impossible.
I think what you are saying is that if we possessed detailed understanding of a mind, we might discover a reductive explanation of qualia. That is true, but for reasons given in my article it is unwarranted to assume that we would do so. And if it is a warranted assumption, do you agree (as I demonstrated in my article) that Yudkowsky could and therefore should have chosen to refute Chalmers in three sentences?
Second, one of the false intuitions humans have about consciousness goes something like this:
“If I draw up a schematic or simulation of my brain seeing a red field, I, personally, don’t then see what it is like to see the color red. Therefore, my schematic cannot be the whole story.”
Of course, this intuition is completely silly. A model of my brain doing something isn’t going to produce qualia in my own mind. Nevertheless, I think this intuition drives the Mary thought experiment. In the Mary experiment, Mary is omniscient about color and human vision and cognition, but has lived in a black-and-white environment all her life. When she sees red for the first time, she knows something more than she did before. (Though Dennett would say she now simply knows she can see the color red.)
This is an equivocation on the concept of a model. If you have a simplified model in the form of a schematic on a piece of paper, then this is not going to produce in your brain the computations that we know with extreme likelihood (per Yudkowsky’s original argument) produce qualia. On the other hand, in the Mary thought experiment Mary has an incredibly large brain. Since she has by definition (yes indeed) a perfect “model” of a brain, her model is in fact the brain itself; therefore her mind runs the same computations and (with extreme likelihood) produces the same qualia.
I think that people get thrown by imagining Mary as a human female, rather than a being of immense size.
If we change the zombie thought experiment to suppose that the being in question is less than omniscient, then it becomes more complicated. But even an approximate model of a brain, computationally accurate to 10 decimal places rather than to infinity, will obviously produce qualia, and I submit that the uncertainty surrounding these qualia (in comparison to the original brain’s qualia) is no more than the uncertainty surrounding the physical state of the original brain – whereas in version 1 of Yudkowsky’s argument, as I summarised it, there is additional (albeit minute) uncertainty about the existence of these qualia.
If you object that a superintelligence could possess a model without this being “inside its mind”, I think that is beside the point of the thought experiment. Insofar as the superintelligent observer knows about the physical state of a volume of the Universe, it is expected to have no more uncertainty about qualia experienced within that volume than exists due to limitations of its physical understanding. If it possesses a model that produces accurate predictions regarding the physical behaviours of the humans in this volume of the Universe, the model must itself be running the computations that occur inside the brains of those humans. If the superintelligence is letting the model do all the work, then it is the “model” that is experiencing qualia, since it is running the computations, and the superintelligence is a red herring since it does not actually know anything about the physical state of said volume of the Universe. We have simply redefined the superintelligent observer to be some other process that runs the computations occurring inside human brains.
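One way to make this uncertainty claim precise (a rough formalization, assuming qualia are fully determined by the physical/computational state): if $Q = f(S)$ for physical state $S$, then for any body of knowledge $K$ the observer holds,

$$ H(Q \mid K) \;\le\; H(S \mid K), $$

since a function of $S$ can never be more uncertain than $S$ itself. Whatever residual uncertainty remains about the physical state bounds the uncertainty about the qualia; there is no extra, free-floating uncertainty about whether those qualia exist.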
I think what you are saying is that if we possessed detailed understanding of a mind, we might discover a reductive explanation of qualia.
I think I’m saying more than this. We might find that it is impossible for beings like ourselves to not have qualia. By analogy, consider the Goldbach conjecture. It’s possibly true but not provable with a finite proof. But it’s also possibly false, and possibly provably so with a finite proof. It’s conceivable that the Goldbach conjecture is true, and conceivable that it is false, but only one of the two cases is logically possible.
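For concreteness, a standard statement of the conjecture (used here only to illustrate the distinction):

$$ G \;\equiv\; \forall n \ge 2,\ \exists\, p, q \text{ prime such that } 2n = p + q. $$

Exactly one of $G$ and $\neg G$ holds of the natural numbers, so only one of them is logically possible; yet until a proof or a counterexample is found, we can coherently conceive either. Conceivability alone does not establish possibility, which is the same caution being applied to zombies.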
On the other hand, in the Mary thought experiment Mary has an incredibly large brain. Since she has by definition (yes indeed) a perfect “model” of a brain, her model is in fact the brain itself; therefore her mind runs the same computations and (with extreme likelihood) produces the same qualia.
I’m afraid I don’t see this. If qualia can be understood in terms of a model, then we can show that they reduce. But having a brain is not the same thing as having a model of a brain. Children have brains and can be certain of their qualia, but they have no model of their cognition.
The qualia that Chalmers is talking about are what distinguish first-person experience from third-person experience. Even knowing everything material about how you think and behave, I still don’t know what your first-person experience is like in terms of my own first-person experience. In fact, knowing another person’s first-person experience in terms of my own might not be possible because of the indeterminacy of translation. Even being in possession of a perfect model of your brain doesn’t obviously tell me exactly what your first-person experience is like. This is the puzzle that drives the zombie/anti-reductionist stance.
What I am saying beyond this is two-fold. First, even if the perfect model is of my own brain, there’s still a gap between my first-person experience and my “third-person” understanding of my own brain. In other words, finding a gap isn’t evidence for non-reductionism.
Second, the gap doesn’t invalidate the reductive inference if the reductive inference wouldn’t allow you to bridge the gap in any case.
How does this bear on the zombie argument?
Well, frankly, we’re a lot more confident in physicalism based on the evidence than we are in the lack of flaws in the zombie argument.
It’s certainly possible that we’re talking at cross purposes or that I don’t understand your claim. Are you making a distinction between first-person experience and third-person knowledge of brains? The typical philosopher’s response would be that a superintelligence has exactly the same problem as we do.