The first, as I think Yudkowsky states, is that qualia are not very well defined. Human introspection is unreliable in many cases, and we’re only consciously aware of a subset of processes in our brains. This means that the fact that zombies are conceivable doesn’t mean they are logically possible. When we examine what consciousness entails in terms of attention to mental processes, zombies might be logically impossible.
I think what you are saying is that if we possessed a detailed understanding of a mind, we might discover a reductive explanation of qualia. That may be so, but for reasons given in my article it is an unwarranted assumption. And if it is a warranted assumption, do you agree (as I demonstrated in my article) that Yudkowsky could, and therefore should, have refuted Chalmers in three sentences?
Second, one of the false intuitions humans have about consciousness goes something like this:
“If I draw up a schematic or simulation of my brain seeing a red field, I, personally, don’t then see what it is like to see the color red. Therefore, my schematic cannot be the whole story.”
Of course, this intuition is completely silly. A model of my brain doing something isn’t going to produce qualia in my own mind. Nevertheless, I think this intuition drives the Mary thought experiment. In the Mary experiment, Mary is omniscient about color and human vision and cognition, but has lived in a black and white environment all her life. When she sees red for the first time, she knows something more than she did before. (Though Dennett would say she now simply knows she can see the color red.)
This is an equivocation on the concept of a model. If you have a model in the form of a schematic on a piece of paper, then it is not going to produce in your brain the computations that we know with extreme likelihood (per Yudkowsky’s original argument) produce qualia. On the other hand, in the Mary thought experiment Mary has an incredibly large brain. Since she has by definition (yes indeed) a perfect “model” of a brain, her model is in fact the brain itself; therefore (with extreme likelihood) her mind runs the same computations and produces the same qualia.
I think that people get thrown by imagining Mary as a human female, rather than a being of immense size.
If we change the zombie thought experiment to suppose that the being in question is less than omniscient, then things become more complicated. But even an approximate model of a brain, computationally accurate to 10 decimal places rather than to infinity, will obviously produce qualia, and I submit that the uncertainty surrounding these qualia (in comparison to the original brain’s qualia) is no more than the uncertainty surrounding the physical state of the original brain – whereas in version 1 of Yudkowsky’s argument as I summarised it, there is additional (albeit minute) uncertainty about the existence of these qualia.
If you object that a superintelligence could possess a model without this being “inside its mind”, I think that is beside the point of the thought experiment. Insofar as the superintelligent observer knows about the physical state of a volume of the Universe, it is expected to have no more uncertainty about qualia experienced within that volume than exists due to limitations of its physical understanding. If it possesses a model that produces accurate predictions regarding the physical behaviours of the humans in this volume of the Universe, the model must itself be running the computations that occur inside the brains of those humans. If the superintelligence is letting the model do all the work, then it is the “model” that is experiencing qualia, since it is running the computations, and the superintelligence is a red herring, since it does not actually know anything about the physical state of said volume of the Universe. We have simply redefined the superintelligent observer to be some other process that runs the computations occurring inside human brains.
[deleted] comments on Repairing Yudkowsky’s anti-zombie argument