Repairing Yudkowsky’s anti-zombie argument
Eliezer Yudkowsky argues with David Chalmers here on the subject of “philosophical zombies”. I submit that, although Yudkowsky’s position on this question is correct, his argument fails to establish what he claims it does.
To summarise Yudkowsky and Chalmers’s argument:
1. Both Yudkowsky and Chalmers agree that humans possess “qualia”.
2. Chalmers argues that a superintelligent being which somehow knew the positions of all particles in a large region of the Universe would need to be told as an additional fact that any humans (or other minds possessing qualia) in this region of space possess qualia – it could not deduce this from mere perfect physical knowledge of their constituent particles. Therefore, qualia are in some sense extra-physical.
3. Yudkowsky argues that such a being would notice that humans discuss at length the fact that they possess qualia, and their internal narratives also represent this fact. It is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia. Therefore an omniscient being would conclude that it is extremely likely that humans possess qualia. Therefore, qualia are not extra-physical.
My objection to Yudkowsky’s argument is that it is not enough merely to demonstrate that the omniscient being would find it extremely likely that humans possess qualia. Probability is a state of partial information; therefore unless the being is certain that humans possess qualia, it is not in fact omniscient regarding this region of the Universe despite the fact that it is postulated to possess perfect physical knowledge about it.
I expect that some Lesswrongians may object to this on account of the fact that 1 and 0 are not probabilities. However, the thought experiment postulates an omniscient being that possesses perfect knowledge about the physical state of a region of the Universe, therefore in the thought experiment absolute certainty is defined to be possible. If this is objectionable*, then the entire argument is badly posed, including Yudkowsky’s contribution.
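For readers unfamiliar with the “1 and 0 are not probabilities” point, it can be put in symbols (a sketch of the usual log-odds argument, not anything from Yudkowsky’s post): Bayesian evidence shifts log-odds by finite amounts, but certainty corresponds to infinite log-odds, so no finite amount of evidence reaches probability 0 or 1.

```latex
% Log-odds of a probability p, and its behaviour at the extremes:
\[
  \operatorname{logodds}(p) = \log\frac{p}{1-p}, \qquad
  \lim_{p \to 1^{-}} \operatorname{logodds}(p) = +\infty, \qquad
  \lim_{p \to 0^{+}} \operatorname{logodds}(p) = -\infty.
\]
% Bayes' theorem in odds form adds a finite likelihood-ratio term:
\[
  \log\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \log\frac{P(H)}{P(\neg H)} + \log\frac{P(E \mid H)}{P(E \mid \neg H)}.
\]
```

The thought experiment simply stipulates that the being starts at the infinite end of this scale with respect to physical facts; the question is whether its knowledge of qualia sits at the same end.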
Additionally (although superfluously), it seems that the thought experiment should generalise to any possible configuration of particles in a region of the Universe, since we are trying to prove that qualia are not extra-physical under any circumstances, and a proof of this should not rely on contingent features of the qualia-experiencing beings under consideration. Therefore let us suppose that, due to a miracle of quantum tunnelling, the only qualia-experiencing being in the region of space in question is a newborn human infant (I presume wide agreement that such an infant does in fact possess qualia). Is it still the case that the omniscient being can deduce, from the infant’s mental behaviours, the extreme likelihood of its possessing qualia? After all, it doesn’t write philosophy papers and may not even have an internal narrative.
My own solution to the zombie problem is that it is a restatement of the Mary’s room problem. In the zombie thought experiment we are dealing with a mind that has perfect knowledge of a physical human brain (or any brain that produces qualia). Since the Universe’s computational accuracy appears to be infinite, in order for the mind to be omniscient about a human brain it must be running the human brain’s quark-level computations within its own mind; any approximate computation would yield imperfect predictions. In the act of running this computation, the brain’s qualia are generated, if (as we have assumed) the brain in question experiences qualia. Therefore the omniscient mind is fully aware of all of the qualia that are experienced within the volume of the Universe about which it has perfect knowledge.
It is legitimate for us to believe with extremely high probability that the computations occurring in a brain are causally related to the qualia that it produces, for the reasons that Yudkowsky has given; therefore, it is fair for us to state (as I did in the preceding paragraph) with extremely high probability that when it runs a brain’s computations the omniscient mind will experience the same qualia that the brain’s original owner does. The distinction is that we can be extremely confident (by Yudkowsky’s reasoning) that the omniscient mind will itself be certain (by my reasoning) about the existence of qualia within the volume of the Universe about which it has perfect knowledge – whereas if one is trying to prove that qualia are not extra-physical, it is insufficient to argue (as Yudkowsky did) that the omniscient mind will itself only be extremely confident about the existence of qualia within the volume of the Universe about which it has perfect knowledge.
There is an objection to the above argument that I would expect readers to suggest. The objection is that I have misinterpreted Yudkowsky’s argument, and in fact my summary of Yudkowsky and Chalmers’s argument should read as follows:
3. Yudkowsky argues that such a being would notice that humans discuss at length the fact that they possess qualia, and their internal narratives also represent this fact. It would use its perfect knowledge of their mental processes to investigate the chain of reasoning that leads humans to refer to themselves as being “aware” and possessing “qualia”. It would thereby discover the cause of their discussing these things, which is extremely likely to provide it with a reduction of the qualia concept. Therefore the omniscient mind will with extremely high probability obtain for itself perfect certainty that the qualia-experiencing beings within the region of the Universe about which it has perfect knowledge do in fact experience qualia.
If this argument were valid, it would have exactly the same outcome as my own conclusion (and a different outcome to Yudkowsky’s argument as I summarised it earlier): we can be extremely confident that the omniscient mind will itself be certain about the existence of qualia within the volume of the Universe about which it has perfect knowledge (rather than: the omniscient mind will be extremely confident about the existence of qualia within the volume of the Universe about which it has perfect knowledge).
I don’t interpret Yudkowsky’s argument in this way, but since it is a closely related meaning I cannot be very sure that he does not intend the above. In any case, I believe that this argument begs the question.
The revised argument assumes that the qualia concept is almost certainly reducible. However, doubt regarding this appears to be the entire motivation for Yudkowsky and Chalmers’s debate. If Yudkowsky were to regard it as a given that qualia are reducible, then why not replace his 6,600-word post with the following: the thesis of reductionism is proven beyond reasonable doubt to be true. Therefore qualia, like other phenomena that we observe, must be reducible to the level of quarks. Therefore if a mind possesses perfect knowledge about a region of the Universe at the level of quarks, it necessarily understands qualia and recognises their existence, because qualia are merely higher-order phenomena composed of quarks. QED.
Since Yudkowsky did not do that, I presume that he does not believe that the reducibility of qualia can be taken for granted. Regardless of what he believes, I would nonetheless criticise the revised argument on the basis that qualia should not be assumed to be reducible.
An observation about qualia: they do not appear to be susceptible to definition. All of the definitions of qualia that I have encountered have been either nonsensical, or liable to be interpreted (à la Dennett) as mere computational properties of the brain. And definition, done properly, is in fact the same act as philosophical reduction.
For example, qualia might be defined as “subjective qualities of experience”. Subjective in this context means the same thing as incommunicable. A definition of something as essentially incommunicable is tantamount to defining it as indefinable, which is nonsense. “The inner listener” and similar attempted definitions are interpreted by Dennett in the sense of the brain’s having a parallel computational structure – not the intended referent at all. And the “mysterious redness of red” could either be interpreted as a reference to the nature of redness as a derived or computational property of objects rather than a fundamental property, or as gibberish (since there are no inherently mysterious phenomena).
Nonetheless, since (I believe with extremely high probability) we all possess qualia, we are able to figure out the intended referent of words like “qualia” and “consciousness” (at least before the word consciousness was philosophically co-opted by Dennett). We are aware of the existence of one indefinable concept, and we can shore up our mutual recognition of terms that refer to it by discussing the apparent relationship between the properties of this concept and the state of our physical brains, which seem perfectly reducible to quarks and susceptible to definition.
In this post, Yudkowsky writes:
Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747.
What experimental observations would you expect to make, if you found yourself in such a universe?
If you can’t come up with a good answer to that, it’s not observation that’s ruling out “non-reductionist” beliefs, but a priori logical incoherence. If you can’t say what predictions the “non-reductionist” model makes, how can you say that experimental evidence rules it out?
My thesis is that non-reductionism is a confusion; and once you realize that an idea is a confusion, it becomes a tad difficult to envision what the universe would look like if the confusion were true.
May I offer, as a suggestion of what an authentic irreducible concept looks like, qualia?
There are two reasons why it is rational to hold a completely reductionist view of the Universe. Firstly, reductionism is a historically successful means of explaining things and solving problems, and always defeats non-reductionism. Secondly, non-reductionism is a priori logically incoherent in the sense that Yudkowsky describes.
However, what if the qualia concept is actually a counter-example to both of these? Of course it’s far too early to conclude that reductionist means have failed to explain qualia – we need to learn more about the brain first. But is it not also reasonable to suggest that our experience of qualia is exactly what we would expect the universe to look like, if irreducible phenomena were to exist? We almost all agree that qualia exist, and in fact my belief in the existence of qualia is the last belief of which I can imagine anyone dissuading me, yet we have thus far (despite much philosophical inquiry and a fair amount of neuroscience) been incapable of reducing the concept to the slightest extent. By comparison, concepts such as shouldness and couldness, which are complex computational properties of the brain, have already been reduced a level or two within our map of reality.
To summarise, I argue that neither version of Yudkowsky’s anti-zombie argument, as I interpret it, is sound. The first version is unsound because it fails to demonstrate that an omniscient mind possesses the same level of confidence in its judgements about qualia as it does about the physical Universe, and the second version is unsound because it presumes that qualia are reducible, which is unwarranted. I also propose my own anti-zombie argument, which I believe demonstrates that qualia are not in fact extra-physical (although they may yet be irreducible).
*I see the premise of an agent with perfect knowledge as a helpful simplification in the thought experiment. But I believe that it could be exchanged for a fallible superintelligence without the conclusions changing. The contrast between perfect certainty and infinitesimal uncertainty would be replaced by a contrast between minor uncertainty and infinitesimally increased uncertainty. The problem with Yudkowsky’s argument would remain: if qualia are not extra-physical, then uncertainty about qualia should be no greater than the overall uncertainty regarding the physical configuration of reality.
EDIT:
It seems to me that I should have paid more attention to (i.e. re-read) Eliezer’s post “The Generalised Anti-Zombie Principle” before writing this article, because in it he states:
“Consciousness, whatever it may be—a substance, a process, a name for a confusion—is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud. The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences.”
...
Could we define the word “consciousness” to mean “whatever actually makes humans talk about ‘consciousness’”? This would have the powerful advantage of guaranteeing that there is at least one real fact named by the word “consciousness”. Even if our belief in consciousness is a confusion, “consciousness” would name the cognitive architecture that generated the confusion.
So Vladimir Nesov appears to be correct in that I was wrong to assume that Yudkowsky was necessarily referring to the same kind of “qualia” or “consciousness” that Chalmers was.
Rather than further delving into guesses about Yudkowsky’s intentions, I’ll just attempt to clarify my conclusions in general:
1. I notice that most humans regard themselves as possessing “qualia”. Formerly this concept might have been known as “consciousness”, but at this point that term is ambiguous. No-one seems able to define qualia, despite widespread insistence that it is a real concept. Qualia are considered to be related to brain processes in some way.
2. Writers such as Dennett believe that they can (or have already) described approximately the physical process by which the brain computes an internal narrative containing statements such as “I am aware that I am aware”, and similar references that many would understand as referring to qualia. Specifically, according to Wikipedia: “Dennett’s view of consciousness is that it is the apparently serial account for the brain’s underlying parallelism.”
3. As I believe I have demonstrated, if we assume that qualia exist then we are bound to believe with extremely high probability that a being that has perfect knowledge of a physical brain is fully aware of the qualia that are being produced in this brain. It has no more uncertainty about qualia than it has about the physical brain states. I also believe that Yudkowsky’s argument did not prove this, for reasons already stated.
4. “3” does not allow us to conclude that “qualia” are the cause of our making statements such as “I am aware that I am aware”. These statements can be explained according to Dennett’s eliminative materialist view, in which qualia are held not to exist on the basis that the concept is resolutely indefinable and therefore unreal. However, although “qualia” are apparently indefinable, since a given human’s degree of belief in the existence of qualia is typically extremely high, many humans are unwilling to apply normal standards to the concept. One possibility, which I favour, is to expect that we will ultimately discover that the brain produces statements about consciousness for a reason along the lines of what Dennett describes, but that this physically instantiated computation serves a double role*: both as a reducible causal explanation for why we talk about “qualia”, and as the producer of the irreducible phenomenon of qualia by means of a psycho-physical bridging law. This would mean that qualia supervene upon brain states, but the causality operates in only one direction. In this scenario qualia are an irreducible phenomenon, and since our means of investigating the world necessarily involve the tool of reduction, we cannot expect to understand qualia or any putative “psycho-physical bridging law”. Although this appears to be similar to irrational scientific confusions of the past, there is reason (as I have argued) to view qualia as a legitimate exception.
5. Alternatively, it may turn out to be the case that when we learn more about the brain, we will discover a reduction of qualia that satisfies everyone who claims to experience qualia. In this case “qualia” would indeed be the cause of our referring to ourselves as possessing qualia, and qualia would also be seen to be real. However, as I have discussed, the concept of qualia possesses unusual features that make this seem somewhat unlikely.
*Think of it as being the algorithm, and “how the algorithm feels from inside”. In this case, since we are the algorithm, both are real!
So on reflection, it appears that I have quite a large disagreement with Yudkowsky:
It seems to me that Yudkowsky’s argument here does not prove what it is supposed to prove – that complete physical knowledge entails complete knowledge of qualia – and this is necessary in order for qualia not to be some entirely airy-fairy mystical concept that neither of us accepts. The only way in which it could be interpreted to prove that is if we simply assume that the concept of irreducible qualia is disallowed and he only means to refer to “reducible qualia” or “qualia-eliminative reducible consciousness” – this is both unwarranted, and sheds a rather confusing light on why Yudkowsky took 6,600 words to make his point.
If Yudkowsky is an eliminativist about “qualia”, i.e. he is actually attempting to prove that complete physical knowledge entails complete knowledge of reducible, qualia-eliminative consciousness (per Dennett), then he is in conflict with a majority of people’s extremely strong beliefs, and he could have countered Chalmers with a very brief argument stating simply that talk about consciousness necessarily has a cause, and this cause must be a computation in the brain reducible to quarks.
If Yudkowsky is a reductionist regarding “qualia”, i.e. he is actually attempting to prove that complete physical knowledge entails complete knowledge of reducible qualia only, the question remains why qualia still appear to be entirely elusive to definition, and again he could have countered Chalmers with a very brief argument stating simply that talk about qualia necessarily has a cause, and this cause must be a computation in the brain reducible to quarks.
If on the other hand Yudkowsky is ambivalent about these options, I don’t believe that it is sensible for him to refer under a single banner “consciousness” to the concepts “irreducible qualia” and “reductive explanation of qualia” and “reductive qualia-eliminative explanation of consciousness” because statements that are true of one are not true of the others. So he should stick to arguing about the consequences of one at a time.
For example, despite the fact that I agree that complete physical knowledge entails complete knowledge of irreducible qualia, I don’t see how this proves that irreducible qualia are the cause of our making statements such as “I am aware that I am aware”. So if Yudkowsky argues that complete physical knowledge entails complete knowledge of consciousness as though this proved that consciousness is the cause of our making statements such as “I am aware that I am aware” then he must only mean consciousness in the sense of reducible qualia or qualia-eliminative consciousness. And I have already discussed the problem with assuming these concepts at the expense of irreducible qualia, and asked the question why it takes 6,600 words to refute Chalmers under those assumptions.
Here is a summary of all discussions of qualia there have ever been:
Qualia cannot be reducible to physics because we cannot see how they could be.
Qualia must be reducible to physics because we cannot see how anything could not be.
“One man’s modus ponens is another man’s modus tollens.”
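The aphorism can be made explicit in symbols (my own sketch, not part of the original comment): both sides accept the same conditional, but one affirms the antecedent while the other denies the consequent.

```latex
% Shared conditional: P \to Q.
% Modus ponens affirms P and concludes Q;
% modus tollens denies Q and concludes \neg P.
\[
  \frac{P \to Q \qquad P}{Q}\ (\text{modus ponens})
  \qquad\qquad
  \frac{P \to Q \qquad \neg Q}{\neg P}\ (\text{modus tollens})
\]
```

Here one might read P as “qualia are as they introspectively seem” and Q as “qualia resist physical reduction”; the dualist and the reductionist then disagree only about which end of the conditional to hold fixed.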
IIRC (and looking through the article quickly), Yudkowsky doesn’t include omniscient beings in his argument. Furthermore, as far as I can see the topic of his argument is a certain confusion that would already be resolved by the time you can pose the question formally (that is, to an “omniscient being”).
A conceptual error here seems to be attributing to “omniscience” the ability to clarify a confusion.
Vladimir, I should have looked more carefully before replying to this comment the first time. Because Eliezer actually said:
So it is in fact plainly untrue that “Yudkowsky doesn’t include omniscient beings in his argument”. That is explicitly the context of the discussion.
He says this:
I interpret “knowing the positions of all the atoms in the universe” as omniscience, and since Eliezer sets out the problem this way, this is what I interpret the problem to be.
Then, to paraphrase Nesov’s point which you ignored: A conceptual error here seems to be attributing to “knowing the positions of all the atoms in the universe” the ability to clarify a confusion.
In that case, what exactly is the thought experiment supposed to prove?
If we suppose that this being can’t clarify confusions merely by virtue of its knowing the positions of all the atoms of the Universe on an ongoing basis, then if it doesn’t understand qualia this is merely one confusion amongst many mundane physical confusions. So there would be nothing “extra-physical” about its failure to understand qualia, since it doesn’t understand certain high-level phenomena that everyone agrees to be “physical”.
Under this interpretation the debate seems pointless to me.
Yudkowsky explained how the being knowing the positions of all the atoms in the universe could use human-level reasoning to arrive at high confidence that “qualia” refers to something real.
You criticize this argument claiming this being is omniscient and therefore should know with certainty that qualia are real if they are. Your criticism fails because the being does not have omniscient-level ability to make logical inferences and resolve confusions; it uses only the human-level reasoning Yudkowsky is able to explain and attribute to it. (And Yudkowsky’s argument never used more knowledge of physics than we have. Really, the level of confidence we attribute to this being is the level of confidence we have that a totally omniscient being would know for certain that qualia exist. Obviously, if we could explain why an omniscient being would have more confidence in our position than we do, we would already have that higher level of confidence.)
I couldn’t follow you here...
On rereading, I noticed that the sentence “Your criticism fails because the being does not have omniscient level ability to make logical inferences and resolve confusions” was missing the word “not” which screws up the meaning. Anything else not make sense?
It still seems off, as in pointing out strangely irrelevant things, but it’s 3AM, so I might be missing an obvious motivation for what you’re saying...
On re-reading Eliezer’s zombie post, I noticed that he also said:
Although “resolving confusions” may be another thing entirely, this seems worth pointing out.
So knowledge of the positions of all the atoms in the Universe turns out in this case to be an irrelevance, since Yudkowsky’s argument applies to humans just as much as this hypothetical being using “human level reasoning”.
If this were really the intended thought experiment, it would prove nothing about physicalism vs extra-physicalism, or the logical impossibility of p-zombies. We would just learn that it’s very likely that humans all possess qualia. So why did Yudkowsky even mention this being that knows the positions of all the atoms, and why does he claim that the debate is about disproving extra-physicalism?
Whereas the omniscient intelligence thought experiment, in which the being can actually use its information to resolve all kinds of multi-level confusions, actually tests the question whether there is some extra causal factor responsible for the reality of “awareness”, “qualia” or “consciousness” (whatever each of them considered himself to be discussing) besides mere physics. Which might just be reason to suspect that that is what Yudkowsky actually intended.
That detailed knowledge of physics doesn’t help is at least the point I made in the top-level comment. “Information” isn’t magic, you can’t fix a broken question with more information (i.e. ability to answer questions better).
Yudkowsky’s argument was focusing on origins of the question (concept of qualia), on reasons for it getting asked. The article presents motivation for interpreting the question as referring to properties of physical world. (See also: A Priori.)
That makes some sense.
But might we not expect that computations occurring in the human brain will turn out to offer a causal account of why we refer to ourselves as possessing qualia (much as Dennett aims to describe in Consciousness Explained), without this satisfying the large majority of philosophers and the general public that their “qualia”—the indefinable concept of whose nonexistence they cannot be persuaded—have been explained at all, rather than simply ignored in favour of some eliminativist “consciousness”?
As far as I am concerned, if such an account did emerge upon our investigating the brain in detail (as I expect it would) then I would not accept that qualia had been explained and would continue to believe that the selfsame computations occurring in the brain also produce via psycho-physical bridging laws the phenomenon of “qualia”, which is irreducible.
Since qualia are such an intensely personal affair, I don’t see that Bayes’s Theorem could ever have anything sensible to say as regards the rationality of this belief without its begging the question.
So as I see it, if Yudkowsky is presuming that “qualia” either don’t exist or are reducible to quarks, then (a) his argument was way too long-winded—he could just have said that intelligence + knowledge about quarks ⇒ understanding of anything built from quarks—and (b) this is not an assumption that anyone else is rationally compelled to make.
You seem to be criticising conclusion “a”. But surely Yudkowsky has to pick from one of the three choices: qualia do not exist, qualia exist and are reducible to quarks, qualia exist and are not reducible to quarks. If he believes none of these, arguing instead that the qualia concept is a confusion, then he believes that qualia do not exist as far as I’m concerned. This is because we are already incapable of defining qualia, and our agreement to use this word is based on the fact that we are all aware of the existence of one indefinable concept that appears to have a relationship with our brain states – not on any possible definition of “qualia”. If Yudkowsky claims that “qualia” is a confusion, then this is a refusal to accept the consensus regarding the name of the indefinable concept—which constitutes a simple rejection of the concept. In any other case this rationality trick (dissolving the question, righting a wrong question or however you wish to put it) works, but in the case of an inherently indefinable concept it does not.
...And if Yudkowsky accepts the possibility that irreducible qualia can exist in the sense that I describe, which I must admit was my unwarranted assumption in the original article, then I don’t feel that he actually managed to prove in his argument with Chalmers that (we can be extremely confident that) the existence of qualia is entirely dependent on properties of the physical world. This is the origin of the supposed correction that I made to his argument, because that can in fact be proved. I hope that that particular point has already been made very clear in my article.
Assuming that there’s a good deal of rich content in the world we don’t understand that is covered by our label “qualia”, the explanation for our use of a single label is that the generally undifferentiated confusion we label-makers all have feels the same from the inside.
The actual content of this unknown area might be divided into three equal parts: two new weird concepts (unphysicality and fundamental irreducibility) and one mundane one already present in our mental maps (reductionism).
Our use of a single word for a lump of confusion doesn’t strongly imply that there is one underlying concept.
There probably is only one explanation, rather than two or especially three, for why we feel we don’t fully understand, because that is the simpler explanation. But my point is that the use of a single new label does not imply that the things described are a single new concept—they could be parts of two new and one old, or all one new, or all one old.
Lessdazed, I was trying to argue that the use of a single word renders Yudkowsky’s arguments untrue, unless he is in fact presuming certain facts about this “confusion”. The implications of what he is arguing differ depending on the features of the “confusion” as I was grasping towards in my article and finally pointed out there.
I also suggested that if Bayesian rationality were to tell me that I don’t have qualia (i.e. if a thorough investigation of the brain found only an eliminative materialist explanation of the confusion “consciousness”) then I would view that as a refutation of the general applicability of Bayesian rationality to this unique case rather than a refutation of qualia. That is a measure of my confidence that I do have qualia. This may attract negative karma, but I believe that it would be the actual humanly realistic response (in the scenario that investigation of the brain resolves the confusion in this particular way) of most Bayesians. It is also somewhat a restatement of Richard Kennaway’s aphorism here.
I await (more in hope than expectation) recognition of the fact that Yudkowsky’s argument fails to refute extra-physicalism, or any explicit defence of the idea that he refuted it.
Furthermore, if you do still feel that it is legitimate for Yudkowsky to bundle all possible referents of “consciousness”, “awareness” and “qualia” together into one “confusion”, then his argument did actually fail to disprove the likelihood of extra-physicality and my correction is still needed!
It’s that the end result (speech) is physical, so any explanation of the world saying qualia are mystical in addition to a generally physicalist picture of the world still has to have a physical interface with the mystical where the purely physical is disturbed by mystical forces. People’s physical words (supposedly caused by qualia in a spiritual realm) would be traceable back to severed nerve endings that magically lit up with energy (not caused by the physical system) to eventually cause those words about internal experience.
On the other hand, if people’s explanations of their experiences are only caused by physical processes unrelated to their experiencing something genuinely spiritual, even if such a spiritual thing were to (be logically intelligible and) exist, then there would still be no evidence or reason to believe in the mystical nature of qualia because people’s explanations and experiences are fully explained by the physical.
It’s not a probabilistic argument, it’s a damned-if-you-do-and-damned-if-you-don’t argument.
Lessdazed, in other words you are presuming that qualia are a reducible concept that we can in principle break down through various levels into quarks. Firstly, if this is a legitimate presumption do you agree that Yudkowsky’s argument is entirely surplus to requirements (and therefore misleading), since as I demonstrate in my article we can use this presumption to refute Chalmers in a few lines? Secondly do you have nothing to say in response to my argument that we should not presume that qualia—a concept with a seemingly unique quality of utter indefinability—are reducible?
It seems to me that the superintelligence can obtain full knowledge of the existence of qualia, because it necessarily experiences these qualia itself. Therefore, qualia are not “extra-physical” because there is no additional uncertainty surrounding them that is not uncertainty about the physical Universe. However, this only implies some kind of supervenience between qualia and brain states, not that qualia can necessarily be understood as a higher-order phenomenon composed of quarks. That is an empirical question, and based on the unique properties of the qualia concept I see no reason to assume that the thesis of reducibility generalises to include them.
So it’s physical but not based on quarks, neutrinos or anything like that?
I haven’t read it in a while. I remember thinking it could have been more clearly written.
It’s not about presumption. Like here:
It’s about considering all possibilities and noticing when multiple ones lead to the same place, one doesn’t have to presume as much as one might think.
The undefinability of a word isn’t a point in favor of the concept it is supposed to represent.
Possibly. Although this is not an idea that sits comfortably, it is no less “inconceivable” to me than the idea that qualia are reducible to quarks.
In that case, like Dennett you may be an eliminative materialist.
It’s not exactly inconceivable to me, as much as unlikely. What that situation would still imply is that a perfect picture of what all the mere atoms in a locality were doing—including those of speech—would have a readily discoverable flaw when there was a massive inability to predict what would happen at some point after photons hit eyes lasting until some point before the speech emanated. Then there would be a different kind of matter as a component of every mind with qualia—every human mind or nearly so, at least.
I may be what people call that but if you (or I) learn that such a label fits me after you (or I) learn my opinions on all the categories eliminative materialists opine on then you (or I) haven’t learned anything about me from the label. If I try to figure out if others would call me that then I won’t be able to taboo “eliminative materialist” for that inquiry.
The way I picture it is that we might obtain a detailed picture of the brain, but all we would find is that the brain has a parallel structure interpreted in a serial way, per Dennett. We would discover that in hindsight we already had a roughly accurate idea in 2011, with some gaps and flaws but no major missing piece, of internal narratives and why humans write philosophy papers, as Dennett has laid out in Consciousness Explained.
Eliminative materialists might then claim victory. However, I argue in my article that if a mind possesses sufficiently detailed information regarding a brain’s structure, then he actually experiences whatever qualia that brain produces because he runs exactly the same computations (and whatever inaccuracy there is in his information about that brain is the same degree of inaccuracy that exists in the similarity of the qualia he experiences).
Therefore, to the extent that a mind can accurately predict another mind’s precise behaviours, he experiences that brain’s qualia because he is essentially running that brain himself. Therefore, there is no additional uncertainty of qualia in addition to uncertainty about the physical configuration of the Universe, which is the precise subject Eliezer and Chalmers were arguing about.
nitpick: it’s still a probabilistic argument because, as you point out, there could be two totally unrelated mechanisms that produce speech talking about qualia and the actual experience of qualia. Obviously that’s super unlikely, but it’s still probabilistic.
Exactly my point. Because there should be no probabilistic element concerning qualia if there is no informational uncertainty about the physics of a given brain. Extremely (extremely extremely) likely to have qualia doesn’t cut it.
So I corrected Eliezer’s argument to state that it is us who are merely extremely confident that other human minds have qualia, and our hypothetical omniscient intelligence is certain about whatever qualia that exist in the same way that it is certain about physics. My best understanding of Eliezer’s argument is that the omniscient intelligence is merely extremely confident about qualia existing in given minds, which is not the same thing. But it’s easy to mix the two up by arguing imprecisely.
Comments have focused on my speculation about the irreducibility of qualia, but that was intended as more or less an aside to the main focus of the article.
To be fair, I came into this thread expecting you to argue my nitpick and then I read some of your post and got confused about what you were arguing. I’m also a bit confused about your comment. Where are you drawing the boundary around “a given brain”? Does it include any inaccessible qualia physics?
I’m not as such. But the existence of clusters in thingspace that correspond to the referent of our word “brain” is one of many (naturally) unspoken premises in the Yudkowsky-Chalmers debate. As such, I don’t believe that it is my duty to define “brain” precisely nor do I think that it is particularly relevant to the debate to do so.
You are right.
I meant it’s an argument that attempts to take an inside view into the causes of believing the relative extent to which nature is well-carved at “qualia”, rather than one that takes the outside view of “categories we create are better than chance at describing reality well, because our brains are neat that way, and I feel really confident they’re so good that they almost never mess up”.
I say later “Beliefs are probabilistic”, so my beliefs aren’t outside the realm of healthy skepticism, the way Phlebas’ are as he “suspects” what the outcome of this will be, as determined by further discovery/thought. My “belief” isn’t different in kind from his “suspicion” regarding the correct answers to these questions we are considering.
“Agreement” looks like a wrong word here if the word (“qualia”) is meant in different senses, and I expect it is.
(A similar situation was discussed in False Majorities.)
Vladimir, I agree with you and I am about to add an appendix to my original post to discuss this point (having been pushed towards that realisation by the comments here).
In fact, a rewrite would be ideal!
Does anybody have an experiment that would distinguish the following conditions?
1) All human beings are conscious.
2) 75% of human beings are conscious. The other 25% have the recessive zombie trait, mimicking consciousness without possessing it.
3) All humans with a Y chromosome are conscious, those without it are zombies.
4) All human beings except you are zombies, because this is actually the parallel zombie universe. You’re a cross-universe accident.
I’ll call her the Boltzmann baby.
Sorry, you begged the question here:
I like this post a lot! And I’ve agreed with most of the things you’ve said in the comments.
However, I think the problem you raise regarding Yudkowsky’s objection can be raised regarding your own. That the omniscient entity experiences qualia as a result of its perfect simulation of a human brain can be granted. Chalmers is not saying that the entity need be told a further fact about the existence of qualia not deducible from its physical computations, but rather that it needs further facts about the existence of other minds’ possessing qualia. It’s theoretically possible that its simulation of my brain produces qualia, but my simulation of my brain (which is, of course, just my brain) doesn’t. Of course, this feels terribly unlikely, and presumably the entity would justly assign a very high probability to my having conscious experiences identical to the ones that it had while perfectly simulating me. But this is nevertheless not a strict deduction from the physical facts. According to your post, merely having high confidence doesn’t cut it.
Firstly, thanks for the first positive feedback I’ve received!
The thought experiment postulates that the omniscient being possesses perfect information (being omniscient!) about a certain volume of the Universe. As such, the computation it performs is a perfect likeness of the computation that occurs in your brain. Therefore, from our perspective as the people conducting the thought experiment, we believe with extremely high probability that the outcome of the thought experiment is that the omniscient being experiences the same qualia that the human in its sphere of understanding does.
To restate, our belief in this outcome of the thought experiment is merely extremely confident. But given this most probable outcome, since the computation is exactly the same, the qualia experienced by the omniscient being are certainly exactly the same (as it sees things). This is quite a subtle distinction!
What may also be confusing you is that the existence of “perfect” knowledge i.e. omniscience is unphysical—this is after all a thought experiment. But as I suggested in my article, I think the same principle applies if the being is not omniscient but merely possesses a detailed physical understanding of a volume of the Universe. All that changes is that the discussion becomes more long-winded. It is still the case that there is no uncertainty on the part of the superintelligence concerning qualia that is not directly related to uncertainty about physical configurations.
I’m not sure you took my point correctly. I am arguing that the omniscient entity, and not just us, can only be extremely confident that other people are having conscious experiences.
The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations. If they don’t supervene in this way, two identical computations may differ in the qualia they produce. Furthermore, certain knowledge of this supervenience is not built into the entity’s omniscience. So he lacks certain knowledge of my experiences as a result of his simulation, even while obtaining certain knowledge of his own. So even though his computations have led him to perfect knowledge of the configuration of all quarks and the like, he still lacks perfect knowledge regarding my qualia. This is the conclusion Chalmers is trying to arrive at.
Recall that we are taking as a given that qualia do in fact supervene upon brain states (regardless of whether the superintelligence knows this).
Now, the superintelligence is certain about your physical make-up, despite the fact that you are separate from it. If it performs a computation, which it is certain is the same one occurring in your brain, then when it experiences qualia it knows for certain that these qualia are caused by the computation. When it doesn’t run the computation, it doesn’t get the qualia. When it runs it, it does. Since it is certain that the computation is the same as yours, it is certain that you experience the same qualia. You see this does not depend on an abstract belief that certain computations bring about qualia: it gets to actually run the computation (which simply is the computation in your brain) and see for itself that qualia are produced.
I think that you are having trouble grasping this because there is no such thing as perfect certainty, and you are applying your realistic intuitions of fallibility to the idea—either that or I’m wrong!
Phlebas, like antigonus, I really enjoyed your essay (without agreeing with all of it). But the same objection that antigonus raises occurred to me. I’m not sure that you understood antigonus’s objection, so I will try to rephrase it in my words.
I follow you this far:
And I agree that the superintelligence then experiences qualia. But what I don’t see is why
Since you want to leave open the possibility that qualia are irreducible, you can’t assume that the superintelligence (SI) sees how the computation logically necessitates the generation of the qualia. The only alternative is that the SI reaches its conclusion through empirical observation. Indeed, this is how you describe the SI’s inference when you say,
But how can this kind of empirical observation provide that the SI with absolute certainty that the computation, and the computation alone, causes the qualia?
For example, how can the SI rule out the possibility that some nonphysical fact F applies to itself, but not to you (or the infant or whatever), and that [the computation + F] suffices to generate qualia, while [the computation—F] does not?
It seems that the SI has to leave open some small chance that, when it runs the computation, the computation generates qualia, but when you run the computation, you do not experience qualia because some additional nonphysical ingredient is missing. To deny this in a debate with Chalmers would seem to beg the question.
OK, a multi-paragraph summary first (skip it if you like; I feel it’s helpful to avoid any further arguing at cross-purposes) – since my position in the argument slowly morphed and became disorganised:
People claim to have “qualia”, which is different to mere Dennettian “consciousness” but can’t seemingly be defined. On examination of the brain, we will inevitably find some physical reason for why people discuss consciousness. It is highly improbable that this physical reason is unrelated to “qualia” OR “consciousness”. However, it is misleading to bundle these two together when discussing the subject – and I do not accept the rather absurd claim that it’s OK to do so, because consciousness is a “confusion” – since arguments valid under certain assumptions about this “confusion” are not valid under other assumptions. In other words, Eliezer’s failure to attempt to distinguish reducible “qualia”, irreducible “qualia”, and qualia-eliminative “consciousness” as a preliminary step in his essay renders the essay liable to beg the question (question-begging seems to be the crux of the matter in general in this discussion) – unless he considers the irreducible qualia idea to be a priori nonsense.
If he does consider the irreducible qualia concept to be a priori nonsense:
a) Why not say so?
b) Why was such a misleadingly long essay necessary?
c) Why assume such a thing? OK, it doesn’t seem very Bayesian. But Bayes’s Theorem, Bayesian rationality and reductionism are just rules that apply perfectly to everything we’ve ever tried to apply them to – but in my ontology like most others’, there’s everything else and there’s qualia. There is no other concept, apart from “qualia”, that a supermajority of people affirm to be real in the absolute strongest terms – including people such as myself who are otherwise Bayesian reductionists – but which appears to be irreducible.
But anyhow, there is only an actual flaw in Eliezer’s refutation of Chalmers’s if we assume that “qualia” are real and irreducible. If qualia are real but irreducible, there must however be a reductionist causal explanation for physical humans discussing consciousness. I find the idea of our discovering a reductionist explanation of qualia, rather than mere consciousness, improbable. Therefore let us suppose that on examining the brain we discover a Dennettian causal explanation of our talking about consciousness, and then people are left to decide whether they accept this as a refutation of “qualia” or decide that such a belief is crazy and that qualia must be irreducible, existing apart from the physical cause of talk about consciousness.
Then, if we are not Dennettians we have very good reason still to believe that qualia supervene upon brain computations – presumably including the computations that constitute the physical reason for our discussing consciousness. Whatever happens to our brains physically, we experience our qualia changing synchronously and in qualitative relation. They may be “causally isolated” in the sense that we understand causality necessarily to involve reducible phenomena, but they “supervene” – when brain states change, qualia change likewise.
This distinction reveals the essentially question-begging nature of Eliezer’s talk about “causally closed outer Chalmers” being deranged – if we believe, as seems to be the case, that he dismisses the concept of irreducible qualia out of hand. That is to say, the causal chain leading back from Chalmers’s hands typing on the keyboard about “qualia” leads precisely to the brain computations (viewed by other parts of the brain – per Dennett’s description) that are generating qualia – outer Chalmers is not deranged – but if we take a (somewhat) detailed look at a brain from outside, all we see is the brain examining itself in action; we might (naively?) assume that such a thing as “qualia” have been explained away. It’s only when a given brain apprehends another brain in sufficient detail (however much detail that may be) such that it is running approximately the selfsame computations, that it actually notices the qualia (like you said, in an empirical manner).
So, let us assume for the sake of argument that this belief is the accurate one regarding consciousness/qualia. I suspect that in believing in real, irreducible qualia I am somewhere between Eliezer’s and Chalmers’s stances, because it seems to me that Eliezer is not favourable towards such an idea, but pace Chalmers I do not consider there to be anything “extra-physical” about qualia – they are irreducible, but they supervene upon physical brain states therefore they are fully determined by mundane physical configurations of the Universe.
So, having tussled with Eliezer I still need to tussle with Chalmers. Perhaps Eliezer has done the job for me? Apparently not, because if we grant that there is a real, irreducible phenomenon “qualia”, Eliezer’s argument (if it applies at all) is simply that it’s improbable that humans would talk about having qualia, if they didn’t have qualia. This doesn’t prove that qualia are fully determined by physical configurations: seemingly a superintelligence that knows everything about some physical volume of the Universe is merely confident that beings inside (which do, in fact, experience qualia) have qualia.
Tyrrell, you are right in saying that I argue that the superintelligence concludes through empirical observation that these beings experience qualia.
You ask:
The superintelligence cannot rule that out. I agree, and I now understand Antigonus’s objection better too. However, can it rule any “non-physical fact F” out? What about the non-physical fact F that its supposedly perfect knowledge about a certain volume of the Universe is bunkum? Is there any limitation to the purely physical knowledge that “non-physical facts” can potentially undermine – even in the eyes of a (physically) omniscient being?
If not, is it not unfair to apply this standard – having to rule out the possibility of some “non-physical fact” disrupting expectations – to the superintelligence’s knowledge of qualia, but not to its knowledge of everything else?
You may argue that this is question-begging. However, our objective is to prove that an omniscient superintelligence knows about qualia just as much as it knows about physical brains – assuming that (from our perspective, with extremely high probability) qualia do supervene on brain states (and also assuming that qualia are real and irreducible, to make the discussion meaningful). And we have proven that: what the superintelligence, as an omniscient mind, does is effectively to take physical brains, inhabit them and see if they experience qualia.
If this wasn’t the case – if we were stumped: “Um yeah, I don’t see how this superintelligence knows if I have qualia” – then we might have to concede the point to Chalmers. It would appear that qualia were not fully determined by physical configurations, therefore they must be “extra-physical” rather than supervening on brain states and being merely irreducible.
The difference is that the “non-physical fact” that you speak of is equally capable of undermining anything. It is fully general. If we were arguing with Chalmers about whether there are “non-physical facts” in general then I would be begging the question – that seems an a priori irresolvable argument. But what we are actually arguing about is whether we are forced to admit a specific, apparent gap in physics where a real phenomenon is seen to lack a physical underpinning. This would prove that there is at least one “non-physical fact”. In other words, we are not trying to prove the non-existence of non-physical facts in general (heaven forbid!) but merely to disprove the idea that there is any particular reason why we should believe that there are any non-physical facts.
I’m worried we’re talking past each other, since I would give largely the same reply as before.
The word “it” here is referring to the superintelligence correct? Because if so, this is the specific inference I’m disputing the superintelligence will legitimately make. As I wrote: “The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations.” (It would be helpful for me if you gave me a simple yes-or-no to this principle.) Even if we suppose ourselves to be certain of the supervenience (and therefore certain that the entity undergoes identical experiences to mine in the process of simulating me), what matters here is the superintelligence’s certainty around it. So in this scenario, there is no “regardless of whether the superintelligence knows qualia supervene upon brain states.”
Yes
I disagree with this
The superintelligence doesn’t need to know for certain the abstract fact that qualia supervene upon brain states. But in each case of a brain that does experience qualia, it too experiences qualia when it runs their computations. Since it knows that the computations are exactly the same, it knows or learns that in each specific case the brain in question is as a matter of fact producing qualia.
What it doesn’t learn (for certain) is whether the fully general condition always holds that human brains with similar-looking computations all have qualia – unless it were to entirely exhaust the space of possible minds which I suppose it does not. But that is unnecessary. We are only demanding (to vanquish “extra-physicality”) whether it knows for certain that the specific brains in its sphere of understanding have qualia. And since it is running their computations, which it is certain are theirs – i.e. it has incorporated their brains – it does so.
I suppose you might be objecting that one part of the mind might have imperfect knowledge about what the other part is doing, so it doesn’t “know” that it is actually experiencing qualia. But you might equally say that regarding communication across the mind about physical knowledge. So you see there is symmetry there between physical knowledge and knowledge of qualia, whether or not you want to postulate that the superintelligence also has perfect intra-brain communication.
O.K., you’re correct that full-fledged supervenience isn’t necessary. What the superintelligence needs instead is certain knowledge of the following weaker claim:
(1) Any two identical computational processes yield the same qualia if at some point the process is performed inside of the specific region R of the universe that the superintelligence is looking at.
But since the superintelligence can’t be certain of (1), either, it doesn’t really make a difference. If you disagree, how can the superintelligence deduce (1) from its complete description of the physical events in R? It seems to me that all it can deduce are A. the state of the matter in R at any particular time, and B. that its own performance of some of the processes in R yields qualia. But (1) is clearly not a logical consequence of A. and B.
Suppose an entity with qualia emerges in the Game of Life. Surely the omniscient being doesn’t have to have those qualia to predict perfectly (and, it seems, to have perfect “physical” knowledge of the simulation)?
I don’t see any difference. If the superintelligence is just watching the dots move around, then it isn’t predicting anything. If it knows exactly what the simulation is going to do, then it must be performing the same computations that the computer running the cellular automaton is doing. Amongst these computations must be the computations that generate the qualia that the entity in Game of Life experiences. Therefore the superintelligence also experiences the qualia.
To take it down to something easier for me to grasp than a superintelligence: if there is a color a rat associates with something bad, say green for foul food pellets, and a human has an enhancement enabling it to model that rat’s brain perfectly, and the human does, and that human associates that color green with something positive, like money, what predictions can we derive from the claim “the superintelligence also experiences the qualia” when the human simulates the rat seeing that color green?
Firstly, it would no longer be a “human” as most would understand the description. In fact, if it could model the rat’s brain perfectly it would be just as unphysical as our hypothetical omniscient being.
This “human” would be running ordinary typical-human-brain computations in one part of its massive brain, and rat-brain computations in another. Therefore different parts of the brain would be experiencing different qualia separately. It cannot “mix up” the computations otherwise they would obviously no longer be the same computations; so they do not interfere with one another.
The problem is really that we have no conception of a mind that experiences multiple qualia at once—but in fact this is what the superintelligence problem entails. We might very well view the superintelligence as an umbrella mind, incorporating different minds within itself whilst presumably possessing some centralised intelligence running computations that assess the different sub-units. I have no idea what that would be like!
Nonetheless, we see that perfect knowledge of the physical brain produces perfectly reproduced qualia in the observer. Whether we want to define this as the observer being split into different beings (so the human has simply become a human+rat) does not change our conclusion regarding “extra-physicalism”.
How do you know the only system that could carry a conversation about the weather and predict the next move of every rat molecule or atom or whatever relevant bit would be one that separated the processes?
Alternatively, considering that the only requirement for computation is locality, can’t it get ahead in one part and then take a break there, unlike a rat’s mushy brain? If the time gap doesn’t seem important enough, just scale up the physical size of the mind in question, not necessarily its complexity.
I’m not really sure what a single quale is supposed to be. I had thought that there would be at least one for color, another for smell, but I can accept the idea that they are so specific that there is one per human mind state—in which case I’m not sure why something smarter than human would necessarily need more than just one bigger one, or where a cutoff might be in size, or why a cutoff might be in the first place.
There are some interesting papers out there relating the idea of data compression to intelligence.
human+(rat brain predictor)
It might not have to separate the processes completely I suppose, if there were exact similarities in the computations somewhere. But I meant that if the humanrat started experiencing a composite of qualia from rat and human, like seeing green money and feeling scared of it, then the humanrat cannot be using that to predict the behaviours of either rat or human. Or insofar as it is predicting the behaviours in a “distributed way” it is also experiencing all of the qualia in a “distributed way” and should be able to reintegrate these gaining full knowledge of the qualia.
I have no idea of the “rules of qualia” although it seems like the sort of thing we could potentially obtain “subjective” knowledge about. But I don’t see a real objection to my article emerging here.
There are a bunch of different sets and they have their pros and cons but whatever you do don’t use the 4th Edition rules of Qualia.
The more refining that is done to the concept the simpler it gets and the harder it is to suspend disbelief. It’s as if someone thought it would be possible to try and make rich characters from WoW avatars that are empty shells, as if the essence of a character’s richness could be stripped from complex, caused interactions with the environment. That’s not magic, it’s hand waving. So what if they would otherwise be computationally expensive?
I think you are making the assumption of strong AI: that a simulation of consciousness must necessarily be consciousness. Consider an omniscient being predicting the results of an H-bomb down to the quark level. Must the omniscient mind containing the simulation reach temperatures of 10,000 F? Must it reach overpressures sufficient to collapse its own mind? I think not. So presumably there is a way to simulate consciousness without experiencing it, if it is material.
The first, as I think Yudkowsky states, is that qualia are not very well defined. Human introspection is unreliable in many cases, and we’re only consciously aware of a subset of processes in our brains. This means that the fact that zombies are conceivable doesn’t mean they are logically possible. When we examine what consciousness entails in terms of attention to mental processes, zombies might be logically impossible.
Second, one of the false intuitions humans have about consciousness goes something like this:
“If I draw up a schematic or simulation of my brain seeing a red field, I, personally, don’t then see what it is like to see the color red. Therefore, my schematic cannot be the whole story.”
Of course, this intuition is completely silly. A model of my brain doing something isn’t going to produce qualia in my own mind. Nevertheless, I think this intuition drives the Mary thought experiment. In the Mary experiment, Mary is omniscient about color and human vision and cognition, but has lived in a black and white environment all her life. When she sees red for the first time, she knows something more than she did before. (Though Dennett would say she now simply knows she can see the color red.)
As Bayesian reasoners, we have to ask ourselves, what might we expect if qualia do (versus do not) reduce to mechanistic processes?
If qualia do reduce to physics, then we would still find ourselves in the same situation as Mary. We don’t expect models of brains to produce qualia in the brains of the modeler. At the same time, there are good reasons to expect physical brains to have qualia as Antonio Damasio has described in Self Comes To Mind. On the other hand, if qualia could have had any conceivable value, why should they have happened to be the qualia consistent with reduction? Why couldn’t seeing a red field produce qualia consistent with seeing elephants on Thursdays?
Another way of putting this is to say that reductive inference isn’t expected to create qualia in the reasoner. When I model water as H2O, my model doesn’t feel moist! Rather, the inference works because the model predicts facts about water that didn’t have to be that way if water didn’t reduce. Similarly, reduction of minds to brains need not produce actual qualia in theorists. The theorists need only show that the alternatives get crushed in Bayesian fashion. The Mary experiment was supposed to show that reductionism was impossible, but it fails because the apparent qualia gap would exist whether or not we are mechanical.
I think what you are saying is that if we possessed detailed understanding of a mind, we might discover a reductive explanation of qualia. That is true, but for reasons given in my article it is unwarranted to assume that we would do so. And if it is a warranted assumption, do you agree (as I demonstrated in my article) that Yudkowsky could and therefore should have chosen to refute Chalmers in three sentences?
This is an equivocation on the concept of a model. If you have a simplified model in the form of a schematic on a piece of paper, then this is not going to produce in your brain the computations that we know with extreme likelihood (per Yudkowsky’s original argument) produce qualia. On the other hand, in the Mary thought experiment Mary has an incredibly large brain. Since she has by definition (yes indeed) a perfect “model” of a brain, her model is in fact the brain itself; therefore her mind runs the same computations and (with extreme likelihood) produces the same qualia.
I think that people get thrown by imagining Mary as a human female, rather than a being of immense size.
If we change the zombie thought experiment to suppose that the being in question is less than omniscient, then it becomes more complicated. But even an approximate model of a brain, computationally accurate to 10 decimal places rather than to infinity, will obviously produce qualia and I submit that the uncertainty surrounding these qualia (in comparison to the original brain’s qualia) is no more than the uncertainty surrounding the physical state of the original brain – whereas in Yudkowsky’s argument version 1 as I summarised it, there is additional (albeit minute) uncertainty about the existence of these qualia.
If you object that a superintelligence could possess a model without this being “inside its mind”, I think that is beside the point of the thought experiment. Insofar as the superintelligent observer knows about the physical state of a volume of the Universe, it is expected to have no more uncertainty about qualia experienced within that volume than exists due to limitations of its physical understanding. If it possesses a model that produces accurate predictions regarding the physical behaviours of the humans in this volume of the Universe, the model must itself be running the computations that occur inside the brains of those humans. If the superintelligence is letting the model do all the work, then it is the “model” that is experiencing qualia since it is running the computations, and the superintelligence is a red herring since it does not actually know anything about the physical state of said volume of the Universe. We have simply redefined the superintelligent observer to be some other process that runs the computations occurring inside human brains.
I think I’m saying more than this. We might find that it is impossible for beings like ourselves to not have qualia. By analogy, consider the Goldbach conjecture. It’s possibly true but not provable with a finite proof. But it’s also possibly false, and possibly provably so with a finite proof. It’s conceivable that the Goldbach conjecture is true, and conceivable that it is false, but only one of the two cases is logically possible.
I’m afraid I don’t see this. If qualia can be understood in terms of a model, then we can show that it reduces. But having a brain is not the same thing as having a model of a brain. Children have brains and can be certain of their qualia, but they have no model of their cognition.
The qualia that Chalmers is talking about is what distinguishes first-person experience from third-person experience. Even knowing everything material about how you think and behave, I still don’t know what your first-person experience is like in terms of my own first-person experience. In fact, knowing another person’s first-person experience in terms of my own might not be possible because of indeterminacy of translation. Even being in possession of a perfect model of your brain doesn’t obviously tell me exactly what your first-person experience is like. This is the puzzle that drives the zombie/anti-reductionist stance.
What I am saying beyond this is two-fold. First, even if the perfect model is of my own brain, there’s still a gap between my first-person experience and my “third-person” understanding of my own brain. In other words, finding a gap isn’t evidence for non-reductionism.
Second, the gap doesn’t invalidate the reductive inference if the reductive inference wouldn’t allow you to bridge the gap in any case.
How does this bear on the zombie argument?
Well, frankly, we’re a lot more confident in physicalism based on the evidence than we are in the lack of flaws in the zombie argument.
It’s certainly possible that we’re talking at cross purposes or that I don’t understand your claim. Are you making a distinction between first-person experience and third-person knowledge of brains? The typical philosopher’s response would be that a superintelligence has exactly the same problem as we do.
I have never understood why people think this is worth discussing. The “zombie” argument against physicalism is just bonkers—because it is not based on any actual evidence. Chalmers supporters need to show us the purported zombie with the purported identical atomic state and somehow show that it lacks qualia which the original has—and then we can have a meaningful discussion. Without such evidence—and in the absence of other evidence against physicalism—I don’t see much point in entertaining the possibility of such an outcome.
Firstly, because no-one can honestly claim that qualia are an easy concept to grasp. Secondly, because many philosophers hold a false belief about p-zombies. Thirdly (from my perspective) Eliezer took 6,600 words and still failed to refute Chalmers. If that isn’t good cause for writing about a subject, I don’t know what is.
As I understand it, the zombie argument is about a thought experiment concerning a being that possesses complete physical knowledge of (a subset of) the Universe. As such, there is no reason to expect it to be based on empiricism. Thought experiments are a perfectly valid means of investigating certain philosophical problems.
A “thought experiment” is an experiment which you perform in your head (often because performing the actual experiment would be impractical). The problem with zombies is that a zombie is defined as being experimentally indistinguishable from the original. There simply isn’t an experiment that could distinguish between them.
I don’t know if this is just wordplay, but I like it.
Actually the experiment need make no mention of the existence of actual p-zombies. The argument is specifically about whether a mind that knows all physical details also knows everything about qualia—whether it can be sure that all beings we postulate to possess qualia actually do possess them.
I’m not sure that is all there is to a thought experiment. Quantum Suicide is described as a thought experiment and suffers a somewhat similar problem.
While there presumably would be a branch in which the subject finds a positive result (they observe themselves surviving long past the point where the odds say they should), that is a completely subjective result. From the outside view they just see the Born probabilities; we can’t expect any empirical difference in running this experiment, but it is still a useful thought experiment because it challenges us to rigorously accept some of the less intuitive consequences of MWI.
Performing quantum suicide would—very briefly—allow you to learn whether you were still alive or about to die—which seems as though it might be some kind of result.
Searle’s “Chinese room” appears to me to be another dodgy non-experiment, that is still described as being a thought experiment—since Searle apparently doesn’t dispute that the room can actually speak Chinese. Maybe we need the concept of a fake thought experiment—to help distinguish between the science and the baloney.
Say “intuition pump” and describe reality with other symbols than a single metaphor.
I disagree. The conversation could be sensibly had, philosophical conjectures like this are important and relevant for things such as designing artificial minds. It only seems like it was never worth discussing because the pro-mystical side hasn’t given up despite their project being logically incoherent. A working model would be proof of logical consistency but it’s too much to ask for.
Is that a fair representation of Yudkowsky’s argument? As summarized, it’s a purely circular argument: We know that a superbeing would infer that humans have qualia from the fact that humans talk about having qualia, because it is extraordinarily improbable that beings would talk about having qualia if they did not actually possess qualia.
Or, more briefly: We can infer that humans have qualia from the fact that humans talk about having qualia, because it is extraordinarily improbable that beings would talk about having qualia if they did not have qualia.
I feel like you get a bit distracted with predicting responses, but don’t devote the same amount of time to getting the initial arguments right.
Chalmers, in exploring the idea of “epiphenomenal” consciousness, in fact accepts that his zombie double would write papers about consciousness for the same causal reasons as himself, yet not be conscious. If we remember that words are human constructs, this is bad, since the word “conscious” would then convey no information about the things that say “conscious”. If we would like words to be meaningful, consciousness, as defined by humans, is a thing that affects the universe.
Me thinking through your argument requires the carbon-based (loosely) probabilistic-reasoning-system who goes by the pseudonym “Minibear Rex” on the internet to predict the conclusions drawn by a non-probabilistic reasoning being. This is rather difficult to do, and rather easy to screw up somewhere in the chain of inferences. It is also not necessary to formulate the chief argument against Zombies.
I don’t need to postulate an omniscient being. All I need to do is make the observation that my fellow humans, when the topic arises, talk about their own sense of internal awareness, their own subconscious experiences of thoughts and feelings, etc. Qualia, in other words. It seems to me rather unlikely that they would do so if they did not, in fact, possess qualia, and rather likely that they would do so if they did. Therefore, this is strong Bayesian evidence against the existence of zombies.
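The likelihood reasoning in this comment can be made concrete with a toy Bayes-theorem calculation. All of the numbers below are illustrative assumptions of mine, not figures from anyone in the thread; the only point is the shape of the update:

```python
# Toy Bayes update for the "people report qualia" evidence.
# Every probability here is an illustrative assumption.

prior_qualia = 0.5            # prior that humans possess qualia
p_talk_given_qualia = 0.99    # beings with qualia very likely discuss them
p_talk_given_zombie = 1e-6    # zombies discussing qualia: "extraordinarily improbable"

# Posterior via Bayes' theorem, conditioning on the observation
# that humans do in fact talk at length about their qualia.
numerator = p_talk_given_qualia * prior_qualia
evidence = numerator + p_talk_given_zombie * (1 - prior_qualia)
posterior = numerator / evidence

print(posterior)  # extremely close to 1, but strictly below 1
```

However small `p_talk_given_zombie` is made, the posterior approaches 1 without ever reaching it, which is exactly the residual uncertainty that the article’s objection to Yudkowsky’s argument turns on.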
I argue that this isn’t proving the point that needs to be proved. If a being is even the merest iota more uncertain about the qualia status of beings than it is about their physical constitution, then it would appear that qualia are extra-physical. I believe that I have made this clear in my argument, quote:
The distinction is that we can be extremely confident (by Yudkowsky’s reasoning) that the omniscient mind will itself be certain (by my reasoning) about the existence of qualia within the volume of the Universe about which it has perfect knowledge – whereas if one is trying to prove that qualia are not extra-physical, it is insufficient to argue (as Yudkowsky did) that the omniscient mind will itself only be extremely confident about the existence of qualia within the volume of the Universe about which it has perfect knowledge.
This difference is fundamentally important to the argument, and the fact that it was either not made explicit or ignored by Yudkowsky is why his argument in the post I linked to is unsatisfactory. If we mix up our own beliefs about qualia and the beliefs of the putative being who possesses full knowledge of the physical Universe, then we are talking at cross purposes to Chalmers.
I distinguish between the (alleged) property of being “extra-physical” and the property of being “irreducible”. I also believe that these need to be distinguished if we are to think precisely about qualia.
I gave my own explanation of why qualia are not extra-physical, in the 6th and 7th paragraphs. However, according to this explanation the superintelligence obtains knowledge of qualia only by experiencing qualia in the same sense that we do, i.e. as a phenomenon that may not necessarily be reducible.
I argue that we have reason to suspect that qualia may be the only irreducible concept in our Universe. Irreducibility does not imply that the concept involves any additional uncertainty beyond uncertainty about the Universe’s physical make-up. If on the other hand we are permitted to simply presume that qualia are reducible, then the second version of Eliezer’s argument is legitimate. However, I point out that in this case a lengthy debate could have been avoided and instead Eliezer should have refuted Chalmers in three sentences.
OK, let’s set the dirty work of having to interpret others aside for a moment. Let’s also play “taboo” and not use ambiguous words like “qualia”.
Words are reducible and physical. When I say “I see a blue sky” it’s the result of a bunch of small physical actions in my brain. Quarks aren’t blue, so at some level there isn’t blue, but at a higher level, many of these not-blue things make blue. Blue is irreducible past a certain point, just like the 747. But that’s not terribly special, since the non-747 pieces are big chunks we can talk about, and the non-blue pieces are big chunks we can talk about. Is this not enough irreducibility for you? Do you think others deny this amount of irreducibility, that there is a minimum size for a blue thing, a 747, or a word made of vibrations in air, or any other pattern made of discrete smaller pieces?
Maybe try this one again, without using the word “qualia”? If the cause of the physical words is extra-physical, then there is an interface between the extra-physical and the physical, no? A place where the atoms are perturbed by magic energy from outside the physical system?
Lessdazed, I don’t believe that qualia are extra-physical as you seem to be alleging. Irreducibility and extra-physicality do not mean the same thing, as I intend them. Perhaps the comment I just posted in another reply to you clarifies my position?
Part of my essay discusses the fact that “qualia” is exactly the kind of word that cannot be tabooed. We only have synonyms, like “consciousness” and “awareness”. I proceed to suggest that this is evidence that qualia are in fact irreducible.
If you have a problem with this, then you surely have the same problem with Eliezer and Chalmers’s entire argument. I am quite sure that neither of them would be able to rationally define (i.e. reduce) “qualia” despite the fact that this is what their argument relates to.
Apart from qualia, I am entirely in agreement with the thesis of reducibility.
I’m not alleging that, my beliefs are based off of a closing of some possibilities so my argument reminds me that they are closed and not available as solutions to problems elsewhere. If I were in a sinking ship, I wouldn’t want to skitter between lifeboats telling myself: “Lifeboat A has a hole in the bottom-front! Better go to lifeboat B! Lifeboat B has a hole in the bottom-rear, better go back to lifeboat A, that doesn’t have that problem!”
“qualia” is exactly the kind of word that cannot be tabooed.
If the question about reality is “what do people mean by word ‘X’”, then word ‘X’ cannot be tabooed. So inability to be tabooed is at least a matter of context for any word. I can’t think of why else a word wouldn’t be tabooable because only in such cases (or similar) would the word be a feature of the world rather than a label used to map features of the world.
If the question about reality is about other than what people mean, then no particular label is necessary.
There is a resolution other than explaining people’s use of a label by identifying a phenomenon that the label describes well. That is to describe the confusion that explains people’s use of the label.
The core question isn’t “what is the real meaning of ‘qualia’”, the mystery that inability to answer that question represents is more abstractly a mystery as to why “qualia” is being used as it is. So “what is the real reason people use ‘qualia’ to describe their inner phenomena”? That’s a good question to answer.
It’s guaranteed to have an answer in a way the question “what are qualia?” is not.
That’s a Yudkowskian concept that might be applied for example to the question “why do I have free-will?”—instead we can ask “why do I think I have free will?”.
But if both parties to a debate were to accept that we do in fact have free will, and proceed to argue from that, then I would not be at fault for assuming the existence of free will (standing in for qualia) as a real thing—and proceeding to discuss unique problems surrounding the concept of free will.
If the existence of qualia were not a shared premise in the Yudkowsky-Chalmers debate, then it would be an entirely different debate about eliminative materialism.
Since we almost all agree that qualia are real, we can have arguments about the nature of qualia. It is then legitimate to use the fact that although we agree upon the existence of qualia, we can’t define qualia, as an argument for the special irreducible status of qualia.
If we could define qualia reductively, that would disprove my point. But I believe that even if you were to use the technique of “righting a wrong question”, it still wouldn’t enable you to achieve this. This is strange, because the technique does indeed help in defining other confusing concepts. In other words, it would delight me if you managed to use this technique to demonstrate that qualia are reducible, but I don’t expect you to be able to do so and that is part of my argument.
There are, as I see it, three solutions to the apparent problem of defining qualia:
You continue to believe that qualia are both real and reducible. When we investigate the brain further, we will obtain a reduction of qualia.
You deny the existence of qualia.
You suspect that qualia are both real and irreducible.
I lean towards 3 instead of 1, and reject 2 as ludicrous. You seem to prefer either 1 or 2.
Just to clarify, does “irreducible” in (3) also mean that qualia are therefore extra-physical?
I assume that we are all in agreement that rocks do not have qualia and that dead things do not have qualia and that living things may or may not have qualia? Humans: yes. Single cell prokaryotes: nope.
So doesn’t that leave us with two options:
1) Evolution went from single cell prokaryotes to Homo Sapiens and somewhere during this period the universe went “plop” and irreducible qualia started appearing in some moderately advanced species.
2) Qualia are real and reducible in terms of quarks like everything else in the brain. As evolution produced better brains at some point it created a brain with a minor sense of qualia. Time passed. Brains got better and more introspective. In other words: qualia evolved (or “emerged”) like our sense of smell, our eyesight and so forth.
Not unless we are arguing over definitions. Tabooing the phrase “extra-physical”, what Eliezer and Chalmers were arguing (or trying to argue) about is whether a superintelligent observer, with full knowledge of the physical state of a brain, would have the same level of certainty about the qualia that the brain experiences as it does about the physical configuration of the brain.
Actually, if they had phrased the debate in those terms it would have turned out better. I don’t think that what they were arguing about was clearly defined by either party, which is why it has been necessary (in my humble opinion) for me to “repair” Eliezer’s contribution.
So anyway, no it does not mean the same thing. I argue that qualia are not “extra-physical”, because the observer does in fact have the same level of knowledge about the qualia as it does about the physical Universe. However, this only proves that qualia supervene upon physical brain states and does not demonstrate that qualia can ever be explained in terms of quarks (rather than “psycho-physical bridging laws” or some such idea).
It might be tempting to refer to (a degree of) belief in irreducibility of qualia as “non-physical”, but for the purposes of this discussion it would confound things.
I don’t think that there’s a good reason why you didn’t describe qualia as “plopping” into existence in scenario 2 as well, or else in neither scenario. Since (with extreme likelihood) qualia supervene upon brain states whether they are irreducible or reducible, the existence of suitable brain states (whatever that condition may be) seems likely to be a continuous rather than discrete quality. “Dimmer” qualia giving way to “brighter” qualia, as it were, as more complex lifeforms evolve.
Note the similarity to Eliezer’s post on the many worlds hypothesis here.
Thanks for the clarifications.
Honestly, I don’t have a clear picture of what exactly you’re saying (“qualia supervene upon physical brain states”?) and we would probably have to taboo half the dictionary to make any progress. I get the sense you’re on some level confused or uncomfortable with the idea of pure reductionism. The only thing I can say is that what you write about this topic has a lot of surface level similarities with the things people write when they’re confused.
If each has an intelligible definition for “free will”, and they are the same, then there is agreement that it exists. If the definitions are different, then they should use different words to not become confused and think they think of the same thing from “free will”. A less good option is for one party to adopt the other’s meaning for the discussion. If each were confused about what he or she meant...that would be bad.
That doesn’t best describe what Yudkowsky took as the basis for discussion. Yudkowsky talked about “mysteriousness” and what physical process underlay consciousness, but not “qualia”.
Verbally agreeing that whatever is represented by the label “qualia” is real while each having a different meaning for that label is a recipe for disagreement, particularly if we believe that the label has only one definition, if only because we each agreed to that as well.
I hadn’t focused on this earlier, but I don’t think this is a special situation. People disagree because many or all are wrong, it happens all the time.
Tell me exactly what it is you want defined, and I’ll define it for you. ;-)
That’s not the type of thing that righting a wrong question does. If the question had an answer that fit its assumptions it wouldn’t need righting. The assumptions I’m proposing tossing are that the use of a label implies the existence of a thing that falls out once one carves reality at its joints and that mutual use of a single label logically necessitates agreement about interpretations of reality.
Yudkowsky didn’t say “qualia” in the essay, and had he, it wouldn’t have committed him to beliefs similar to Chalmers’. The answer to the question “why do people say ‘qualia’?” isn’t that it’s a feature of reality so importantly distinct from others that it needs a label; rather, people are confused, and on their map of reality the single blotch of confusion is well covered by a single word. It may come to pass that their confusion is replaced by the belief that adjacent concepts are all that is needed to explain the reality they were previously confused about, and they will expand the territory on their map marked “reductionism” or similar and be left with no landmark of reality to affix the label “qualia” to.
Alternatively, carving nature at its joints may leave experienced illusions of non-agency important enough to be labeled “the County of Qualia” on the map, in the “Country of Things Reduced to Understanding”.
Beliefs are probabilistic. I don’t think any particular undiscovered thing has the inherent property of being inevitably learned.
One thing is for sure: I don’t deny the existence of “qualia” as a label, just qualia under some people’s definitions, though perhaps not qualia under everyone’s definition.
Except in the traditional sense. And I’m all for that. ;)
Personally I’m in favor of the “call down Snatchers on your location” version, but that’s just me.