You’re still getting voted up on net, despite not explaining how, as you’ve claimed, the psychological fact of p-zombie plausibility is evidence for it (at least beyond references to long descriptions of your general beliefs).
I believe he’s trying to draw a distinction between two potential sources of evidence:
The factual claim that people believe zombies are conceivable, and
The actual private act of conceiving of zombies.
Richard is saying that his justification for his belief that p-zombies are conceivable lies in his successful conception of p-zombies. So what licenses him to believe that he’s successfully conceived of zombies after all? His answer is that he has direct access to the contents of his conception, in the same way that he has access to the contents of his perception. You don’t need to ask, “How do I know I’m really seeing blue right now, and not red?” Your justification for your belief that you’re seeing blue just is your phenomenal act of noticing a real, bluish sensation. This justification is “direct” insofar as it comes directly from the sensation, and not via some intermediate process of reasoning involving inferences (which can be valid or invalid) or premises (which can be true or false). Similarly, he thinks his justification for his belief that p-zombies are conceivable just is his p-zombie-ish conception.
A couple of things to note. One is that this evidence is wholly private. You don’t have direct access to his conceptions, just as you don’t have direct access to his perceptions. The only evidence Richard can give you is testimony. Moreover, he agrees that testimony of this sort is extremely weak evidence. But it’s not the evidence he claims that his belief rests on. The evidence that Richard appeals to can be evidence-for-Richard only.
Another thing is that the direct evidence he appeals to is not “neutral.” If p-zombies really are inconceivable, then he’s in fact not conceiving of p-zombies at all, and so his conception, whatever it was, was never evidence for the conceivability of p-zombies in the first place (in just the same way that seeing red isn’t evidence that you’re seeing blue). So there’s no easy way to set aside the question of whether Richard’s conception is evidence-for-him from the question of whether p-zombies are in general conceivable. The worthiness of Richard’s source of evidence is inextricable from the actual truth or falsehood of the claim in contention, viz., that p-zombies are conceivable. But he thinks this isn’t a problem.
If you want to move ahead in the discussion, then the following are your options:
You simply deny that Richard is in fact conceiving of p-zombies. This isn’t illegitimate, but it’s going to be a conversation-stopper, since he’ll insist that he does have such conceptions but that they’re private.
You accept that Richard can successfully conceive of p-zombies, but that this isn’t good evidence for their possibility (or that the very notion of “possibility” in this context is far too problematic to be useful).
You deny that we have direct access to anything, or that access to conceptions in particular is direct, or that one can ever have private knowledge. If you go this route, you have to be careful not to set yourself up for easy reductio. Specifically, you’d better not be led to deny the rationality of believing that you’re seeing blue when, e.g., you highlight this text.
I hope this helps clear things up. It pains me when people interpret their own confusion as evidence of some deep flaw in academic philosophy.
I believe he’s trying to draw a distinction between two potential sources of evidence:
The factual claim that people believe zombies are conceivable, and
The actual private act of conceiving of zombies.
I was very deliberately ignoring this distinction: “people” includes Richard, even for Richard. The point is that Richard cannot simply trust his intuition; he has to weigh his apparent successful conception of zombies against the other evidence, such as the scientific success of reductionism, the findings from cognitive science that show how untrustworthy our intuitions are, and in particular specific arguments showing how we might fool ourselves into thinking zombies are conceivable.
The evidence that Richard appeals to can be evidence-for-Richard only
If p-zombies really are inconceivable, then he’s in fact not conceiving of p-zombies at all, and so his conception, whatever it was, was never evidence for the conceivability of p-zombies in the first place...The worthiness of Richard’s source of evidence is inextricable from the actual truth or falsehood of the claim in contention
This is a confusion of map and territory. It is possible to be rationally uncertain about logical truths; and probability estimates (which include the extent to which a datum is evidence for a proposition) are determined by the information available to the agent, not the truth or falsehood of the proposition (otherwise, the only possible probability estimates would be 1 and 0). It may be rational to assign a probability of 75% to the truth of the Riemann Hypothesis given the information we currently have, even if the Riemann Hypothesis turns out to be false (we may have misleading information).
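For what it’s worth, the point that a rational credence is fixed by the agent’s information rather than by the truth-value of the proposition can be put in a toy Bayesian calculation (the numbers below are hypothetical, chosen only to reproduce the 75% figure):

```python
# Toy Bayes update: the posterior is fixed by the agent's prior and
# likelihoods (i.e., the information available), not by the actual
# truth-value of the hypothesis. All numbers are hypothetical.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' rule."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Evidence three times as likely under H as under ~H. Even if H is in
# fact false (the evidence was misleading), this is the rational
# credence given the information in hand.
print(posterior(prior_h=0.5, p_e_given_h=0.75, p_e_given_not_h=0.25))  # 0.75
```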
If you want to move ahead in the discussion, then the following are your options:
My position could be described by any of those three options—in other words, they seem to differ only in the interpretation of terms like “conceivable”, and don’t properly hug the query.
1. You simply deny that Richard is in fact conceiving of p-zombies.
I must do so to the extent I believe zombies are in fact inconceivable. But I don’t see why it should be a conversation-stopper: if Richard is right and I am wrong, Richard should be able to offer evidence that he is unusually capable of determining whether his apparent conception is in fact successful (if he can’t, then he should be doubting his own successful conception himself).
2. You accept that Richard can successfully conceive of p-zombies, but that this isn’t good evidence for their possibility
I can assent to this if “conceive” is interpreted in such a way that it is possible to conceive of something that is logically impossible (i.e. if it is granted that I can conceive of Fermat’s Last Theorem being false).
3. You deny that we have direct access to anything, or that access to conceptions in particular is direct, or that one can ever have private knowledge.
“Private knowledge” in this sense is ruled out by Aumann, as far as I can tell. As for “direct access”, well, that was Eliezer’s original point, which I agree with: all knowledge is subject to some uncertainty due to the flaws in human psychology, and in particular all knowledge claims are subject to being undermined by arguments showing how the brain could generate them independently of the truth of the proposition in question. (In other words, the “genetic fallacy” is no fallacy, at least not necessarily.)
Specifically, you’d better not be led to deny the rationality of believing that you’re seeing blue when, e.g., you highlight this text.
I was very deliberately ignoring this distinction: “people” includes Richard, even for Richard. The point is that Richard cannot simply trust his intuition; he has to weigh his apparent successful conception of zombies against the other evidence, such as the scientific success of reductionism, the findings from cognitive science that show how untrustworthy our intuitions are, and in particular specific arguments showing how we might fool ourselves into thinking zombies are conceivable.
I don’t think Richard said anything to dispute this. He never said that his direct access to the conceivability of zombies renders his justification indefeasible.
This would appear to violate Aumann’s agreement theorem.
“Private knowledge” in this sense is ruled out by Aumann, as far as I can tell.
This is not a case in which you share common priors, so the theorem doesn’t apply. You don’t have, and in fact can never have, the information Richard (thinks he) has. Aumann’s theorem does not imply that everyone is capable of accessing the same evidence.
This is a confusion of map and territory. It is possible to be rationally uncertain about logical truths; and probability estimates (which include the extent to which a datum is evidence for a proposition) are determined by the information available to the agent, not the truth or falsehood of the proposition (otherwise, the only possible probability estimates would be 1 and 0). It may be rational to assign a probability of 75% to the truth of the Riemann Hypothesis given the information we currently have, even if the Riemann Hypothesis turns out to be false (we may have misleading information).
That’s certainly true, but I can’t see its relevance to what I said. In part because of some of the very reasons you name here, we can be mistaken about whether an observation O confirms a hypothesis H or not, hence whether an observation is evidence for a hypothesis or not. If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I’ve made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O’ and have been mistaking it for O. And since the observability of O’ doesn’t have any evidentiary bearing on H, he’ll say, my observation wasn’t actually the evidence that I took it to be. That’s the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H. [Edited this sentence to make it clearer.]
But I don’t see why it should be a conversation-stopper: if Richard is right and I am wrong, Richard should be able to offer evidence that he is unusually capable of determining whether his apparent conception is in fact successful (if he can’t, then he should be doubting his own successful conception himself).
I don’t really see what this could mean.
As for “direct access”, well, that was Eliezer’s original point, which I agree with: all knowledge is subject to some uncertainty due to the flaws in human psychology, and in particular all knowledge claims are subject to being undermined by arguments showing how the brain could generate them independently of the truth of the proposition in question. (In other words, the “genetic fallacy” is no fallacy, at least not necessarily.)
Richard didn’t state that his evidence for the conceivability of zombies is absolutely incontrovertible. He just said he had direct access to it, i.e., he has extremely strong evidence for it that doesn’t follow from some intermediary inference.
This is not a case in which you share common priors
Why not?
Postulating uncommon priors is not to be done lightly: it imposes specific constraints on beliefs about priors. See Robin Hanson’s paper “Uncommon Priors Require Origin Disputes”.
In any case, what I want to know is how I should update my beliefs in light of Richard’s statements. Does he have information about the conceivability of zombies that I don’t, or is he just making a mistake?
If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I’ve made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O’ and have been mistaking it for O. And since the observability of O’ doesn’t have any evidentiary bearing on H, he’ll say, my observation wasn’t actually the evidence that I took it to be. That’s the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H.
In such a dispute, there is some observation O″ that (both parties can agree) you made, which is equal to (or implies) either O or O’, and the dispute is about which one of these it is the same as (or implies). But since O implies H and O’ doesn’t, the dispute reduces to the question of whether O″ implies H or not, and so you may as well discuss that directly.
In the case at hand, O is “Richard has conceived of zombies”, O’ is “Richard mistakenly believes he has conceived of zombies”, and O″ is “Richard believes he has conceived of zombies”. But in the discussion so far, Richard has been resisting attempts to switch from discussing O (the subject of dispute) to discussing O″, which obviously prevents the discussion from proceeding.
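To sketch the proposed reduction numerically (every number below is a hypothetical stand-in): once both parties condition on O″ directly, the classification dispute becomes a dispute about likelihood ratios.

```python
# Sketch of the reduction: both parties grant O'' ("Richard believes
# he has conceived of zombies"). Instead of disputing whether O'' is
# "really" O or O', each side states the likelihoods it assigns to O''
# under H and under ~H and updates. Numbers are hypothetical.

def update(prior_h, p_obs_given_h, p_obs_given_not_h):
    joint_h = prior_h * p_obs_given_h
    joint_not_h = (1 - prior_h) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

# A skeptic who thinks people readily come to believe they have
# conceived of zombies whether or not zombies are conceivable assigns
# nearly equal likelihoods, so O'' moves the prior only slightly:
print(update(prior_h=0.2, p_obs_given_h=0.9, p_obs_given_not_h=0.8))
```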
Because, again, you do not have access to the same evidence (if Richard is right about the conceivability of zombies, that is!). Robin’s paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree. To reiterate, Richard (as I understand him) believes that you and he do not share the same information.
In any case, what I want to know is how I should update my beliefs in light of Richard’s statements.
Well, you shouldn’t take his testimony of zombie conceivability as very good evidence of zombie conceivability. In that sense, you don’t have to sweat this conversation very much at all. This is less a debate about the conceivability of zombies and more a debate about the various dialectical positions of the parties involved in the conceivability debate. Do people who feel they can “robustly” conceive of p-zombies necessarily have to found their beliefs on publicly evaluable, “third-person” evidence? That seems to me the cornerstone of this particular discussion, rather than: Is the evidence for the conceivability of p-zombies any good?
In such a dispute, there is some observation O″ that (both parties can agree) you made, which is equal to (or implies) either O or O’, and the dispute is about which one of these it is the same as (or implies). But since O implies H and O’ doesn’t, the dispute reduces to the question of whether O″ implies H or not, and so you may as well discuss that directly.
Yes, that’s the “neutral” view of evidence Richard professed to deny.
The actual values of O and O’ at hand are “That one particular mental event which occurred in Richard’s mind at time t [when he was trying to conceive of zombies] was a conception of zombies,” and “That one particular mental event which occurred in Richard’s mind at time t was a conception of something other than zombies, or a non-conception.” The truth-value of the O″ you provide has little bearing on either of these.
EDIT: Here’s a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you’re experiencing red?
EDIT 2: Looking over this once again, I think I should be less glib in my first paragraph. Note that I’m denying that you share common priors, but then appealing to different evidence that you have to explain why this can be rational. If the difference in priors is a result of the difference in evidence, aren’t they just posteriors?
The answer I personally would give is that there are different kinds of evidence. Posteriors are the result of conditionalizing on propositional evidence, such as “Snow is white.” But not all evidence is propositional. In particular, many of our introspective beliefs are justified (when they are justified at all) by the direct access we have to our own experiences. Experiences are not propositions! You cannot conditionalize on an experience. You can conditionalize on a sentence like “I am having experience E,” of course, but the evidence for that sentence is going to come from E itself, not another proposition.
Robin’s paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree.
This is not correct. Even the original Aumann theorem only assumes that the Bayesians have (besides common priors) common knowledge of each other’s probability estimates—not that they share all the same information! (In fact, if they have common priors and the same information, then their posteriors are trivially equal.)
Robin’s paper imposes restrictions on being able to postulate uncommon priors as a way of escaping Aumann’s theorem: if you want to assume uncommon priors, certain consequences follow. (Roughly speaking, if Richard and I have differing priors, then we must also disagree about the origin of our priors.)
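A minimal toy model, with a made-up setup, illustrates the distinction: a common prior plus different private information yields different posteriors, with no conflict with Aumann.

```python
# Common prior, different information: four equally likely states,
# with hypothesis H true in states 0 and 1. Setup is hypothetical.
H_STATES = {0, 1}

def posterior(info_set):
    """P(H | the state lies in info_set), under the uniform common prior."""
    return len(H_STATES & info_set) / len(info_set)

# Suppose the true state is 0. Agent A's private signal narrows the
# state to {0, 2}; agent B's narrows it to {0, 1, 2}. Same prior,
# different evidence, different (perfectly rational) posteriors:
print(posterior({0, 2}))     # 0.5
print(posterior({0, 1, 2}))  # 0.666...
```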
In any event, you do get closer to what I regard as the point here:
Experiences are not propositions! You cannot conditionalize on an experience.
Another term for “conditionalize” is “update”. Why can’t you update on an experience?
The sense I get is that you’re not wanting to apply the Bayesian model of belief to “experiences”. But if our “experiences” affect our beliefs, then I see no reason not to.
The actual values of O and O’ at hand are “That one particular mental event which occurred in Richard’s mind at time t [when he was trying to conceive of zombies] was a conception of zombies,” and “That one particular mental event which occurred in Richard’s mind at time t was a conception of something other than zombies, or a non-conception.” The truth-value of the O″ you provide has little bearing on either of these.
In these terms, O″ is simply “that one particular mental event occurred in Richard’s mind”—so again, the question is what the occurrence of that mental event implies, and we should be able to bypass the dispute about whether to classify it as O or O’ by analyzing its implications directly. (The truth-value of O″ isn’t a subject of dispute; in fact O″ is chosen that way.)
Here’s a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you’re experiencing red?
It goes down, since the tribespeople would be more likely to say that if there is no visual experience than if there is. Of course, the amount it goes down by will depend on my other information (in particular, if I know they’re congenitally blind, that significantly weakens this evidence).
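With some hypothetical numbers, the update looks like this: the denial lowers the credence in both cases, but knowing the informants are congenitally blind flattens the likelihood ratio, so the credence barely moves.

```python
# How much the tribespeople's denial should lower credence in "I am
# experiencing red" depends on how likely that denial is under each
# hypothesis. All numbers are hypothetical.

def posterior(prior, p_denial_if_experience, p_denial_if_none):
    num = prior * p_denial_if_experience
    return num / (num + (1 - prior) * p_denial_if_none)

# If the informants were sighted, their denial would be strong evidence:
print(posterior(0.99, 0.2, 0.9))
# Knowing they are congenitally blind, their denial is almost equally
# likely either way, so the credence barely moves:
print(posterior(0.99, 0.85, 0.9))
```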
I would categorize my position as somewhere between 1 and 2, depending on what you mean by “conceiving”. I think he has a name attached to some properties associated with p-zombies and a world in which they exist, but this doesn’t mean a coherent model of such a world is possible, nor that he has one. That is, I believe that following out the necessary implications will eventually lead to contradiction. My evidence for this is quite weak, of course.
I can certainly talk about an even integer larger than two that is not expressible as the sum of two primes. But that doesn’t mean it’s logically possible. It might be, or it might not. Does a name without a full-fledged model count as conceiving, or not? Either way, it doesn’t appear to be significant evidence for possibility.
I think they were stuck on the task of getting him to explain what that evidence was (and what evidence the access he does have gives him), which in turn was complicated by his insistence that he wasn’t referring to a psychological fact of ease of conceivability.
If it helps (which I don’t expect it does), I’ve been pursuing the trail of this (and related things) here.
Thus far his response seems to be that certain beliefs don’t require evidence (or, at least, don’t require “independent justification,” which may not be the same thing), and that his beliefs about zombies “cohere well” with his other beliefs (though I’m not sure which beliefs they cohere well with, or whether they cohere better with them than their negations do), and that there’s no reason to believe it’s false (though it’s not clear what role reasons for belief play in his decision-making in the first place).
So, the Bayesian translation of his position would seem to be that he has a high prior on zombies being conceivable. But of course, that in turn translates to “zombies are conceivable for reasons I’m not being explicit about”. Which is, naturally, the point: I’d like to know what he thinks he knows that I don’t.
Regarding coherence, and reasons to believe it’s false: the historical success of reductionism is a very good reason to believe it’s false, it seems to me. Despite Richard’s protestations, it really does appear to me that this is a case of undue reluctance on the part of philosophers to update their intuitions, or at least to let them be outweighed by something else.
Good point. I think my biggest frustration is that I can’t tell what point Richard Chappell is actually making so I can know whether I agree with it. It’s one thing to make a bad argument; it’s quite another to have a devastating argument that you keep secret.
You would probably have had more opportunity to draw it out of him if it weren’t for the karma system discouraging him from posting further on the topic. Remember that next time you’re tallying the positives and negatives of the karma system.
I don’t follow: he’s getting positive net karma from this discussion, just not as much as other posters. Very few of his comments, if any, actually went negative. In what sense is the karma system discouraging him?
Yes, slightly positive. Whether something encourages or discourages a person is a fact, not about the thing considered in itself, but about its effect on the person. The fact that the karma is slightly net positive is a fact about the thing considered in itself. The fact that he himself wrote:
But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.
tells us something about its effect on the person.
Yes, he’s taking that as evidence that his posts are not valued. And indeed, like most posts that don’t (as komponisto and I noted) clearly articulate what their argument is, his posts aren’t valued (relative to others in the discussion). And he is correctly reading the evidence.
I was interpreting the concerns about “low karma being discouraging” as saying that if your karma goes negative, you actually get posting restrictions. But that’s not happening here; it’s just that Richard Chappell is being informed that his posts aren’t as valued as the others on this topic. Still positive value, mind you—just not as high as others.
In the absence of a karma system, he would either be less informed about his unhelpfulness in articulating his position, or be informed through other means. I don’t understand what your complaint is.
Yes, people who cannot articulate their position rigorously are going to have their feelings hurt at some level when people aren’t satisfied with their explanations. What does that have to do with the merits of the karma system?
Yes, people who cannot articulate their position rigorously are going to have their feelings hurt at some level when people aren’t satisfied with their explanations
You are speculating about possible reasons that people might have had for failing to award karma points.
What does that have to do with the merits of the karma system?
The position of your sentence implies that “that” refers to your speculation about the reasons that people might have had for withholding karma points. But my statement concerning the merits of the karma system had not referred to that speculation. Here is my statement again:
You would probably have had more opportunity to draw it out of him if it weren’t for the karma system discouraging him from posting further on the topic.
I am pointing out that, had he not been discouraged as early as he was in the exchange, you would probably have had more opportunity to draw him out. Do you dispute this? And then I wrote:
Remember that next time you’re tallying the positives and negatives of the karma system.
I have left it up to you to decide whether your loss of this opportunity is on the whole a positive or a negative.
You are speculating about possible reasons that people might have had for failing to award karma points.
Kind of. I was drawing on my observations about how the karma system is used. I’ve generally noticed (as have others) that people with outlier views do get modded up very highly, so long as they articulate their position clearly. For example: Mitchell Porter on QM, pjeby on PCT, lukeprog on certain matters of mainstream philosophy, Alicorn on deontology and (some) feminism, byrnema on theism, XiXiDu on LW groupthink.
Given that history, I felt safe in chalking up his “insufficiently” high karma to inscrutability rather than “He’s deviating from the party line—get him!” And you don’t get to ignore that factor (of controversial, well-articulated positions being voted up) by saying you “weren’t referring to that speculation”.
I am pointing out that had he not been discouraged as early as he was in the exchange, then you would probably have had more opportunity to draw him out. Do you dispute this?
My response is that, to the extent that convoluted, error-obscuring posting is discouraged, I’m perfectly fine with such discouragement, and I don’t want to change the karma system to be more favoring of that kind of posting.
If Richard couldn’t communicate his insight about “p-zombies being so easy to conceive of” on the first three tries, we’re probably not missing out on much by his being discouraged from posting the fifty-third.
My most recent comment directed toward him was not saying, “No! Please don’t leave us! I love your deep insights!” Rather, it was saying, “Hold on—there’s an easy way to dig yourself out of this hole, as there has been the whole time. Just tell us why [...].”
Moreover, to the extent that the karma system doesn’t communicate to him what it did, that just means we’d have to do it another way, or fail to communicate it at all, neither of which is particularly appealing to me.
You’re still getting voted up on net, despite not explaining how, as you’ve claimed, the psychological fact of p-zombie plausibility is evidence for it (at least beyond references to long descriptions of your general beliefs).
Actually he seems to have denied this here, so at this point I’m stuck wondering what the evidence for zombie-conceivability is.
“Private knowledge” in this sense is ruled out by Aumann, as far as I can tell. As for “direct access”, well, that was Eliezer’s original point, which I agree with: all knowledge is subject to some uncertainty due to the flaws in human psychology, and in particular all knowledge claims are subject to being undermined by arguments showing how the brain could generate them independently of the truth of the proposition in question. (In other words, the “genetic fallacy” is no fallacy, at least not necessarily.)
I think it’s overwhelmingly likely that I’m seeing blue, but I could turn out to be mistaken.
I don’t think Richard said anything to dispute this. He never said that his direct access to the conceivability of zombies renders his justification indefeasible.
This is not a case in which you share common priors, so the theorem doesn’t apply. You don’t have, and in fact can never have, the information Richard (thinks he) has. Aumann’s theorem does not imply that everyone is capable of accessing the same evidence.
That’s certainly true, but I can’t see its relevance to what I said. In part because of some of the very reasons you name here, we can be mistaken about whether an observation O confirms a hypothesis H or not, hence whether an observation is evidence for a hypothesis or not. If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I’ve made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O’ and have been mistaking it for O. And since the observability of O’ doesn’t have any evidentiary bearing on H, he’ll say, my observation wasn’t actually the evidence that I took it to be. That’s the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H. [Edited this sentence to make it clearer.]
I don’t really see what this could mean.
Richard didn’t state that his evidence for the conceivability of zombies is absolutely incontrovertible. He just said he had direct access to it, i.e., he has extremely strong evidence for it that doesn’t follow from some intermediary inference.
Why not?
Postulating uncommon priors is not to be done lightly: it imposes specific constraints on beliefs about priors. See Robin Hanson’s paper “Uncommon Priors Require Origin Disputes”.
In any case, what I want to know is how I should update my beliefs in light of Richard’s statements. Does he have information about the conceivability of zombies that I don’t, or is he just making a mistake?
In such a dispute, there is some observation O″ that (both parties can agree) you made, which is equal to (or implies) either O or O’, and the dispute is about which one of these it is the same as (or implies). But since O implies H and O’ doesn’t, the dispute reduces to the question of whether O″ implies H or not, and so you may as well discuss that directly.
In the case at hand, O is “Richard has conceived of zombies”, O’ is “Richard mistakenly believes he has conceived of zombies”, and O″ is “Richard believes he has conceived of zombies”. But in the discussion so far, Richard has been resisting attempts to switch from discussing O (the subject of dispute) to discussing O″, which obviously prevents the discussion from proceeding.
Because, again, you do not have access to the same evidence (if Richard is right about the conceivability of zombies, that is!). Robin’s paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree. To reiterate, Richard (as I understand him) believes that you and he do not share the same information.
Well, you shouldn’t take his testimony of zombie conceivability as very good evidence of zombie conceivability. In that sense, you don’t have to sweat this conversation very much at all. This is less a debate about the conceivability of zombies and more a debate about the various dialectical positions of the parties involved in the conceivability debate. Do people who feel they can “robustly” conceive of p-zombies necessarily have to found their beliefs on publicly evaluable, “third-person” evidence? That seems to me the cornerstone of this particular discussion, rather than: Is the evidence for the conceivability of p-zombies any good?
Yes, that’s the “neutral” view of evidence Richard professed to deny.
The actual values of O and O’ at hand are “That one particular mental event which occurred in Richard’s mind at time t [when he was trying to conceive of zombies] was a conception of zombies,” and “That one particular mental event which occurred in Richard’s mind at time t was a conception of something other than zombies, or a non-conception.” The truth-value of the O″ you provide has little bearing on either of these.
EDIT: Here’s a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you’re experiencing red?
EDIT 2: Looking over this once again, I think I should be less glib in my first paragraph. Note that I’m denying that you share common priors, but then appealing to different evidence that you have to explain why this can be rational. If the difference in priors is a result of the difference in evidence, aren’t they just posteriors?
The answer I personally would give is that there are different kinds of evidence. Posteriors are the result of conditionalizing on propositional evidence, such as “Snow is white.” But not all evidence is propositional. In particular, many of our introspective beliefs are justified (when they are justified at all) by the direct access we have to our own experiences. Experiences are not propositions! You cannot conditionalize on an experience. You can conditionalize on a sentence like “I am having experience E,” of course, but the evidence for that sentence is going to come from E itself, not another proposition.
This is not correct. Even the original Aumann theorem only assumes that the Bayesians have (besides common priors) common knowledge of each other’s probability estimates—not that they share all the same information! (In fact, if they have common priors and the same information, then their posteriors are trivially equal.)
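For reference, here is a rough statement of the theorem (my own paraphrase and notation, not a quotation from Aumann's paper):

```latex
% Aumann (1976), ``Agreeing to Disagree'' -- rough paraphrase.
% Agents 1 and 2 share a common prior $P$ but receive different
% information, modeled as partitions $\Pi_1, \Pi_2$ of the state space.
% Their posteriors for an event $A$ at state $\omega$ are
%   $q_i = P(A \mid \Pi_i(\omega))$.
% Theorem: if it is common knowledge at $\omega$ that $q_1 = a$ and
% $q_2 = b$, then $a = b$ -- with no assumption that $\Pi_1 = \Pi_2$.
```

Note that the hypothesis is common knowledge of the *posteriors*, not shared information; different partitions (different evidence) are entirely compatible with the theorem's setup.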
Robin’s paper imposes restrictions on being able to postulate uncommon priors as a way of escaping Aumann’s theorem: if you want to assume uncommon priors, certain consequences follow. (Roughly speaking, if Richard and I have differing priors, then we must also disagree about the origin of our priors.)
In any event, you do get closer to what I regard as the point here:
Another term for “conditionalize” is “update”. Why can’t you update on an experience?
The sense I get is that you’re not wanting to apply the Bayesian model of belief to “experiences”. But if our “experiences” affect our beliefs, then I see no reason not to.
In these terms, O″ is simply “that one particular mental event occurred in Richard’s mind”—so again, the question is what the occurrence of that mental event implies, and we should be able to bypass the dispute about whether to classify it as O or O’ by analyzing its implications directly. (The truth-value of O″ isn’t a subject of dispute; in fact O″ is chosen that way.)
It goes down, since the tribespeople would be more likely to say that if there is no visual experience than if there is. Of course, the amount it goes down by will depend on my other information (in particular, if I know they’re congenitally blind, that significantly weakens this evidence).
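To put illustrative numbers on that (the likelihoods below are my own inventions, not anything from the thought experiment): the tribespeople's denial lowers the probability by the usual likelihood-ratio calculation, and knowing they are congenitally blind pushes the two likelihoods closer together, shrinking the update.

```python
# Illustrative numbers only: how much should the tribespeople's denial
# lower P(visual experience exists)?

def update(prior, p_denial_given_h, p_denial_given_not_h):
    """Posterior for H after observing the denial, via Bayes' rule."""
    num = p_denial_given_h * prior
    return num / (num + p_denial_given_not_h * (1 - prior))

# If their testimony were well-informed, denial would be strong evidence:
naive = update(0.99, p_denial_given_h=0.1, p_denial_given_not_h=0.9)

# Knowing they're congenitally blind, denial is nearly as likely whether
# or not visual experience exists, so the update is small:
informed = update(0.99, p_denial_given_h=0.8, p_denial_given_not_h=0.9)

print(round(naive, 3), round(informed, 3))
```

In both cases the probability goes down (the likelihood ratio is below 1), but the background information about their blindness makes the drop far smaller.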
I would categorize my position as somewhere between 1 and 2, depending on what you mean by “conceiving”. I think he has a name attached to some properties associated with p-zombies and a world in which they exist, but this doesn’t mean a coherent model of such a world is possible, nor that he has one. That is, I believe that following out the necessary implications will eventually lead to contradiction. My evidence for this is quite weak, of course.
I can certainly talk about an even integer larger than two that is not expressible as the sum of two primes. But that doesn’t mean it’s logically possible. It might be, or it might not. Does a name without a full-fledged model count as conceiving, or not? Either way, it doesn’t appear to be significant evidence for logical possibility.
I think the critics of Richard Chappell here are taking route 2 in your categorization.
komponisto and TheOtherDave appear to have been taking route 3 (challenging Richard’s purported access to evidence for zombie conceivability).
I think they were stuck on the task of getting him to explain what that evidence was (and what evidence the access he does have gives him), which in turn was complicated by his insistence that he wasn’t referring to a psychological fact of ease of conceivability.
If it helps (which I don’t expect it does), I’ve been pursuing the trail of this (and related things) here.
Thus far his response seems to be that certain beliefs don’t require evidence (or, at least, don’t require “independent justification,” which may not be the same thing), and that his beliefs about zombies “cohere well” with his other beliefs (though I’m not sure which beliefs they cohere well with, or whether they cohere better with them than their negation does), and that there’s no reason to believe it’s false (though it’s not clear what role reasons for belief play in his decision-making in the first place).
So, the Bayesian translation of his position would seem to be that he has a high prior on zombies being conceivable. But of course, that in turn translates to “zombies are conceivable for reasons I’m not being explicit about”. Which is, naturally, the point: I’d like to know what he thinks he knows that I don’t.
Regarding coherence, and reasons to believe it’s false: the historical success of reductionism is a very good reason to believe it’s false, it seems to me. Despite Richard’s protestations, it really does appear to me that this is a case of undue reluctance on the part of philosophers to update their intuitions, or at least to let them be outweighed by something else.
Good point. I think my biggest frustration is that I can’t tell what point Richard Chappell is actually making so I can know whether I agree with it. It’s one thing to make a bad argument; it’s quite another to have a devastating argument that you keep secret.
You would probably have had more opportunity to draw it out of him if it weren’t for the karma system discouraging him from posting further on the topic. Remember that next time you’re tallying the positives and negatives of the karma system.
I don’t follow: he’s getting positive net karma from this discussion, just not as much as other posters. Very few of his comments, if any, actually went negative. In what sense is the karma system discouraging him?
Yes, slightly positive. Whether something encourages or discourages a person is a fact, not about the thing considered in itself, but about its effect on the person. The fact that the karma is slightly net positive is a fact about the thing considered in itself. The fact that he himself wrote:
tells us something about its effect on the person.
Yes, he’s taking that as evidence that his posts are not valued. And indeed, like most posts that don’t (as komponisto and I noted) clearly articulate what their argument is, his posts aren’t valued (relative to others in the discussion). And he is correctly reading the evidence.
I was interpreting the concerns about “low karma being discouraging” as saying that if your karma goes negative, you actually get posting restrictions. But that’s not happening here; it’s just that Richard Chappell is being informed that his posts aren’t as valued as the others on this topic. Still positive value, mind you—just not as high as others.
In the absence of a karma system, he would either be less informed about his unhelpfulness in articulating his position, or be informed through other means. I don’t understand what your complaint is.
Yes, people who cannot articulate their position rigorously are going to have their feelings hurt at some level when people aren’t satisfied with their explanations. What does that have to do with the merits of the karma system?
You are speculating about possible reasons that people might have had for failing to award karma points.
The position of your sentence implies that “that” refers to your speculation about the reasons that people might have had for withholding karma points. But my statement concerning the merits of the karma system had not referred to that speculation. Here is my statement again:
I am pointing out that had he not been discouraged as early as he was in the exchange, then you would probably have had more opportunity to draw him out. Do you dispute this? And then I wrote:
I have left it up to you to decide whether your loss of this opportunity is on the whole a positive or a negative.
Kind of. I was drawing on my observations about how the karma system is used. I’ve generally noticed (as have others) that people with outlier views do get modded up very highly, so long as they articulate their position clearly. For example: Mitchell Porter on QM, pjeby on PCT, lukeprog on certain matters of mainstream philosophy, Alicorn on deontology and (some) feminism, byrnema on theism, XiXiDu on LW groupthink.
Given that history, I felt safe in chalking up his “insufficiently” high karma to inscrutability rather than “He’s deviating from the party line—get him!” And you don’t get to ignore that factor (of controversial, well-articulated positions being voted up) by saying you “weren’t referring to that speculation”.
My response is that, to the extent that convoluted, error-obscuring posting is discouraged, I’m perfectly fine with such discouragement, and I don’t want to change the karma system to be more favoring of that kind of posting.
If Richard couldn’t communicate his insight about “p-zombies being so easy to conceive of” on the first three tries, we’re probably not missing out on much by his being discouraged from posting the fifty-third.
My most recent comment directed toward him was not saying, “No! Please don’t leave us! I love your deep insights!” Rather, it was saying, “Hold on—there’s an easy way to dig yourself out of this hole, as there has been the whole time. Just tell us why [...].”
Moreover, to the extent that the karma system doesn’t communicate to him what it did, that just means we’d have to do it another way, or fail to communicate it at all, neither of which is particularly appealing to me.