This is not a case in which you share common priors.
Why not?
Postulating uncommon priors is not to be done lightly: it imposes specific constraints on beliefs about priors. See Robin Hanson’s paper “Uncommon Priors Require Origin Disputes”.
In any case, what I want to know is how I should update my beliefs in light of Richard’s statements. Does he have information about the conceivability of zombies that I don’t, or is he just making a mistake?
If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I’ve made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O’ and have been mistaking it for O. And since the observability of O’ doesn’t have any evidentiary bearing on H, he’ll say, my observation wasn’t actually the evidence that I took it to be. That’s the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H.
In such a dispute, there is some observation O″ that (both parties can agree) you made, which is equal to (or implies) either O or O’, and the dispute is about which one of these it is the same as (or implies). But since O implies H and O’ doesn’t, the dispute reduces to the question of whether O″ implies H or not, and so you may as well discuss that directly.
In the case at hand, O is “Richard has conceived of zombies”, O’ is “Richard mistakenly believes he has conceived of zombies”, and O″ is “Richard believes he has conceived of zombies”. But in the discussion so far, Richard has been resisting attempts to switch from discussing O (the subject of dispute) to discussing O″, which obviously prevents the discussion from proceeding.
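To make that structure concrete, here is a minimal sketch (with invented numbers; none of the probabilities below have been asserted by anyone in this exchange) of what conditioning directly on O″ looks like. The dispute over whether O″ is really O or O′ is absorbed entirely into the two likelihoods P(O″|H) and P(O″|¬H), which is just the dispute over whether O″ implies H.

```python
# Minimal sketch, with invented numbers, of conditioning directly on the
# agreed observation O'' ("Richard believes he has conceived of zombies").
# The dispute over whether O'' is really O (which implies H) or O' (which
# does not) shows up entirely in the two likelihoods below.

def posterior_h(prior_h, p_obs_given_h, p_obs_given_not_h):
    """P(H | O'') by Bayes' rule."""
    p_obs = p_obs_given_h * prior_h + p_obs_given_not_h * (1 - prior_h)
    return p_obs_given_h * prior_h / p_obs

# Illustrative values only -- not figures anyone in this thread has endorsed:
prior_h = 0.5            # prior on H: zombies are conceivable
p_obs_given_h = 0.9      # Richard would very likely believe this if H were true
p_obs_given_not_h = 0.3  # he might still believe it (mistakenly) if H were false

print(posterior_h(prior_h, p_obs_given_h, p_obs_given_not_h))  # 0.75
```

The particular numbers don't matter; the point is that the posterior depends on O″ only through those two likelihoods, so that is where the argument has to happen.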
Because, again, you do not have access to the same evidence (if Richard is right about the conceivability of zombies, that is!). Robin’s paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree. To reiterate, Richard (as I understand him) believes that you and he do not share the same information.
In any case, what I want to know is how I should update my beliefs in light of Richard’s statements.
Well, you shouldn’t take his testimony of zombie conceivability as very good evidence of zombie conceivability. In that sense, you don’t have to sweat this conversation very much at all. This is less a debate about the conceivability of zombies and more a debate about the various dialectical positions of the parties involved in the conceivability debate. Do people who feel they can “robustly” conceive of p-zombies necessarily have to found their beliefs on publicly evaluable, “third-person” evidence? That seems to me the cornerstone of this particular discussion, rather than: Is the evidence for the conceivability of p-zombies any good?
In such a dispute, there is some observation O″ that (both parties can agree) you made, which is equal to (or implies) either O or O’, and the dispute is about which one of these it is the same as (or implies). But since O implies H and O’ doesn’t, the dispute reduces to the question of whether O″ implies H or not, and so you may as well discuss that directly.
Yes, that’s the “neutral” view of evidence Richard professed to deny.
The actual values of O and O’ at hand are “That one particular mental event which occurred in Richard’s mind at time t [when he was trying to conceive of zombies] was a conception of zombies,” and “That one particular mental event which occurred in Richard’s mind at time t was a conception of something other than zombies, or a non-conception.” The truth-value of the O″ you provide has little bearing on either of these.
EDIT: Here’s a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you’re experiencing red?
EDIT 2: Looking over this once again, I think I should be less glib in my first paragraph. Note that I’m denying that you share common priors, but then appealing to the difference in the evidence you each have in order to explain why this can be rational. If the difference in priors is a result of the difference in evidence, aren’t they just posteriors?
The answer I personally would give is that there are different kinds of evidence. Posteriors are the result of conditionalizing on propositional evidence, such as “Snow is white.” But not all evidence is propositional. In particular, many of our introspective beliefs are justified (when they are justified at all) by the direct access we have to our own experiences. Experiences are not propositions! You cannot conditionalize on an experience. You can conditionalize on a sentence like “I am having experience E,” of course, but the evidence for that sentence is going to come from E itself, not another proposition.
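To spell out what I mean by conditionalizing (this is just the standard Bayesian update rule, stated for concreteness): learning a proposition E takes you from P to

$$P_{\text{new}}(H) = P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

and E here has to be a proposition in the space over which P is defined. An experience as such is not that kind of object, which is why it cannot go on the right-hand side; only the sentence reporting it can.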
Robin’s paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree.
This is not correct. Even the original Aumann theorem only assumes that the Bayesians have (besides common priors) common knowledge of each other’s probability estimates—not that they share all the same information! (In fact, if they have common priors and the same information, then their posteriors are trivially equal.)
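A toy example of the difference (the coins and numbers are invented purely for illustration): a common prior plus different information can yield different posteriors, whereas a common prior plus the same information trivially cannot.

```python
# Toy example -- coins and numbers invented purely for illustration.
# A common prior plus *different* information can yield different posteriors;
# a common prior plus the *same* information trivially cannot.

from itertools import product

states = list(product("HT", repeat=2))   # two fair coin flips
prior = {s: 0.25 for s in states}        # the common prior

def p_coin1_heads(info):
    """Posterior that coin 1 is heads, given the states an agent can't rule out."""
    mass = sum(prior[s] for s in info)
    return sum(prior[s] for s in info if s[0] == "H") / mass

alice_info = [s for s in states if s[0] == "H"]   # Alice saw coin 1: heads
bob_info   = [s for s in states if s[1] == "H"]   # Bob saw coin 2: heads

print(p_coin1_heads(alice_info))  # 1.0 -- Alice is certain
print(p_coin1_heads(bob_info))    # 0.5 -- Bob learned nothing about coin 1
```

What the agreement theorem adds is that common knowledge of each other's estimates forces the numbers back together, even though the underlying information still differs.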
Robin’s paper imposes restrictions on being able to postulate uncommon priors as a way of escaping Aumann’s theorem: if you want to assume uncommon priors, certain consequences follow. (Roughly speaking, if Richard and I have differing priors, then we must also disagree about the origin of our priors.)
In any event, you do get closer to what I regard as the point here:
Experiences are not propositions! You cannot conditionalize on an experience.
Another term for “conditionalize” is “update”. Why can’t you update on an experience?
The sense I get is that you’re not wanting to apply the Bayesian model of belief to “experiences”. But if our “experiences” affect our beliefs, then I see no reason not to.
The actual values of O and O’ at hand are “That one particular mental event which occurred in Richard’s mind at time t [when he was trying to conceive of zombies] was a conception of zombies,” and “That one particular mental event which occurred in Richard’s mind at time t was a conception of something other than zombies, or a non-conception.” The truth-value of the O″ you provide has little bearing on either of these.
In these terms, O″ is simply “that one particular mental event occurred in Richard’s mind”—so again, the question is what the occurrence of that mental event implies, and we should be able to bypass the dispute about whether to classify it as O or O’ by analyzing its implications directly. (The truth-value of O″ isn’t a subject of dispute; in fact O″ is chosen that way.)
Here’s a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you’re experiencing red?
It goes down, since the tribespeople would be more likely to say that if there is no visual experience than if there is. Of course, the amount it goes down by will depend on my other information (in particular, if I know they’re congenitally blind, that significantly weakens this evidence).
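To put rough numbers on that (the probabilities here are invented for illustration only), the size of the update is governed by how much likelier the tribespeople's denial is when there is no visual experience than when there is:

```python
# Invented numbers to illustrate the size of the update.  The denial is
# evidence against visual experience only insofar as it is less likely to be
# made when visual experience exists than when it doesn't.

def p_vision_given_denial(prior, p_denial_if_vision, p_denial_if_no_vision):
    """P(visual experience exists | the tribespeople deny it), by Bayes' rule."""
    num = p_denial_if_vision * prior
    return num / (num + p_denial_if_no_vision * (1 - prior))

prior = 0.99  # I start out nearly certain that I'm experiencing red

# Not knowing they're blind, their denial would be surprising, so it bites:
print(p_vision_given_denial(prior, p_denial_if_vision=0.1, p_denial_if_no_vision=0.9))  # ~0.92

# Knowing they're congenitally blind, they'd likely deny it either way:
print(p_vision_given_denial(prior, p_denial_if_vision=0.8, p_denial_if_no_vision=0.9))  # ~0.99
```

Knowing they are congenitally blind moves the first likelihood up toward the second, so their denial carries almost no information.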