So sharing evidence the normal way shouldn’t be necessary. Asking someone “what’s the evidence for that?” implicitly says, “I don’t trust your rationality enough to take your word for it.”
I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:
There are some papers that describe other ways to achieve agreement, such as iterative exchange of posterior probabilities. But in such methods, the agents aren’t just moving closer to each other’s beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.
Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent’s information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you’re making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.
In other words, when I say “what’s the evidence for that?”, it’s not that I don’t trust your rationality (although of course I don’t trust your rationality either), but I just can’t deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.
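To make that concrete, here is a minimal toy sketch of such an iterated exchange of posteriors (the worlds, the event A, and the partitions below are all invented for illustration, and this is only a sketch of the general idea, not any particular paper’s construction): each agent announces her posterior given her own cell and everything the announcements so far have ruled out, and each announcement lets both sides discard the worlds in which it would not have been made.

```python
from fractions import Fraction

WORLDS = set(range(1, 10))                 # nine equally likely worlds (uniform common prior)
A = {1, 5, 6, 7}                           # the event being argued about
ALICE = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]  # Alice's information partition I
BOB = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]    # Bob's information partition J
TRUE_WORLD = 5                             # the world that actually obtains

def cell(partition, w):
    """The element of the partition containing world w, i.e. I(w) or J(w)."""
    return next(c for c in partition if w in c)

def posterior(event, known):
    """P(event | known) under the uniform common prior."""
    return Fraction(len(event & known), len(known))

def exchange(true_world, max_rounds=10):
    public = set(WORLDS)                   # worlds not yet ruled out by anyone's announcements
    for r in range(max_rounds):
        # Alice announces her posterior given her cell and the public information.
        qa = posterior(A, cell(ALICE, true_world) & public)
        # Everyone rules out worlds in which Alice would have announced something else.
        public = {w for w in public if posterior(A, cell(ALICE, w) & public) == qa}
        # Bob does the same, using the already-refined public information.
        qb = posterior(A, cell(BOB, true_world) & public)
        public = {w for w in public if posterior(A, cell(BOB, w) & public) == qb}
        print(f"round {r}: Alice says {qa}, Bob says {qb}")
        if qa == qb:
            return qa

exchange(TRUE_WORLD)   # round 0: Alice says 2/3, Bob says 1; round 1: both say 1
```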
when I say “what’s the evidence for that?”, it’s not that I don’t trust your rationality (although of course I don’t trust your rationality either), but I just can’t deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.
Yes. There are reasons to ask for evidence that have nothing to do with disrespect.
Even assuming that all parties are perfectly rational and that any disagreement must stem from differing information, it is not always obvious which party has better relevant information. Sharing evidence can clarify whether you know something that I don’t, or vice versa.
Information is a good thing; it refines one’s model of the world. Even if you are correct and I am wrong, asking for evidence has the potential to add your information to my model of the world. This is preferable to just taking your word for the conclusion, because that information may well be relevant to more decisions than the topic at hand.
There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).
It seems like doubting each other’s rationality is a perfectly fine explanation. I don’t think most people around here are perfectly rational, nor that they think I’m perfectly rational, and definitely not that they all think that I think they are perfectly rational. So I doubt that they’ve updated enough on the fact that my views haven’t converged towards theirs, and they may be right that I haven’t updated enough on the fact that their views haven’t converged towards mine.
In practice we live in a world where many pairs of people disagree, and you have to disagree with a lot of people. I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.
There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).
The point I wanted to make was that AFAIK there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have, even if they trust each other to be as rational as humanly possible. The result by Scott Aaronson may be of theoretical interest (and maybe even of practical use by future AIs that can perform exact computations with the information in their minds), but seems to have no relevance to humans faced with real-world disagreements (as opposed to toy examples).
I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.
I don’t understand this. Can you expand?
there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have
Huh? There is currently no practical method for two humans to reliably reach agreement on some topic, full stop. Exchanging all evidence might help, but given that we are talking about humans and not straw Vulcans, it is still not a reliable method.
There are some papers that describe other ways to achieve agreement, such as iterative exchange of posterior probabilities. But in such methods, the agents aren’t just moving closer to each other’s beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.
Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent’s information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you’re making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.
I won’t try to comment on the formal argument (my understanding of that literature is mostly just what Robin Hanson has said about it), but intuitively, this seems wrong. It seems like two people trading probability estimates shouldn’t need to deduce exactly what the other has observed; they just need to make inferences along the lines of, “wow, she wasn’t swayed as much as I expected by me telling her my opinion, she must think she has some pretty good evidence.” At least that’s the inference you would make if you both knew you trusted each other’s rationality. More realistically, of course, the correct inference is usually “she wasn’t swayed by me telling her my opinion, she doesn’t just trust me to be rational.”
Consider what would have to happen for two rationalists who knowingly trust each other’s rationality to have a persistent disagreement. Because of conservation of expected evidence, Alice has to think her probability estimate would on average remain the same after hearing Bob’s evidence, and Bob must think the same about hearing Alice’s evidence. That seems to suggest they both must think they have better, more relevant evidence on the question at hand. And it might be perfectly reasonable for them to think that at first.
But after several rounds of sharing their probability estimates and seeing the other not budge, Alice will have to realize Bob thinks he’s better informed about the topic than she is. And Bob will have to realize the same about Alice. And if they both trust each other’s rationality, Alice will have to think, “I thought I was better informed than Bob about this, but it looks like Bob thinks he’s the one who’s better informed, so maybe I’m wrong about being better informed.” And Bob will have to have the parallel thought. Eventually, they should converge.
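To make the conservation-of-expected-evidence step concrete, a small check with made-up numbers (the probabilities below are arbitrary): whatever Alice expects her estimate to be after hearing Bob’s evidence must, averaged over what Bob might report, equal her current estimate.

```python
from fractions import Fraction as F

prior = F(7, 10)          # Alice's current P(X)
p_e = F(2, 5)             # her probability that Bob turns out to have evidence E
post_if_e = F(9, 10)      # her P(X | E)
# Her P(X | not-E) is then forced by the prior:
post_if_not_e = (prior - p_e * post_if_e) / (1 - p_e)

expected = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(post_if_not_e, expected)   # 17/30 7/10 -- the average lands back on the prior
assert expected == prior
```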
I won’t try to comment on the formal argument (my understanding of that literature is mostly just what Robin Hanson has said about it), but intuitively, this seems wrong.
Wei Dai’s description is correct; see here for an example where the final estimate is outside the range of the initial two. And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.
And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.
Wonder if a list of such things can be constructed. Algorithmic information theory is an example where Eliezer drew the wrong implications from the math and unfortunately much of LessWrong inherited that. Group selection (multi-level selection) might be another example, but less clear cut, as that requires computational modeling and not just interpretation of mathematics. I’m sure there are more and better examples.
In other words, when I say “what’s the evidence for that?”, it’s not that I don’t trust your rationality (although of course I don’t trust your rationality either), but I just can’t deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.
The argument can even be made more general than that: under many circumstances, it is cheaper for us to discuss the evidence we have than it is for us to try to deduce it from our respective probability estimates.
(although of course I don’t trust your rationality either)
I’m not sure this qualifier is necessary. Your argument is sufficient to establish your point (which I agree with) even if you do trust the other’s rationality.
Personally, I am entirely in favor of the “I don’t trust your rationality either” qualifier.
Is that because you think it’s necessary to Wei_Dai’s argument, or just because you would like people to be up front about what they think?
Yes. But it entirely depends on how the request for supportive references is phrased.
Good:
Interesting point. I’m not entirely clear how you arrived at that position. I’d like to look up some detail questions on that. Could you provide references I might look at?
Bad:
That argument makes no sense. What references do you have to support such a ridiculous claim?
The neutral phrasing leaves the interpretation of the attitude to the reader/addressee and is bound to be misinterpreted (as with people misinterpreting the tone or meaning of an email).
Saying
“Interesting point. I’m not entirely clear how you arrived at that position. I’d like to look up some detail questions on that. Could you provide references I might look at?”
sort of implies you’re updating towards the other’s position. If you not only disagree but are totally unswayed by hearing the other person’s opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).
But shouldn’t you always update toward the other’s position? And if the argument isn’t convincing, you can truthfully say that you updated only slightly.
But shouldn’t you always update toward the other’s position?
That’s not how Aumann’s theorem works. For example, if Alice mildly believes X and Bob strongly believes X, it may be that Alice has weak evidence for X, and Bob has much stronger independent evidence for X. Thus, after exchanging evidence they’ll both believe X even more strongly than Bob did initially.
Yup!
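For concreteness, a worked version of that Alice-and-Bob case with made-up numbers: starting from even prior odds, independent pieces of evidence multiply the odds, so the pooled estimate ends up above both of the initial ones.

```python
from fractions import Fraction as F

prior_odds = F(1, 1)   # even prior odds on X
alice_lr = F(2, 1)     # Alice's evidence: likelihood ratio 2:1 in favour of X
bob_lr = F(9, 1)       # Bob's stronger, independent evidence: 9:1

def prob(odds):
    return odds / (1 + odds)

print(prob(prior_odds * alice_lr))            # Alice alone:  2/3
print(prob(prior_odds * bob_lr))              # Bob alone:    9/10
print(prob(prior_odds * alice_lr * bob_lr))   # pooled:       18/19, higher than either
```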
One related use case is when everyone in a meeting prefers policy X to policy Y, although each is a little concerned about one possible problem. Going around the room and asking everyone how likely they think X is to succeed produces estimates of 80%, so, having achieved consensus, they adopt X.
But, if people had mentioned their particular reservations, they would have noticed they were all different, and that, once they’d been acknowledged, Y was preferred.
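A toy version of that meeting, with invented numbers: each person prices in only their own reservation, so everyone individually says 80%, but if the three reservations are distinct and roughly independent, X’s real chance of success is much lower than the apparent consensus suggests.

```python
estimates = [0.8, 0.8, 0.8]   # what going around the room produces

p_success = 1.0
for p in estimates:           # each 80% reflects surviving one distinct failure mode
    p_success *= p
print(round(p_success, 3))    # 0.512 -- once the reservations are pooled, Y may look better
```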
Even if they both equally strongly believe X, it makes sense for them to talk about whether they both used the same evidence or different evidence.
Obligatory link.
Of course.
I agree that
“Interesting point. I’m not entirely clear how you arrived at that position. I’d like to look up some detail questions on that. Could you provide references I might look at?”
doesn’t make clear that the other holds another position and that the reply may just address the validity of the evidence.
But even then shouldn’t you see it at least as weak evidence and thus believe X at least a bit more strongly?