I said, “So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong.”
He said, “Well, um, I guess we may have to agree to disagree on this.”
I said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”
One could discuss whether Eliezer was right to appeal to AAT in a conversation like this, given that neither he nor his conversational partner is a perfect Bayesian. I don’t think it’s entirely unfair to say that humans are flawed to the extent that we fail to live up to the ideal Bayesian standard (even if such a standard is unobtainable), so it’s not clear to me why it would be misleading to say that if two people have common knowledge of a disagreement, at least one of them (or both) is “doing something wrong”.
Nonetheless, I agree that it would be an improvement to at least be more clear about what Aumann’s Agreement Theorem actually says. So I will amend that part of the text.
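For reference, here is an informal statement of what the theorem actually says, as I understand it (this is my paraphrase, not wording from the article):

```latex
% Aumann's Agreement Theorem (Aumann 1976), informal statement:
% If two agents share a common prior $P$, and their posterior probabilities
% $q_i = P(A \mid \mathcal{I}_i)$ for an event $A$, given their respective
% private information $\mathcal{I}_i$, are common knowledge between them, then
\[
  q_1 = q_2 .
\]
```

Note that the conclusion requires common knowledge of the posteriors themselves, not merely that the two agents have talked; that gap is part of what the discussion below is about.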
Yeah; it’s not open/shut. I guess I’d say that in the current phrasing, the clause “but Aumann’s Agreement Theorem shows that if two people disagree, at least one is doing something wrong” suggests implications without actually saying anything interesting—at least one of them is doing something wrong by this standard whether or not they agree. I think adding some more context to make people less suspicious they’re getting Eulered (http://slatestarcodex.com/2014/08/10/getting-eulered/) would be good.
I think this flaw is basically in the original article as well, though, so it’s also a struggle between accurately representing the source and adding editorial correction.
Nitpicks aside, want to say again that this is really great; thank you!
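As an aside, the mechanism by which common knowledge of posteriors forces agreement can be made concrete with a toy simulation of the back-and-forth “dialogue” process (in the style of Geanakoplos and Polemarchakis). This sketch is not from the article; the state space, partitions, and event below are invented purely for illustration:

```python
from fractions import Fraction

def posterior(cell, event):
    """Posterior probability of `event` under a uniform prior, given that the
    true state lies in `cell`."""
    return Fraction(len(cell & event), len(cell))

def cell_of(partition, state):
    """The block of `partition` containing `state` (the agent's private info)."""
    return next(c for c in partition if state in c)

def dialogue(partitions, event, true_state, rounds=10):
    """Agents take turns publicly announcing their posterior for `event`.
    Each announcement rules out every state at which the announcer would
    have said something different, shrinking the public information set.
    Returns the history of announced posteriors per round."""
    public = set.union(*(set(c) for p in partitions for c in p))
    history = []
    for _ in range(rounds):
        announced = []
        for p in partitions:
            q = posterior(cell_of(p, true_state) & public, event)
            announced.append(q)
            public = {s for s in public
                      if posterior(cell_of(p, s) & public, event) == q}
        history.append(tuple(announced))
        if len(history) >= 2 and history[-1] == history[-2]:
            break  # announcements are stable: posteriors are common knowledge
    return history

# Invented example: four equally likely states; agent 1 privately learns
# whether the state is in {1,2} or {3,4}; agent 2 whether it is in {1,3}
# or {2,4}. The event of interest is {1}, and the true state is 1.
p1 = [frozenset({1, 2}), frozenset({3, 4})]
p2 = [frozenset({1, 3}), frozenset({2, 4})]
hist = dialogue([p1, p2], event={1}, true_state=1)
print(hist[0])   # initial announcements: the two posteriors differ
print(hist[-1])  # after the exchange, both posteriors coincide
```

The point of the toy: the agents start with different posteriors, but each announcement leaks information about the announcer’s private partition cell, and once the announcements stop changing, the posteriors must be equal—which is the substance of the theorem, rather than anything about disagreement per se being irrational.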
Thanks for the feedback.