This is completely awesome, thanks for doing this. This is something I can imagine actually sending to semi-interested friends.
Direct messaging seems to be wonky at the moment, so I’ll put a suggested correction here: for 2.4, Aumann’s Agreement Theorem does not show that if two people disagree, at least one of them is doing something wrong. From Wikipedia: “if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.” This could fail at multiple steps, off the top of my head:
1. The humans might not be (mathematically pure) Bayesian rationalists (and in fact they’re not).
2. The humans might not have common priors (even if they satisfied 1).
3. The humans might not have common knowledge of their posterior probabilities; a human saying words is a signal, not direct knowledge, so them telling you their posterior probabilities may not do the trick (and they might not know them).
You could say failing to satisfy 1-3 means that at least one of them is “doing something wrong”, but I think it’s a misleading stretch—failing to be normatively matched up to an arbitrary unobtainable mathematical structure is not what we usually call wrong. It stuck out to me as something that would put off readers with a bullshit detector, so I think it’d be worth fixing.
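To make the theorem’s hypotheses concrete, here is a minimal sketch (not from the original article) of the agreement dynamic in Python, in the spirit of Aumann’s result and Geanakoplos and Polemarchakis’s “We Can’t Disagree Forever”: two agents share a uniform common prior over an invented eight-state toy world, each observes a private partition cell, and they take turns publicly announcing their posteriors for an event, with each announcement shrinking what is commonly known. The state space, partitions, event, and function names are all made up for illustration.

```python
# Toy sketch of the Aumann / Geanakoplos-Polemarchakis dialogue under a
# uniform common prior.  Everything here (states, partitions, event A) is
# an invented example, not anything from the original article.
from fractions import Fraction

def cell_of(partition, state):
    """Return the block of `partition` containing `state`."""
    return next(block for block in partition if state in block)

def posterior(event, info):
    """P(event | info) under a uniform common prior."""
    return Fraction(len(event & info), len(info))

def dialogue(omega, part1, part2, event, true_state, rounds=3):
    public = set(omega)                      # what is commonly known so far
    for r in range(rounds):
        for i, part in enumerate([part1, part2], start=1):
            info = cell_of(part, true_state) & public
            q = posterior(event, info)
            print(f"round {r + 1}, agent {i} announces P(A) = {q}")
            # Everyone learns in which publicly possible states agent i
            # would have made exactly this announcement.
            public = {s for s in public
                      if posterior(event, cell_of(part, s) & public) == q}
    return public

if __name__ == "__main__":
    omega = set(range(1, 9))                  # states 1..8, uniform prior
    part1 = [{1, 2, 3, 4}, {5, 6, 7, 8}]      # agent 1's private signal
    part2 = [{1, 2, 5, 6}, {3, 4, 7, 8}]      # agent 2's private signal
    event = {1, 2, 3}                         # the event A they argue about
    dialogue(omega, part1, part2, event, true_state=1)
```

With the toy numbers above, the announcements start at 3/4 versus 1 and end up equal, which is the point of the correction: equality is only guaranteed once something like conditions 1–3 actually holds and the posteriors become common knowledge.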
Thanks for the feedback.
Here’s the quote from the original article:
I said, “So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong.”
He said, “Well, um, I guess we may have to agree to disagree on this.”
I said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”
One could discuss whether Eliezer was right to appeal to AAT in a conversation like this, given that neither he nor his conversational partner is a perfect Bayesian. I don’t think it’s entirely unfair to say that humans are flawed to the extent that we fail to live up to the ideal Bayesian standard (even if such a standard is unobtainable), so it’s not clear to me why it would be misleading to say that if two people have common knowledge of a disagreement, at least one of them (or both) is “doing something wrong”.
Nonetheless, I agree that it would be an improvement to at least be more clear about what Aumann’s Agreement Theorem actually says. So I will amend that part of the text.
Yeah; it’s not open/shut. I guess I’d say that in the current phrasing, “but Aumann’s Agreement Theorem shows that if two people disagree, at least one is doing something wrong” suggests implications without actually saying anything interesting: by this standard, at least one of them is doing something wrong whether or not they agree. I think adding some more context to make people less suspicious they’re getting Eulered (http://slatestarcodex.com/2014/08/10/getting-eulered/) would be good.
I think this flaw is basically in the original article as well, though, so it’s also a struggle between accurately representing the source and adding editorial correction.
Nitpicks aside, want to say again that this is really great; thank you!