Exercise idea: provide a list of questions that have objectively right answers, such as how many bones there are in the human body. Have each person guess at the answers on his own. Next, put everyone in a small group, have the groups discuss the questions, and then have everyone individually redo their estimates. Finally, combine each small group with one or two others and repeat what was done before.
Tell everyone at the start of the exercise that their goal is not just to get the right answers but also to identify the causes of persistent disagreements within their decision group.
I did this once in my game theory class and the students loved it, although I didn’t measure how much they learned from the exercise.
“Rational people can’t agree to disagree” is an oversimplification. Rational people can perfectly well reach a conclusion of the form: “Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn’t be much more likely to leave us both right than to leave us both wrong. We choose, instead, to leave the matter unresolved until either it matters more or we see better prospects of resolving it.”
Imperfectly rational people who are aware of their imperfect rationality (note: this is in fact the nearest any of us actually come to being rational people) might also reasonably reach a conclusion of this form: “Perhaps clear enough thinking on both sides would suffice to let us resolve this. However, it’s apparent that at least one of us is currently sufficiently irrational about it that trying to reach agreement poses a real danger of spoiling the good relations we currently enjoy, and while clearly that irrationality is a bad thing it doesn’t seem likely that trying to resolve our current disagreement now is the best way to address it, so let’s leave it for now.”
I suspect (with no actual evidence) that when two reasonably-rational people say they’re agreeing to disagree, what they mean is often approximately one of the above or a combination thereof, and that they’re often wise to “agree to disagree”. The fact that there are theorems saying that two perfect rationalists who care about nothing more than getting the right answer to the question they’re currently disputing won’t “agree to disagree” seems to me to have little bearing on this.
Eliezer, if you’re reading this: You may remember that a while back on OB you and Robin Hanson discussed the prospects of rapidly improving artificial intelligence in the nearish future. By no means did you resolve your differences in that discussion. Would it be fair to characterize the way it ended as “agreeing to disagree”? From the outside, it sure looks like that’s what it amounted to, whatever you may or may not have said to one another about it. Perhaps you and/or Robin might say “Yeah, but the other guy isn’t really rational about this”. Could be, but if the level of joint rationality required for “can’t agree to disagree” is higher than that of {Eliezer,Robin} then it’s not clear how widely applicable the principle “rational people can’t agree to disagree” really is. (Note for the avoidance of doubt: The foregoing is not intended to imply that Eliezer and Robin are equally rational; I do not intend to make any further comment on my opinions, if any, on that matter.)
Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn’t be much more likely to leave us both right than to leave us both wrong.
You say that as if resolving a disagreement means agreeing to both choose one side or the other. The most common result of cheaply resolving a disagreement is not “both right” or “both wrong”, but “both −3 decibels.”
No; in what I wrote “resolving a disagreement” means “agreeing to hold the same position, or something very close to it”.
Deciding “cheaply” that you’ll both set p=1/2 (note: I assume that’s what you mean by −3dB here, because the other interpretations I can think of don’t amount to “agreeing to disagree”) is no more rational than (even the least rational version of) “agreeing to disagree”.
If the evidence is very evenly balanced then of course you might end up doing that not-so-cheaply, but in such cases what more often happens is that you look at lots of evidence and see—or think you see—a gradual accumulation favouring one side.
Of course you could base your position purely on the number of people on each side of the issue, and then you might be able to reach p=1/2 (or something near it) cheaply and not entirely unprincipledly. Unfortunately, that procedure also tells you that Pr(Christianity) is somewhere around 1/4, a conclusion that I think most people here agree with me in regarding as silly. You can try to fix that by weighting people’s opinions according to how well they’re informed, how clever they are, how rational they are, etc. -- but then you once again have a lengthy, difficult and subjective task that you might reasonably worry will end up giving you a confident wrong answer.
I should perhaps clarify that what I mean by “wouldn’t be much more likely to leave us both right than to leave us both wrong” is: for each of the two people involved, who (at the outset) have quite different opinions, Pr(reach agreement on wrong answer | reach agreement) is quite high.
And, once again for the avoidance of doubt, I am not taking “reach agreement” to mean “reach agreement that one definite position or another is almost certainly right”. I just think that empirically, in practice, when people reach agreement with one another they more often do that than agree that Pr(each) ~= 1/2: I disagree with you about “the most common result” unless “cheaply” is taken in a sense that makes it irrelevant when discussing what rational people should do.
This is probably a myth. The Aumann agreement theorem does not apply to real life. Here are three reasons why:
1. It requires that the two rational people already share a partition function. [EDIT: No! Major mistake on my part. It requires that the two people have common knowledge of each other’s partition functions.] The range of the partition function is the set of sets of states of the world that an agent can’t distinguish between. That implies that, for all possible sets of observations, each agent knows what the other agent will infer. You could say it requires that the agents query each other endlessly about their beliefs, until they each know everything that the other agent believes.
2. Interpreting Aumann’s theorem to mean what Aumann said it means requires saying that “The meet at w of the partitions of X and Y is a subset of event E” means the same as the English phrase “X knows that Y knows event E” means. That is wrong. To expand this language a little bit: Aumann claims: To say that agent 1 knows that agent 2 knows E means that E includes all P2 in N2 that intersect P1. I claim: To say that agent 1 knows that agent 2 knows E means that E includes P1(w), and that E includes P2(w). Agent 1 can conclude that E includes P1 union P2, for some P2 that intersects P1. Not for all P2 that intersect P1. That is a fine semantic error buried deep within the English interpretation, but it makes the entire theorem worthless.
3. Even if you still believe that the Aumann agreement theorem applies in the way James states above, it relies on all agents being perfectly honest with each other, and (probably, though I’d have to check this) on having mutual knowledge that they are being honest with each other.
Here is a minor point:
“for all possible sets of observations, each agent knows what the other agent will infer.”
This is true if both agents are rational (and this is common knowledge) and share a common prior (and this is common knowledge). You can calculate what they would infer using Bayesian math.
If you are unsure about someone’s ability to observe data about the real world, then that’s another fact about the real world that you can have beliefs about. You shouldn’t have to talk endlessly about everything.
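To make that concrete, here is a minimal sketch (a toy example of my own, with made-up states, prior, and partitions) of how an information partition plus a common prior pins down an agent’s posterior, and hence lets the other agent tabulate what would be inferred from any cell:

```python
from fractions import Fraction

# Hypothetical toy state space and a common prior over it.
states = ["w1", "w2", "w3", "w4"]
prior = {s: Fraction(1, 4) for s in states}

# Each agent's information partition: cells of states that agent cannot tell apart.
partition_1 = [{"w1", "w2"}, {"w3", "w4"}]
partition_2 = [{"w1", "w3"}, {"w2", "w4"}]

def cell(partition, state):
    """The partition cell containing `state`: all the agent learns when `state` obtains."""
    return next(c for c in partition if state in c)

def posterior(partition, event, true_state):
    """The agent's posterior for `event` after conditioning on its cell."""
    c = cell(partition, true_state)
    return sum(prior[s] for s in c & event) / sum(prior[s] for s in c)

event_A = {"w1"}
# Agent 1's own posterior when the true state is w1:
print(posterior(partition_1, event_A, "w1"))  # 1/2

# Because partition_2 and the prior are assumed common knowledge, agent 1 can also
# work out what agent 2 would infer from each of agent 2's possible cells:
print({frozenset(c): sum(prior[s] for s in c & event_A) / sum(prior[s] for s in c)
       for c in partition_2})  # {frozenset({'w1','w3'}): 1/2, frozenset({'w2','w4'}): 0}
```

Under common knowledge of rationality, the prior, and both partitions, the table in the last line is exactly what agent 1 can work out about agent 2 without any further conversation.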
Part 1 seems to have little to do with how I remember the theorem. Here is the abstract of Aumann’s paper.
Two people, 1 and 2, are said to have common knowledge of an event E if both know it, 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on. THEOREM. If two people have the same priors, and their posteriors for an event A are common knowledge, then these posteriors are equal.
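For a feel of what the theorem delivers, here is a small sketch of the back-and-forth dynamics, in the spirit of Geanakoplos and Polemarchakis’s follow-up result rather than Aumann’s own proof (the state space, prior, partitions and event below are invented for illustration): two agents with a common prior publicly announce their posteriors, refine their information by what each announcement reveals, and end up with equal posteriors.

```python
from fractions import Fraction

def cell(partition, state):
    return next(c for c in partition if state in c)

def prob(prior, event):
    return sum(prior[s] for s in event)

def posterior(prior, info, event):
    return prob(prior, info & event) / prob(prior, info)

def refine(partition, announcement):
    """Intersect each cell with the announcement's level sets (public information)."""
    return [c & a for c in partition for a in announcement if c & a]

def agree(prior, part1, part2, event, true_state, max_rounds=20):
    """Alternate public announcements of posteriors until the two posteriors coincide."""
    parts = [part1, part2]
    for _ in range(max_rounds):
        posts = [posterior(prior, cell(p, true_state), event) for p in parts]
        if posts[0] == posts[1]:
            return posts
        for i in (0, 1):
            # Agent i's announcement reveals which cells would have produced the same
            # number: the level sets of agent i's posterior, treated as public news.
            levels = {}
            for c in parts[i]:
                levels.setdefault(posterior(prior, c, event), set()).update(c)
            parts = [refine(p, list(levels.values())) for p in parts]
    return posts

# Invented example: uniform prior on four states.
prior = {s: Fraction(1, 4) for s in "1234"}
part1 = [{"1", "2"}, {"3", "4"}]
part2 = [{"1", "2", "3"}, {"4"}]
event = {"1", "4"}
# Initially agent 1 assigns 1/2 and agent 2 assigns 1/3; after the exchange both say 1/2.
print(agree(prior, part1, part2, event, true_state="1"))
```

The assumptions doing the work are the ones named in the abstract: a common prior, and announcements that genuinely become common knowledge.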
Your 2 implies the following claim:
Agent 1 knows that agent 2 knows E if and only if agent 2 knows that agent 1 knows E.
This claim is obviously false.
Here is another corollary of your definition:
Suppose P1(w)={w,v}, P2(w)={w,u}. Then at w, I know E={w,v,u}, but at v, I do not know E! So I can distinguish between w and v by checking my knowledge, even though I cannot distinguish between w and v!
Part 3 is correct. Indeed, common knowledge of honesty is a requirement.
Agent 1 knows that agent 2 knows E if and only if agent 2 knows that agent 1 knows E.
How does it imply that? (It well might, within the context of the agreement theorem. My recollection is that you assume from the start that A1 and A2 have common knowledge of E.)
This claim is obviously false.
Why? If knowledge means “justified true belief”, then for agent 1 to know that agent 2 knows E, agent 1 must also know E, and vice-versa. This doesn’t prove the claim that you say I am making, but goes most of the way towards proving it.
Suppose P1(w)={w,v}, P2(w)={w,u}. Then at w, I know E={w,v,u}, but at v, I do not know E! So I can distinguish between w and v by checking my knowledge, even though I cannot distinguish between w and v!
This is true, except that P1 and P2 should range over events, not over world states. This is a step that the theorem relies on. Are you claiming that this is false?
Terms: P1 means what Aumann calls P-superscript1; N1 means what Aumann calls cursive-P superscript 1. P1(E) = {w,v} means that, after observing event E, A1 knows that the world is in one of the states {w, v}. N1 is the set that describes the range of P1. E is an event, meaning a set of possible world states. Aumann doesn’t define what an ‘event’ is, other than implicitly in how he uses the variable E, so I hope I’m getting that right.
I’m constructing this partly from memory—sorry, this is a complex proof, and Aumann’s paper is skimpy on definitions, several of which (like “meet” and “join”) are left undefined and hard to find defined anywhere else even with Google. I really can’t do this justice without more free time than I have in the next several months.
What I think Aumann is saying is that, if A1 knows E, and knows that A2 knows E, then for every state x in P1(E), for every event D such that x is in P2(D), P2(D) is a subset of E. Saying this allows Aumann to go on and show that A1 and A2 can iteratively rule out possibilities until they converge on believing the same thing.
This requires knowing more than what we mean when we say “A1 knows that A2 knows E”. When we say that, we mean that A1 knows the world is in one of the states in E, and knows that A2 knows the world is in one of the states in E. But it is possible that there is some state x that the world is not in, but that is a member of E and of P1(E), though not of P2(E).
My recollection is that this is the problem: If you only consider conditions involving 3 possible world states, like the w, u, and v in the above example, then you can show that these things are equivalent. The agents can always use their mutual knowledge to iteratively eliminate possible states until they agree. For instance, if the situation is that P1({w,v,u})={w,v}, P2({w,v,u})={w,u}, then P1 and P2 can use their common knowledge to conclude w, and thus agree. But if you consider conditions where P1, P2, and E contain more than 3 different states between them, you can find situations that have multiple possible solutions, which the agents cannot choose between; and so cannot converge.
The definition you gave was symmetric. If I misread it, my apologies.
Why? If knowledge means “justified true belief”, then for agent 1 to know that agent 2 knows E, agent 1 must also know E, and vice-versa. This doesn’t prove the claim that you say I am making, but goes most of the way towards proving it.
True, but it’s impossible to go the rest of the way. If you see a dog and I see both you and the dog through a one-way mirror, then I know that you know that there’s a dog there but you don’t know that I know that there is a dog.
I am having trouble matching up your notation with the notation I’m used to.
There are two operations, which I am used to calling P and K. They also have a number attached to them.
P takes sets to bigger sets or else to themselves. P1({w}) is what A1 thinks is possible when w is true. P1(S) for any set S is what A1 might think is possible given that something in S is true.
K takes sets to smaller sets or else to themselves. K1(S) is the set of possible states of the world where A1 knows that S is true.
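If it helps, here is how I would render those two operators in code (a notation-checking sketch only, with a made-up partition): P maps a set to a superset of itself, K to a subset.

```python
# Hypothetical partition for a single agent over the states {w, v, u, x, y}.
partition_1 = [{"w", "v"}, {"u"}, {"x", "y"}]

def P(partition, S):
    """All states the agent might consider possible, given that some state in S is true."""
    return set().union(*(c for c in partition if c & S))

def K(partition, S):
    """All states at which the agent knows S: the union of cells wholly contained in S."""
    return set().union(*(c for c in partition if c <= S))

print(P(partition_1, {"w"}))                 # {'w', 'v'}: at least as big as {'w'}
print(K(partition_1, {"w", "v", "u", "x"}))  # {'w', 'v', 'u'}: no bigger than the input
```

On this reading, “A1 knows S at w” just means that w is an element of K(partition_1, S), which matches the definition above.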
What I think Aumann is saying is that, if A1 knows E, and knows that A2 knows E, then for every state x in P1(E), for every event D such that x is in P2(D), P2(D) is a subset of E. Saying this allows Aumann to go on and show that A1 and A2 can iteratively rule out possibilities until they converge on believing the same thing.
That seems to translate to the statement:
Whenever E, A1 knows that A2 knows that E.
which is stronger than just:
In the current state w, A1 knows that A2 knows that E.
Unless your P is my K in which case it translates to “E is the whole space” because all x are in K(the whole space).
This requires knowing more than what we mean when we say “A1 knows that A2 knows E”.
Aumann’s theorem is based on common knowledge, which is the very strong statement that A1 knows E, and A2 knows that, and A1 knows that, and so on.
However it is easy to see where this can come from. For instance, if I say “I think that the sky is blue”, then it’s essentially common knowledge that I said “I think that the sky is blue”.
Is that the source of your confusion?
For instance, if the situation is that P1({w,v,u})={w,v}, P2({w,v,u})={w,u}, then P1 and P2 can use their common knowledge to conclude w, and thus agree.
You have P1 and P2 taking big things to small things which means that they are K.
But if you consider conditions where P1, P2, and E contain more than 3 different states between them, you can find situations that have multiple possible solutions, which the agents cannot choose between; and so cannot converge.
However they will be able to agree that it is one of those states. Moreover neither of them will have any greater information than that it’s one of those states. Argument occurs when I believe “A, not B” and you believe “B, not A”, not if we both believe “A or B”.
2. Interpreting Aumann’s theorem to mean what Aumann said it means… That is a fine semantic error buried deep within the English interpretation, but it makes the entire theorem worthless.
That was way too densely packed for my sleep-deprived brain to parse. Would you be willing to write a post (possibly a Discussion post) spelling this out less succinctly? It seems important to get this idea out into the LW-sphere given how much cred the agreement theorem has around here.
It’s not just densely packed—it makes no sense unless you read the paper first, and read some other things necessary to understand that paper. I’d like to write a post—but not right now.
I know enough game theory to prove versions of Aumann’s theorem, but I have not read the paper, and your point in (2) makes no sense, period.
The correct game-theoretic statement of “1 knows that 2 knows that E” is that E includes P2(P1(w)).
The meet of X and Y is about common knowledge. Saying that E is common knowledge is stronger than saying that 1 knows that 2 knows it. It also implies, for instance, that 2 knows that 1 knows that 2 knows it.
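To pin down “the meet” concretely (again a toy sketch of my own, not code from the paper): it is the finest partition that both agents’ partitions refine, and E is common knowledge at w exactly when the meet’s cell containing w sits inside E.

```python
def meet(part1, part2):
    """Finest common coarsening of two partitions: keep merging any two cells,
    drawn from either partition, that overlap."""
    cells = [set(c) for c in part1 + part2]
    merged = True
    while merged:
        merged = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                if cells[i] & cells[j]:
                    cells[i] |= cells.pop(j)
                    merged = True
                    break
            if merged:
                break
    return cells

def common_knowledge(part1, part2, event, state):
    """Aumann's characterization: E is common knowledge at `state` iff the
    meet's cell containing `state` is a subset of E."""
    m = next(c for c in meet(part1, part2) if state in c)
    return m <= event

# Invented example: states 1-4 get chained together through overlapping cells; 5 stays alone.
part1 = [{1, 2}, {3, 4}, {5}]
part2 = [{1}, {2, 3}, {4}, {5}]
print(meet(part1, part2))                               # [{1, 2, 3, 4}, {5}]
print(common_knowledge(part1, part2, {1, 2, 3, 4}, 1))  # True
print(common_knowledge(part1, part2, {1, 2, 3}, 1))     # False
```

The meet’s cell containing w collects everything reachable from w by chains of “1 considers possible that 2 considers possible that…”, which is why common knowledge is a much stronger condition than a single “1 knows that 2 knows”.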
I wouldn’t call that a skill so much as a frame of mind going into a discussion. Also, they can if they start with different arbitrary priors to which neither one can assign objectivity.
Skill: learning that rational people can’t agree to disagree.
Well, they can—if they have different goals.