Try a concrete example: Two dice are thrown, and each agent learns one die’s value. In addition, each learns whether the other die is in the range 1-3 vs 4-6. Now what can we say about the sum of the dice?
Suppose player 1 sees a 2 and learns that player 2's die is in 1-3. Then he also knows that player 2 knows that player 1's die is in 1-3 (since it shows 2), and this reasoning iterates at every level: it is common knowledge that the sum is in 2-6.
You could graph it by drawing a 6x6 grid and circling the information partition of player 1 in one color, and player 2 in another color. You will find that the meet is a partition of 4 elements, each a 3x3 grid in one of the corners.
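As a sanity check of that picture (my own sketch, not from the thread), here is one way to compute the meet directly: two states belong to the same element of the meet exactly when they are connected by a chain of cells from either player's partition, so a union-find over the 36 states recovers it.

```python
# Sketch: compute the meet (finest common coarsening) of the two players'
# information partitions over the 36 states (die1, die2), via union-find.

def meet(partitions, states):
    parent = {s: s for s in states}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    # States sharing a cell in ANY partition end up in one meet element.
    for part in partitions:
        for cell in part:
            cell = list(cell)
            for s in cell[1:]:
                parent[find(s)] = find(cell[0])

    groups = {}
    for s in states:
        groups.setdefault(find(s), set()).add(s)
    return list(groups.values())

states = [(a, b) for a in range(1, 7) for b in range(1, 7)]
half = lambda x: range(1, 4) if x <= 3 else range(4, 7)

# Player 1 knows die 1 exactly and which half die 2 is in; player 2 symmetric.
p1 = [{(a, b2) for b2 in half(b)} for a in range(1, 7) for b in (1, 4)]
p2 = [{(a2, b) for a2 in half(a)} for b in range(1, 7) for a in (1, 4)]

m = meet([p1, p2], states)
print(len(m))  # 4
```

This prints 4, matching the four 3x3 corner blocks described above.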
In general, anything which is common knowledge limits the meet: the element of the meet that the actual world lies in will not extend to include world-states which contradict what is common knowledge. If two people disagree about global warming, it is probably common knowledge what the current CO2 level is and what the historical record of that level is. They agree on this data, each knows that the other agrees, and so on.
The thrust of the theorem though is not what is common knowledge before, but what is common knowledge after. The claim is that it cannot be common knowledge that the two parties disagree.
What I don't like about the example you provide is that it assumes the structure of what player 1 and player 2 know is itself common knowledge. For instance, if player 1 doesn't know whether player 2 knows whether die 1 is in 1-3, then it may not be common knowledge at all that the sum is in 2-6, even if player 1 and player 2 are given the info you said they're given.
This is what I was confused about in the grandparent comment: do we really need I and J to be common knowledge? It seems so to me. But that seems to be another assumption limiting the applicability of the result.
Not sure… what happens when the ranges are different sizes, or when the type of information learnable by each player differs in other non-symmetric ways?
Anyways, thanks, upon another reading of your comment, I think I’m starting to get it a bit.
Different size ranges in Hal’s example? Nothing in particular happens. It’s ok for different random variables to have different ranges.
Otoh, if the players get different ranges about a single random variable, then they could have problems.
Suppose there is one d6. Player A learns whether it is in 1-2, 3-4, or 5-6. Player B learns whether it is in 1-3 or 4-6. And suppose the actual value is 1. Then A knows it’s 1-2. So A knows B knows it’s 1-3. But A reasons that B reasons that if it were 3 then A would know it’s 3-4, so A knows B knows A knows it’s 1-4. But A reasons that B reasons that A reasons that if it were 4 then B would know it’s 4-6, so A knows B knows A knows B knows it’s 1-6. So there is no common knowledge, i.e. I∧J=Ω. (Omitting the argument w, since if this is true then it’s true for all w.)
And if it were a d12, with ranges still size 2 and 3, then the partitions line up at one point, so the meet stops at {1-6, 7-12}.
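Both claims can be checked mechanically (my own sketch, not from the thread): build the size-2 and size-3 block partitions, then merge any values that share a cell in either partition; the resulting components are the meet.

```python
# Sketch: the meet of A's size-2 blocks and B's size-3 blocks is all of
# Omega for a d6, but splits into {1-6} and {7-12} for a d12, because the
# block boundaries only line up at 6/7.

def blocks(n, size):
    """Partition {1,...,n} into consecutive blocks of the given size."""
    return [set(range(i, i + size)) for i in range(1, n + 1, size)]

def meet(n, *partitions):
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Values sharing a cell in ANY partition land in one meet element.
    for part in partitions:
        for cell in part:
            cell = sorted(cell)
            for x in cell[1:]:
                parent[find(x)] = find(cell[0])

    out = {}
    for x in range(1, n + 1):
        out.setdefault(find(x), set()).add(x)
    return sorted(out.values(), key=min)

print(meet(6, blocks(6, 2), blocks(6, 3)))    # [{1, 2, 3, 4, 5, 6}]
print(meet(12, blocks(12, 2), blocks(12, 3))) # [{1,...,6}, {7,...,12}]
```

For the d6 the meet is the trivial partition {Ω}, i.e. no nontrivial common knowledge; for the d12 it stops at {1-6, 7-12}, as claimed.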