One question on your objections: how would you characterize the state of two human rationalist wannabes who have failed to reach agreement? Would you say that their disagreement is common knowledge, or instead are they uncertain if they have a disagreement?
It seems to me that people usually find themselves rather certain that they are in disagreement and that this is common knowledge. Aumann's theorem seems to forbid this even if we assume that the calculations are intractable.
The rational way to characterize the situation, if in fact intractability is a practical objection, would be that each party says he is unsure of what his opinion should be, because the information is too complex for him to make a decision. If circumstances force him to adopt a belief to act on, maybe it is rational for the two to choose different actions, but they should admit that they do not really have good grounds to assume that their choice is better than the other person’s. Hence they really are not certain that they are in disagreement, in accordance with the theorem. Again this is in striking contrast to actual human behavior even among wannabes.
I would say that one possibility is that their disagreement is common knowledge, but they don't know how to reach agreement. From what I've learned so far, disagreements between rationalist wannabes can arise from three sources:
different priors
different computational shortcuts/approximations/errors
incomplete exchange of information
Even if the two rationalist wannabes agree that in principle they should have the same priors, the same computations, and full exchange of information, as of today they do not have general methods for solving any of these problems. They can only try to work out their differences on a case-by-case basis, with a high likelihood that they'll have to give up at some point before they reach agreement.
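To make the first of these sources concrete, here is a toy illustration (the numbers are arbitrary and not from this thread): two agents who see the same evidence and apply Bayes' rule correctly, but who start from different priors, end up with different posteriors even after a complete exchange of information.

```
# Toy illustration (arbitrary numbers): same evidence, correct Bayesian
# updating, different priors -> different posteriors.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule, for a binary hypothesis H and evidence E."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Both agents agree that P(E|H) = 0.8 and P(E|~H) = 0.3; only the priors differ.
print(posterior(0.5, 0.8, 0.3))  # prior 0.5 -> posterior ~0.73
print(posterior(0.1, 0.8, 0.3))  # prior 0.1 -> posterior ~0.23
```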
Your suggestion of what rationalist wannabes should do intuitively makes a lot of sense to me. But perhaps one reason people don't do it is that they don't know it is what they should do? I don't recall a post here or on OB that argued for this position, for example.
You mean “common knowledge” in the technical sense described in the post?
If so, your questions do not appear to make sense.
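For readers who skipped the post, here is a compact restatement of that technical sense (the notation is chosen for brevity and is not necessarily the post's): with a finite state space $\Omega$, a common prior $P$, and information partitions $\Pi_1$ and $\Pi_2$, let $\Pi_1 \wedge \Pi_2$ be their meet, the finest partition that both $\Pi_1$ and $\Pi_2$ refine. An event $E$ is common knowledge at the true state $\omega$ iff $(\Pi_1 \wedge \Pi_2)(\omega) \subseteq E$, and Aumann's theorem says that if the two agents' posteriors for an event $A$ are common knowledge at $\omega$ in this sense, they are equal. Determining the meet is itself a concrete computation; here is a minimal sketch on a toy state space (the partitions and function name below are illustrative only):

```
# Minimal sketch: the meet's cells are the connected components of the
# "shares a cell in either partition" relation, so repeatedly merge
# overlapping cells until no two cells intersect.
def meet(partition1, partition2):
    """Finest common coarsening of two partitions, each given as a list of sets."""
    cells = [set(c) for c in partition1 + partition2]
    merged = True
    while merged:
        merged = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                if cells[i] & cells[j]:  # overlapping cells must end up in one meet-cell
                    cells[i] |= cells.pop(j)
                    merged = True
                    break
            if merged:
                break
    return cells

# Agent 1 distinguishes {1,2} from {3,4}; agent 2 distinguishes {1}, {2,3}, {4}.
print(meet([{1, 2}, {3, 4}], [{1}, {2, 3}, {4}]))  # -> [{1, 2, 3, 4}]: the trivial partition
```

On a four-state toy space this is instant; real disagreements involve state spaces far too large for anything like this to be feasible.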
Why not? They both know they disagree, they both know they both know they disagree, etc. Perhaps Agent 1 doesn't know Agent 2's partitioning, or vice versa. Or perhaps their partitionings are common knowledge, but they lack the computational ability to actually determine the meet, for example, no?
Wei was hypothesising disagreement due to an incomplete exchange of information. In which case, the parties both know that they disagree, but don’t have the time/energy/resources to sort each other’s opinions out. Then Aumann’s idea doesn’t really apply.
Aaah, okay. Though presumably at least one would know the probabilities that both assigned (and said "I disagree"...). That is, it would generally take a rather contrived situation for them to know they disagree while neither knows anything about the other's probability other than that it's different.
(What happens if they successfully exchange probabilities, have unbounded computing power, and share common-knowledge priors, but don't know each other's partitioning? Or would the latter automatically be computed from the rest?)
Just one round of comparing probabilities is not normally enough for the parties involved to reach agreement, though.
Well, if they do know each other's partitions and are computationally unbounded, then they would reach agreement after one step, wouldn't they? (Or did I misunderstand the theorem?)
Or do you mean that if they don't know each other's partitions, iterative exchange of updated probabilities effectively transmits the needed information?
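On that last question, here is a sketch of how iterated, honest announcement of posteriors can substitute for knowing each other's partitions. It follows the dialogue process studied by Geanakoplos and Polemarchakis in "We Can't Disagree Forever"; the code and toy numbers are illustrative only, not anything from the post or this thread. Each announcement lets the other agent discard every state at which that announcement would not have been made, so both partitions get refined round by round until the stated posteriors coincide.

```
from fractions import Fraction

def cond_prob(prior, info, event):
    """P(event | info) under the common prior; info is a nonempty set of states."""
    return sum(prior[w] for w in info & event) / sum(prior[w] for w in info)

def cell(partition, w):
    """The cell of the partition containing state w."""
    return next(c for c in partition if w in c)

def refine(partition, announcement):
    """Split each cell according to the posterior the other agent would announce there."""
    new = []
    for c in partition:
        by_value = {}
        for w in c:
            by_value.setdefault(announcement[w], set()).add(w)
        new.extend(by_value.values())
    return new

def exchange(prior, pi1, pi2, event, true_state, max_rounds=20):
    for t in range(max_rounds):
        a1 = {w: cond_prob(prior, cell(pi1, w), event) for w in prior}
        a2 = {w: cond_prob(prior, cell(pi2, w), event) for w in prior}
        q1, q2 = a1[true_state], a2[true_state]
        print(f"round {t}: agent 1 says {q1}, agent 2 says {q2}")
        if q1 == q2:
            return q1
        # Hearing the other's announced posterior rules out every state at which
        # that announcement would have been different, refining each partition.
        pi1, pi2 = refine(pi1, a2), refine(pi2, a1)

# Toy run: uniform prior on {1,2,3,4}, event A = {1,4}, true state 1.
prior = {w: Fraction(1, 4) for w in (1, 2, 3, 4)}
exchange(prior, [{1, 2}, {3, 4}], [{1, 2, 3}, {4}], event={1, 4}, true_state=1)
# round 0: agent 1 says 1/2, agent 2 says 1/3
# round 1: agent 1 says 1/2, agent 2 says 1/3   (same numbers, yet information flowed)
# round 2: agent 1 says 1/2, agent 2 says 1/2
```

Note that the second round repeats the first numerically and yet still transmits information, which is one way of seeing why a single round of comparing probabilities is not enough in general.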