Let’s say your prior is P and mine is Q. I take your argument to be that P always prefers bets made according to P (bets made according to Q are at best just as good). But this is only true if P thinks P knows better than Q.
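To make that first claim concrete, here’s a minimal numerical sketch (the log-scoring framing and the numbers are my own, not part of the original argument). If “preferring bets” is cashed out as expected log score, then Gibbs’ inequality, E_P[log P] ≥ E_P[log Q], says P always scores its own distribution at least as well as Q’s. Crucially, the expectation is taken under P itself — which is exactly the “P thinks P knows better” assumption:

```python
# Minimal sketch: under P's own expectation, betting according to P
# beats betting according to any other distribution Q (Gibbs' inequality).
import numpy as np

P = np.array([0.7, 0.2, 0.1])  # P's prior over three outcomes
Q = np.array([0.3, 0.4, 0.3])  # Q's prior over the same outcomes

score_P = np.sum(P * np.log(P))  # E_P[log P]
score_Q = np.sum(P * np.log(Q))  # E_P[log Q]

print(score_P >= score_Q)        # True, always
print(score_P - score_Q)         # the gap is KL(P || Q) >= 0
```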
It’s perfectly possible for P to think Q knows better. For example, P might think Q just knows all the facts. Then it must be that P doesn’t know what Q is (or else P would also know all the facts). But given the opportunity to learn Q, P would prefer to do so; whereupon the updated P would equal Q.
Similar things can happen in less extreme circumstances, where Q is merely expected to know some things that P doesn’t. P could still prefer to switch entirely over to Q’s beliefs, because, by P’s own lights, betting according to Q has higher expected value. It’s also possible that P trusts Q only to an extent, so P moves closer to Q but does not move all the way. This can even be true in the Aumann agreement setting: P and Q can both move to a new distribution R, because P has some new information for Q, but Q also has some new information for P. (In general, R need not even be a ‘compromise’ between P and Q; it could be something totally different.)
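Here’s a toy version of the “Q knows things P doesn’t” case (again my own construction, just to illustrate the expected-value claim above). A coin’s bias is either 0.2 or 0.8; P is 50/50 on which, while P models Q as knowing the bias outright. By P’s own expectation, moving its bet toward Q’s belief — partially or entirely — raises its expected log score on the next flip:

```python
import numpy as np

thetas = np.array([0.2, 0.8])    # the two possible coin biases
p_theta = np.array([0.5, 0.5])   # P's beliefs about which bias is real

def expected_log_score(belief_given_theta):
    """P's expected log score on the next flip, where
    belief_given_theta[i] is the probability of heads P would bet
    if the bias were thetas[i] (which Q knows, but P doesn't)."""
    b = belief_given_theta
    return np.sum(p_theta * (thetas * np.log(b) + (1 - thetas) * np.log(1 - b)))

# w = 0: keep P's marginal of 0.5.  w = 1: adopt Q's belief outright.
for w in [0.0, 0.5, 1.0]:
    belief = (1 - w) * 0.5 + w * thetas  # move a fraction w of the way to Q
    print(w, expected_log_score(belief)) # increases with w here
```

In this toy model full deference (w = 1) is best by P’s own lights; if P only partly trusted Q’s information, something like an intermediate w could be optimal instead.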
So it isn’t crazy at all for rational agents to prefer each other’s beliefs.
A weaker form of the common prior assumption could assert that this is always the case: two rational agents need not have the same priors, but upon learning each other’s priors, would then come to agree. (Either P updates to Q, or Q updates to P, or P and Q together update to some R.)
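Here’s one toy case where this weak form holds (my own construction, and it leans on a strong assumption: both priors arise from one shared “pre-prior” plus private evidence). Knowing the shared pre-prior, each agent can recover the other’s evidence from the other’s prior, and both land on the same posterior R:

```python
import numpy as np

def normalize(v):
    return v / v.sum()

pre_prior = np.array([0.5, 0.5])  # shared starting point over {h1, h2}
lik_P = np.array([0.9, 0.3])      # likelihood of P's private evidence
lik_Q = np.array([0.8, 0.2])      # likelihood of Q's private evidence

P = normalize(pre_prior * lik_P)  # [0.75, 0.25]
Q = normalize(pre_prior * lik_Q)  # [0.80, 0.20]

# On exchanging priors, each agent can back out the other's likelihood
# and combine both pieces of evidence into a common posterior.
R = normalize(pre_prior * lik_P * lik_Q)
print(P, Q, R)                    # R is about [0.92, 0.08]
```

Note that R here is more extreme than either P or Q, matching the earlier point that R need not be a ‘compromise’ between them.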
But even this weaker assumption isn’t always true!