Roko, when you run into a case of “group win / individual loss” on epistemic rationality you should consider that a Can’t Happen, like violating conservation of momentum or something.
In this case, you need to communicate one kind of information (likelihood ratios) and update on the product of those likelihood ratios, rather than trying to communicate the final belief. But the Can’t Happen is a general rule.
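The pooling rule described here, sharing likelihood ratios and multiplying them against a common prior instead of sharing final beliefs, can be sketched as follows. This is a minimal illustration; the function name and the example numbers are mine, not from the thread, and it assumes each agent's evidence is conditionally independent given the hypothesis.

```python
def posterior_from_lrs(prior, lrs):
    """Combine a shared prior with independent likelihood ratios.

    prior: P(H) before anyone's evidence is taken into account.
    lrs:   each agent's likelihood ratio P(E_i | H) / P(E_i | not-H),
           assumed conditionally independent given H.
    """
    odds = prior / (1 - prior)          # convert prior to odds form
    for lr in lrs:
        odds *= lr                      # multiply in each agent's evidence
    return odds / (1 + odds)            # convert back to a probability

# Two agents start from a shared 0.5 prior and each hold 4:1 evidence
# for H. Pooling likelihood ratios: odds = 1 * 4 * 4 = 16, so
# P(H) = 16/17, roughly 0.94.
p = posterior_from_lrs(0.5, [4, 4])
```

Note that each agent alone would report a posterior of 4/5 = 0.8; averaging those final beliefs stays at 0.8, whereas multiplying the likelihood ratios correctly yields about 0.94, which is why the likelihood ratios, not the posteriors, are the thing to communicate.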
Really!? No exceptions?
This doesn’t feel right. If it is right, it sounds important. Please could you elaborate?
I’d like to see a proof if it’s that fundamental. Is it theorem x.xx in one of the Aumann agreement papers?