I don’t think that can be true in general. One of my examples had someone invoking Aumann’s agreement theorem as follows:
So it seems to me that the Aumann’s Agreement Theorem is irrelevant in the real life… until you gain enough rationality and social skills to find and recognize other rational people, and to gain their trust.
Interpreting “rational people” there in a quantitative, “more rational than the usual standard” sense won’t work, because Aumann’s agreement theorem assumes perfect Bayesian rationality, not merely better-than-usual rationality. I reckon the sentence I quoted is just plain false unless one interprets “rational people” in an absolute sense.
Yes, that statement is just plain false. The problem behind this is people referring to game-theoretic agents as “[perfectly] rational people”, and then others who hear them assuming that the ‘rational people’ of game theory are the same kind as real-life ‘rational people’.
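For concreteness, here is a minimal sketch (in Python) of what the theorem actually delivers for perfectly Bayesian agents with a common prior: two such agents who alternately announce their posteriors for an event, each updating on what the other’s announcement reveals, must end up announcing the same number (the Geanakoplos–Polemarchakis “we can’t disagree forever” dynamic). The state space, partitions, and event below are made-up illustrative choices, not anything from the discussion above; the point is only that the guarantee leans on exact Bayesian conditioning from a shared prior at every step.

```python
from fractions import Fraction

states = range(9)
prior = {w: Fraction(1, 9) for w in states}   # common prior (uniform)
event = {0, 1, 4, 5, 8}                        # the event being discussed

# Each agent privately learns only which cell of their own partition obtains.
partition = {
    1: [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}],      # agent 1 sees "rows"
    2: [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}],      # agent 2 sees "columns"
}

def prob(event_, given):
    """P(event_ | given) under the common prior."""
    return sum(prior[w] for w in event_ & given) / sum(prior[w] for w in given)

# info[i][w] = set of states agent i would consider possible if w were true.
info = {i: {w: next(c for c in cells if w in c) for w in states}
        for i, cells in partition.items()}

true_state = 0
speaker = 1
for round_ in range(10):
    p1 = prob(event, info[1][true_state])
    p2 = prob(event, info[2][true_state])
    print(f"round {round_}: agent 1 says {p1}, agent 2 says {p2}")
    if p1 == p2:
        print("agreement reached")
        break
    # The speaker announces their posterior; the listener keeps only those
    # states in which the speaker would have made exactly that announcement.
    listener = 3 - speaker
    announce = {w: prob(event, info[speaker][w]) for w in states}
    for w in states:
        info[listener][w] = info[listener][w] & {
            v for v in states if announce[v] == announce[w]}
    speaker = listener
```

Running this prints a couple of rounds of disagreement (2/3 vs 1/3) before both agents settle on 1/2. Drop the common prior, or make either agent’s updating merely approximate rather than exactly Bayesian, and the convergence guarantee goes away, which is the gap the comments above are pointing at.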