We know quite well that people are not rational. Why would you model them as rational agents in game theory?
As I wrote above, in the limit of large stacks, long pondering times, and decisions jointly made by large organizations, people do actually behave rationally. As an example: Bidding for oil drilling rights can be modelled as auctions with incomplete and imperfect information. Naïve bidding strategies fall prey to the winner’s curse. Game theory can model these situations as Bayesian games and compute the emerging Bayesian Nash Equilibria.
Guess what? The companies actually bid the way game theory predicts!
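To make the winner’s curse concrete, here is a minimal simulation sketch of a first-price common-value auction. The specific numbers (a common value drawn uniformly between 50 and 150, ten bidders, Gaussian signal noise) are illustrative assumptions, and the “shaded” strategy is only a rough correction for the selection effect, not the actual Bayesian Nash equilibrium bid:

```python
import numpy as np

rng = np.random.default_rng(0)

def winner_profit(n_bidders=10, n_auctions=200_000, noise_sd=10.0, shade=0.0):
    """First-price, common-value auction: the tract is worth the same unknown V
    to every bidder, but each bidder only sees a noisy signal of V.  Everyone
    bids (signal - shade); the highest bid wins and pays what it bid.
    Returns the winner's average profit, i.e. the mean of (V - winning bid)."""
    true_value = rng.uniform(50.0, 150.0, size=n_auctions)       # unknown common value V
    noise = rng.normal(0.0, noise_sd, size=(n_auctions, n_bidders))
    signals = true_value[:, None] + noise                        # s_i = V + eps_i
    winning_bid = (signals - shade).max(axis=1)                  # winner pays its own bid
    return float((true_value - winning_bid).mean())

# Naive strategy: bid your own unbiased estimate of V (shade = 0).  The auction
# selects the most optimistic signal, so the winner systematically overpays.
print("naive winner's average profit:", winner_profit(shade=0.0))

# Rough correction for the winner's curse: shade the bid by the expected maximum
# of the signal noise, i.e. condition on the event "my signal turned out to be
# the highest".  (A back-of-the-envelope fix, not the exact Bayesian Nash
# equilibrium bid, which would typically shade further to keep a positive margin.)
n, sd = 10, 10.0
expected_max_noise = rng.normal(0.0, sd, size=(200_000, n)).max(axis=1).mean()
print("shaded winner's average profit:", winner_profit(shade=expected_max_noise))
```

Run as written, the naive winner loses roughly the expected maximum of the signal noise on every auction it wins, while the shaded bidder approximately breaks even.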
“in the limit of large stacks, long pondering times, and decisions jointly made by large organizations, people do actually behave rationally.”

I still don’t think so. To be a bit more precise: people certainly behave rationally sometimes, and I agree that things like long deliberation or joint decision-making (given sufficient diversity of the deciding group) tend to increase rationality. But I don’t think that, even in the limit, rationality is a “safe” or a “fine” assumption.
Example: international politics. Another example: organized religions.
I also think that, in analyzing this issue, there is a danger of constructing rational narratives post factum via the claim of revealed preferences. Say entity A decides to do B. It is very tempting to say “Aha! It would be rational for A to do B if A really wanted X; therefore A wants X and behaves rationally.” And certainly, that does happen on a regular basis. However, it also happens that A really wants Y and decides to do B on non-rational grounds, or simply makes a mistake. In that case our analysis of A’s rationality is wrong, and it is hard for us to detect that without knowing whether A really wants X or Y.

That may well be so, but this is a rather different claim than the “Rational Choice assumption”.