But that’s not a problem for game theory, because the Rational Choice assumption (perhaps allowing small deviations) is perfectly fine in the real world.
Not in the real world I’m familiar with.
Unless you have more specific information about the problem in question, it’s the best concept to consider. At least in the limit of large stakes, long pondering times, and decisions made jointly by large organizations, the assumption holds. Although, thinking about it, I’d really like to see a game theory for predictably irrational agents, suffering from exactly those biases untrained humans fall prey to.
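Such a theory does in fact exist: quantal response equilibrium (QRE, McKelvey & Palfrey) models agents who err predictably, picking better actions more often but not always. A minimal sketch of a logit QRE for a 2×2 game, with illustrative payoffs and rationality parameter (not taken from any particular study):

```python
import math

def logit_qre(payoff_row, payoff_col, lam=2.0, iters=2000):
    """Fixed-point iteration for a logit quantal response equilibrium
    of a 2x2 game. payoff_row[i][j] is the row player's payoff when row
    plays action i and column plays action j; payoff_col likewise for
    the column player. lam is the rationality parameter: lam = 0 means
    uniformly random play, lam -> infinity approaches best responses.
    Returns (p, q): the probability that each player picks action 0."""
    p, q = 0.5, 0.5
    for _ in range(iters):
        # Row player's expected payoffs against the column mix (q, 1-q),
        # then a logit (softmax) response instead of a strict best reply.
        u0 = q * payoff_row[0][0] + (1 - q) * payoff_row[0][1]
        u1 = q * payoff_row[1][0] + (1 - q) * payoff_row[1][1]
        p = math.exp(lam * u0) / (math.exp(lam * u0) + math.exp(lam * u1))
        # Column player responds to the row mix (p, 1-p) the same way.
        v0 = p * payoff_col[0][0] + (1 - p) * payoff_col[1][0]
        v1 = p * payoff_col[0][1] + (1 - p) * payoff_col[1][1]
        q = math.exp(lam * v0) / (math.exp(lam * v0) + math.exp(lam * v1))
    return p, q

# Matching pennies: by symmetry the QRE coincides with the Nash
# equilibrium (0.5, 0.5); with asymmetric payoffs it shifts away,
# matching observed human play better than Nash does.
pennies_row = [[1, -1], [-1, 1]]
pennies_col = [[-1, 1], [1, -1]]
p, q = logit_qre(pennies_row, pennies_col)
```

For symmetric matching pennies the mixed Nash and the QRE agree, so the interesting cases are asymmetric games, where finite λ predicts systematic deviations from Nash play.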
I am not convinced about that at all.
Let’s consider the top headlines of the moment: Russian separatists in Ukraine shot down a passenger jet, and the IDF invaded Gaza. Both situations (the separatist movement and the Middle Eastern conflict) could be modelled in the game theory framework. Would you be comfortable applying the “Rational Choice assumption” to these situations?
I would attribute the shooting down of the passenger jet to incompetence; the IDF invading Gaza yet again certainly makes sense from their perspective.
Considering the widespread false information in both cases, I’d argue that by and large the agents (mostly the larger ones like Russia and Israel, less so the separatists and the Palestinian fighters) act rationally on the information they have. Take Russia, which is neither actively fighting the separatists nor openly supporting them. One could argue that this is the best strategy for territorial expansion: it strengthens the separatists while avoiding a UN mission, and spreading false information does its part.
I don’t know enough about the Palestinian fighters and the information they act on to evaluate whether or not their behaviour makes sense.
I only consider instrumental rationality here, not epistemic rationality.
That may well be so, but this is a rather different claim than the “Rational Choice assumption”.
We know quite well that people are not rational. Why would you model them as rational agents in game theory?
As I wrote above, in the limit of large stakes, long pondering times, and decisions made jointly by large organizations, people do actually behave rationally. As an example: bidding for oil drilling rights can be modelled as an auction with incomplete and imperfect information. Naïve bidding strategies fall prey to the winner’s curse. Game theory can model these situations as Bayesian games and compute the emerging Bayesian Nash equilibria.
Guess what? The companies actually bid the way game theory predicts!
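The winner’s curse itself is easy to demonstrate with a Monte Carlo sketch of a common-value auction. The value distribution, noise level, and bidder count below are illustrative assumptions, not figures from the oil-lease literature:

```python
import random

def naive_bidding_profit(n_bidders=5, n_auctions=100_000, seed=0):
    """Monte Carlo sketch of the winner's curse in a common-value auction.

    Each auction has a common value V ~ Uniform(0, 100). Each bidder
    observes a noisy, unbiased private signal s_i = V + noise and
    naively bids that signal. The highest signal wins and pays its bid,
    so the winner is systematically the most over-optimistic estimator:
    the naive strategy loses money on average even though every
    individual signal is unbiased."""
    rng = random.Random(seed)
    total_profit = 0.0
    for _ in range(n_auctions):
        value = rng.uniform(0, 100)
        signals = [value + rng.uniform(-10, 10) for _ in range(n_bidders)]
        winning_bid = max(signals)           # naive rule: bid your signal
        total_profit += value - winning_bid  # winner pays their own bid
    return total_profit / n_auctions

avg_profit = naive_bidding_profit()  # negative: the winner's curse
```

The Bayesian-game remedy is to shade the bid: condition not on “my signal is s” but on “my signal is s *and* it is the highest of n”, which is exactly what the equilibrium bidding strategies encode.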
I still don’t think so. To be a bit more precise: certainly people behave rationally sometimes, and I will agree that things like long deliberation or joint decisions (given sufficient diversity of the deciding group) tend to increase rationality. But I don’t think that, even in the limit, assuming rationality is a “safe” or a “fine” assumption.
Example: international politics. Another example: organized religions.
I also think that in analyzing this issue there is a danger of constructing rational narratives post factum via the claim of revealed preferences. Say entity A decides to do B. It’s very tempting to say, “Aha! It would be rational for A to do B if A really wants X; therefore A wants X and behaves rationally.” And certainly, that does happen on a regular basis. However, it also happens that A really wants Y and decides to do B on non-rational grounds, or simply makes a mistake. In that case our analysis of A’s rationality is false, but it’s hard for us to detect that without knowing whether A really wants X or Y.
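The underdetermination can be made concrete in a few lines. The goals, actions, and utility numbers below are entirely hypothetical, chosen only to show that observing the choice of B cannot separate the two hypotheses:

```python
def best_action(utility, actions):
    """Return the action that maximizes the given utility function."""
    return max(actions, key=utility)

# Hypothetical setup: does entity A want goal X or goal Y? Under BOTH
# utility functions, action "B" is optimal, so observing A choose B
# confirms the "A wants X and is rational" narrative and the
# "A wants Y" narrative equally well.
actions = ["B", "C", "D"]
utility_if_wants_X = {"B": 3, "C": 2, "D": 1}.get  # B optimal for X
utility_if_wants_Y = {"B": 5, "C": 0, "D": 4}.get  # B also optimal for Y

choice_under_X = best_action(utility_if_wants_X, actions)
choice_under_Y = best_action(utility_if_wants_Y, actions)
# Both hypotheses predict the same observed behaviour, "B" — and a
# non-rational A that stumbled into B is indistinguishable from either.
```

The observation only becomes informative when the hypotheses predict *different* actions, which is exactly the data we usually lack when constructing the narrative after the fact.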