An “individual” is composed of different subsystems trying to optimize different things, and the individual can’t optimize them all.
I don’t think you can deny that there is indeed a huge gap between group rationality and individual rationality. As individuals, we’re trying to better approximate Bayesian rationality and expected utility maximization, whereas as groups, we’re still struggling to get closer to Pareto-efficiency.
An interesting question is why this gap exists, given that an individual is also composed of different subsystems trying to optimize different things. I can see at least three reasons:
1. The subsystems within an individual are stuck with each other for life, so they're playing a version of an indefinitely iterated Prisoner's Dilemma with a very low probability of ending after each round. That makes cooperation easier (see the sketch after this list).
2. The subsystems all have access to a common pool of memory, which reduces the asymmetric-information problem that plagues groups of individuals.
3. Ultimately, the fates of all the subsystems are tied together. There is no reason for evolution to have designed them to be truly selfish. That they optimize different values is a heuristic that maximized overall fitness, so it stands to reason that the combined effect of the subsystems would be a fair approximation of rationality.
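To make reason 1 concrete, here is a minimal simulation sketch (my own illustration, not part of the original argument). It assumes standard Prisoner's Dilemma payoffs T > R > P > S and a grim-trigger partner, and compares always-cooperate against always-defect when each round continues with probability p:

```python
# Sketch: indefinitely iterated Prisoner's Dilemma against a grim-trigger
# partner, with continuation probability p after each round. Illustrates
# that once p is high enough, defecting stops paying off.

import random

# Standard PD payoffs for the row player: T > R > P > S
T, R, P, S = 5, 3, 1, 0

def expected_payoff(strategy, p, trials=20000):
    """Average total payoff of `strategy` against grim trigger, where the
    game ends after each round with probability 1 - p."""
    total = 0.0
    for _ in range(trials):
        partner_cooperates = True
        payoff = 0.0
        while True:
            my_move = strategy(partner_cooperates)
            if my_move == "C" and partner_cooperates:
                payoff += R
            elif my_move == "D" and partner_cooperates:
                payoff += T
                partner_cooperates = False  # grim trigger: defects forever after
            elif my_move == "C" and not partner_cooperates:
                payoff += S
            else:
                payoff += P
            if random.random() > p:  # round fails to continue with prob 1 - p
                break
        total += payoff
    return total / trials

def always_cooperate(partner_cooperates):
    return "C"

def always_defect(partner_cooperates):
    return "D"

for p in (0.2, 0.9):  # low vs high continuation probability
    print(f"p={p}: cooperate={expected_payoff(always_cooperate, p):.2f}, "
          f"defect={expected_payoff(always_defect, p):.2f}")
```

With p = 0.2, defection comes out ahead (roughly 5.25 vs 3.75 in expectation); with p = 0.9, cooperation does (roughly 30 vs 14). That is the sense in which being stuck together for life makes cooperation easier.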
Also, most subsystems are not just boundedly rational; they have fairly easily characterized hard bounds on their rationality. A boundedly rational individual can use external aids like paper and pencil to think more steps ahead, albeit at a cost, whereas the boundedly rational agents of which I am composed (at least most of them) simply can't trade off resources for deeper analysis at all, which makes their behavior relatively predictable to one another.