“In a way, the difficulty of group rationality makes sense. After all, rationality (or the potential for it) is almost a defining characteristic of individuality. If individuals from a certain group always acted for the good of the group, then what makes them individuals, rather than interchangeable parts of a single entity? For example, in Star Trek, don’t we see a Borg cube as one individual precisely because it is too rational as a group? Since achieving perfect Borg-like group rationality presumably isn’t what we want anyway, maybe settling for second best isn’t so bad.”
An intriguing statement. However, you can extend it in the other direction, inside a person. A group is made up of different people with different values, and so it fails to achieve optimal satisfaction of everyone’s values. An “individual” is composed of different subsystems trying to optimize different things, and the individual can’t optimize them all. This is an intrinsic property of life / optimizers / intelligence. I don’t think you can use it to define the level at which individuality exists. (In fact, I think trying to define a single such level is hopelessly wrongheaded.) If you did, I would not be an individual.
“An ‘individual’ is composed of different subsystems trying to optimize different things, and the individual can’t optimize them all.”
I don’t think you can deny that there is indeed a huge gap between group rationality and individual rationality. As individuals, we’re trying to better approximate Bayesian rationality and expected utility maximization, whereas as groups, we’re still struggling to get closer to Pareto-efficiency.
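To make the two standards concrete, here is a minimal sketch (my own toy example, with made-up payoff and probability numbers): individual rationality as picking the action with the highest expected utility, and group rationality as reaching an outcome that no one could improve on without making someone else worse off.

```python
# Toy illustration (made-up numbers): individual rationality as expected
# utility maximization, group rationality as Pareto efficiency.

def expected_utility(outcome_probs, utility):
    # Probability-weighted sum of utilities over possible outcomes.
    return sum(p * utility[o] for o, p in outcome_probs.items())

utility = {"win": 10.0, "lose": 0.0}
actions = {
    "safe":  {"win": 0.5, "lose": 0.5},
    "risky": {"win": 0.6, "lose": 0.4},
}
best_action = max(actions, key=lambda a: expected_utility(actions[a], utility))
print(best_action)  # "risky" -- the individually rational choice here

def pareto_efficient(outcome, outcomes):
    # An outcome is Pareto-efficient if no alternative makes someone
    # better off without making anyone worse off.
    return not any(
        all(alt[i] >= outcome[i] for i in range(len(outcome)))
        and any(alt[i] > outcome[i] for i in range(len(outcome)))
        for alt in outcomes
    )

# Payoffs to (player 1, player 2) in a one-shot Prisoner's Dilemma:
# (C, C), (C, D), (D, C), (D, D).
outcomes = [(3, 3), (0, 5), (5, 0), (1, 1)]
print(pareto_efficient((1, 1), outcomes))  # False: mutual defection, which
                                           # individual rationality selects,
                                           # fails the group standard
print(pareto_efficient((3, 3), outcomes))  # True: mutual cooperation passes it
```

The toy Prisoner’s Dilemma payoffs make the gap visible in one place: the outcome that individually rational play selects (mutual defection) is exactly the one that fails the group standard.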
An interesting question is why this gap exists, given that an individual is also composed of different subsystems trying to optimize different things. I can see at least three reasons:
1. The subsystems within an individual are stuck with each other for life, so they’re playing a version of the indefinitely iterated Prisoner’s Dilemma with a very low probability of ending after each round. That makes cooperation much easier to sustain (see the sketch after this list).

2. The subsystems all have access to a common pool of memory, which reduces the asymmetric-information problem that plagues groups of individuals.

3. Ultimately, the fates of all the subsystems are tied together, so there was no reason for evolution to design them to be truly selfish. That they optimize different values is a heuristic that maximized overall fitness, so it stands to reason that the combined effect of the subsystems is a fair approximation of rationality.
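Here is the sketch promised in reason 1 (my own payoff numbers; grim trigger is just the simplest illustrative strategy): in an indefinitely iterated Prisoner’s Dilemma, cooperation is stable whenever the probability of another round is high enough that the one-shot gain from defecting is outweighed by the lost future cooperation, and for subsystems stuck together for life that probability is close to 1.

```python
# Sketch of reason 1 (made-up payoffs): an indefinitely iterated Prisoner's
# Dilemma where each round is followed by another with probability delta.
# Under a grim-trigger strategy (cooperate until the other side defects,
# then defect forever), cooperation is stable when delta is high enough.

T, R, P = 5.0, 3.0, 1.0  # temptation, reward for mutual cooperation, punishment

def cooperation_is_stable(delta):
    # Expected value of cooperating forever vs. defecting once and then
    # facing mutual defection in every later round.
    cooperate_value = R / (1 - delta)
    defect_value = T + delta * P / (1 - delta)
    return cooperate_value >= defect_value

threshold = (T - R) / (T - P)  # closed-form version of the same condition
print(threshold)                     # 0.5 with these payoffs
print(cooperation_is_stable(0.99))   # True: subsystems stuck together for life
print(cooperation_is_stable(0.30))   # False: a likely one-shot interaction
```

The exact threshold depends on the payoffs, but the direction of the effect is the one the reason points to: the longer the expected relationship, the easier cooperation is to sustain.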
Also, most subsystems are not just boundedly rational; they have fairly easily characterized hard bounds on their rationality. Boundedly rational individuals have external aids, like paper and pencil, that let them think more steps ahead, albeit at a cost, whereas the boundedly rational agents of which I am composed (most of them, at least) simply can’t trade resources for deeper analysis at all, which makes their behavior relatively predictable to one another.