Intelligence as a bad
An interesting new article, “Cooperation and the evolution of intelligence”, uses a simple one-hidden-layer neural network to study the selection for intelligence in iterated prisoner’s dilemma (IPD) and iterated snowdrift (ISD) games.
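For concreteness, here is a minimal sketch of the kind of agent involved. This is not the paper’s actual model (I’m not reproducing its encoding, payoffs, or evolutionary setup); it just assumes the network maps the two players’ previous moves to a cooperation probability, with hidden-layer size standing in for “intelligence”.

```python
# Minimal sketch, NOT the paper's actual model: a one-hidden-layer network
# whose inputs are the two players' previous moves and whose output is the
# probability of cooperating this round. Hidden-layer size stands in for
# "intelligence"; payoffs are the standard IPD values (T=5, R=3, P=1, S=0).
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    def __init__(self, n_hidden):
        self.w1 = rng.normal(size=(2, n_hidden))   # previous moves -> hidden layer
        self.w2 = rng.normal(size=n_hidden)        # hidden layer -> cooperate logit

    def move(self, my_prev, opp_prev):
        h = np.tanh(np.array([my_prev, opp_prev]) @ self.w1)
        p_cooperate = 1.0 / (1.0 + np.exp(-(h @ self.w2)))
        return 1 if rng.random() < p_cooperate else 0   # 1 = cooperate, 0 = defect

def play_ipd(a, b, rounds=50):
    payoff = {(1, 1): (3, 3), (1, 0): (0, 5), (0, 1): (5, 0), (0, 0): (1, 1)}
    ma, mb, score_a, score_b = 1, 1, 0, 0
    for _ in range(rounds):
        # Both agents move simultaneously, seeing only last round's moves.
        ma, mb = a.move(ma, mb), b.move(mb, ma)
        pa, pb = payoff[(ma, mb)]
        score_a, score_b = score_a + pa, score_b + pb
    return score_a, score_b

print(play_ipd(Agent(n_hidden=2), Agent(n_hidden=8)))
```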
The article claims that increased intelligence decreased cooperation in IPD and increased cooperation in ISD. However, if you look at figure 4, which graphs that data, you’ll see that on average it decreased cooperation in both cases. They state that it increased cooperation in ISD based on a Spearman rank test. That test is deceptive in this case because it ignores the magnitude of the differences between datapoints, so the datapoints on the right, with a tiny but consistent increase in cooperation, outweigh the datapoints on the left, with large decreases in cooperation.
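To see how this can happen, here is a synthetic example (made-up numbers, not the paper’s data): the rank correlation between intelligence and the change in cooperation comes out strongly positive even though the mean change is negative.

```python
# Illustration of the objection using made-up numbers (NOT the paper's data):
# a Spearman rank test can report a strong positive trend even though the
# mean change in cooperation is negative, because rank correlation ignores
# the magnitude of each change.
import numpy as np
from scipy.stats import spearmanr

intelligence = np.arange(10)   # stand-in for whatever intelligence measure is on the x-axis
# A few large drops in cooperation at low intelligence, followed by tiny
# but fairly consistent gains at higher intelligence.
change_in_cooperation = np.array(
    [-0.40, -0.35, -0.30, 0.02, 0.01, 0.03, 0.02, 0.04, 0.03, 0.05])

rho, p = spearmanr(intelligence, change_in_cooperation)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")               # strongly positive
print(f"mean change  = {change_in_cooperation.mean():+.3f}")  # negative
```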
This suggests that intelligence is a negative externality, like pollution: something that benefits the individual at a cost to society. The authors posit the evolution of intelligence as an arms race between members of the species.
ADDED: The things we consider good generally require intelligence, if we suppose (as I expect) that consciousness requires intelligence. So it wouldn’t even make sense to conclude that intelligence is bad. Plus, intelligence itself might count as a good.
However, humans and human societies are currently near some evolutionary equilibrium. It’s very possible that individual intelligence has not evolved past its current level precisely because it sits at that equilibrium, beyond which higher individual intelligence results in lower social utility. In fact, if you believe SIAI’s narrative about the danger of artificial intelligence and the difficulty of friendly AI, I think you would have to conclude that higher individual intelligence results in lower expected social utility, by human measures of utility.