Intelligence as a bad
An interesting new article, “Cooperation and the evolution of intelligence”, uses a simple one-hidden-layer neural network to study the selection for intelligence in iterated prisoner’s dilemma (IPD) and iterated snowdrift (ISD) games.
The article claims that increased intelligence decreased cooperation in IPD, and increased cooperation in ISD. However, if you look at figure 4, which graphs that data, you’ll see that on average it decreased cooperation in both cases. They state that it increased cooperation in ISD based on a Spearman rank test. This test is deceptive in this case because it ignores the magnitude of the differences between datapoints, so the datapoints on the right, with a tiny but consistent increase in cooperation, outweigh the datapoints on the left, with large decreases in cooperation.
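To see concretely how the rank test can point the opposite way from the average, here is a minimal sketch with made-up numbers (not the paper’s data; the x values just stand in for whatever parameter the figure varies): two points with large drops plus many points with tiny, consistent gains give a perfectly positive Spearman correlation even though the mean change is negative.

```python
# Toy illustration (invented numbers, not the paper's data) of how a rank
# test can disagree in sign with the average effect: ranks ignore magnitudes.
import numpy as np
from scipy.stats import spearmanr

x = np.arange(1, 11)                          # hypothetical x-axis parameter
change = np.array([-5.0, -4.0,                # two points with large drops in cooperation
                   0.01, 0.02, 0.03, 0.04,    # many points with tiny,
                   0.05, 0.06, 0.07, 0.08])   # consistent gains

rho, p = spearmanr(x, change)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")   # rho = 1.00: "cooperation increases"
print(f"mean change  = {change.mean():.2f}")      # -0.86: it decreases on average
```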
This suggests that intelligence is an externality, like pollution: something that benefits the individual at a cost to society. They posit the evolution of intelligence as an arms race between members of the species.
ADDED: The things we consider good generally require intelligence, if we suppose (as I expect) that consciousness requires intelligence. So it wouldn’t even make sense to conclude that intelligence is bad. Plus, intelligence itself might count as a good.
However, humans and human societies are currently near some evolutionary equilibrium. It’s very possible that individual intelligence has not evolved past its current levels because it is at an equilibrium, beyond which higher individual intelligence results in lower social utility. In fact, if you believe SIAI’s narrative about the danger of artificial intelligence and the difficulty of friendly AI, I think you would have to conclude that higher individual intelligence results in lower expected social utility, for human measures of utility.
...And data from humans and not small neural networks tends to suggest that intelligence is an externality, like vaccination. See links in http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r01
(As one would expect from a richer model of economics including things like the difficulty of capturing all gains from innovation.)
That’s all very interesting; so I wonder, who are the free riders on national intelligence? The management class? Corporations? Masters of the dark triad?
All of us. We’re free-riding on the Pasteurs and Newtons and all the other engineers and scientists who have come before us.
True enough, but not what I was referring to.
ETA: I’m referring to the following (along with some of the other studies linked by gwern):
JoshuaZ is referring to something hugely more generic and obvious (revolutionary geniuses and cumulative knowledge vs significantly above average IQ workers in contemporary economies).
It’s true, but a weird use of “free-riding”. In that sense you could also say that every child is free-riding on its parents’ investment in raising it.
Strictly speaking, everyone who isn’t paying the innovator but is benefiting from the innovation is a free-rider. Who benefits disproportionately? I guess to the extent that all economic wealth flows to anyone disproportionately (perhaps the rich), they benefit disproportionately from innovation as well.
I don’t know if that’s serious or snark, but I don’t think it’s necessarily the rich. For instance, the management class could absorb the positive externalities of their more intelligent and less socially adept subordinates while only being marginally richer. So if, as a whole, the management class produces few to no benefits itself, yet takes a chunk of economic wealth roughly the size of what you’d expect from the positive externalities of intelligence, then you could potentially draw some interesting conclusions.
ETA: I’m using this as an example of the kind of thing I’m talking about, and I’m not supporting it as a conclusion or even a hypothesis.
Woop, woop, group selection being presumed to win against individual selection alert.
No. Group selection being presumed to be a force in evolution that can counter individual selection.
This is not the place for a group selection debate. My well-informed opinion, expressed in other places on this site but deleted from the Wiki by Eliezer, is that the empirical evidence has always indicated group selection can occur, while the arguments against it have always been based on abstract mathematical models, all of which are now known to be fatally flawed in multiple ways.
I think there’s plenty of evidence that human societies are not near some evolutionary equilibrium. Can you name a human society that has lasted longer than a few hundred years? A few thousand years?
On the biological side, is there any evidence that we have reached an equilibrium? (I’m asking genuinely)
The consensus among biologists seems to be that social utility has zero to very little impact on evolution. See http://en.wikipedia.org/wiki/Group_selection
Higher levels of human intelligence result in lower expected social utility for some other species (we are better at hunting them). They do not result in lower expected social utility for humans, as we are generally good to other humans. Higher levels of individual intelligence have brought us the great achievements of humankind, with very few downsides. The concern with AGI is that it might treat humans as humans treat some other species.
If anything, the reason we don’t see a rapid rise of intelligence among human beings is that it does not provide much evolutionary benefit. In modern societies, people don’t die for being dumb (usually), and sexual selection doesn’t have much impact since most people only have children with a single partner.
Officially.
If intelligence correlates positively with social skills and popularity, smart males can spread their genes outside of their marriages. (Reading this, don’t imagine a nerd with IQ 190, but rather a jock with IQ 120. If he impregnates his average neighbor’s wife, he contributes to the global intelligence increase.)
On one hand, evolution appears to work in a punctuated manner, meaning that individual components of evolutionary systems are usually at equilibrium.
On the other hand, brain volume in our ancestors rose smoothly from 3 million years ago to the present.
On the other other hand, some Neanderthals had larger brains than modern humans.
You can’t simply assert that. It’s an empirical question. How have you tried to measure the downsides?
It seems so obvious to me that I didn’t bother… Here’s some empirical data: http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html . Anyways, if you really want to dispute the fact that we have progressed over the past few centuries, I believe the burden of proof rests on you.
I’d say the negative correlation between education and fertility has been established pretty firmly. As a simple demonstration: if you sort the information here by fertility rate in descending order, you’ll find that the countries with <2 children per woman are mostly first-world countries. There are more than a few countries in Europe, for instance, where immigration is the only thing keeping the population growth positive, and let’s not even get started on Japan. And it goes deeper than country-to-country comparisons; within a given country, the poor and less educated tend to have more children than the other guys. (China might be an exception to that, I’m not sure.) From what I know of population trends in recorded history, this has always been the case.
This doesn’t look good from an evolutionary point of view, if one is concerned with the long term instead of immediate x-risks and bioengineering etc. On the surface, at least, high education doesn’t seem to be an evolutionarily viable strategy. Whether this applies to raw, general intelligence… Dunno. But I wouldn’t be surprised if we’d reached an evolutionary equilibrium or a downswing.
I can’t find the quote now, but I distinctly remember reading that before recent times (20th century or so), the number of children surviving to reproductive age and lifetime expected reproductive value were much higher among the wealthy elite than the vast majority of the population. It was said there that wealthy women hired poor nursemaids to suckle their babies, enabling them to give birth every 12-18 months instead of every few years (after weaning) like the poor women did. And of course infant and general mortality was much higher among the poor, especially during epidemics.
Looking at it another way, world population multiplied during the last hundred years because average global wealth rose drastically. Poverty means Malthusian constraints on population size, so even with a high birth rate, in the end most children die without reproducing, because the population growth rate is vastly below the birth rate.
What about the Flynn effect?
I also strongly doubt the claim that human intelligence has stopped increasing. I was just offering an alternative hypothesis in case that proposition were true. Also, OP was arguing that intelligence stopped increasing at an evolutionary level which the Flynn effect doesn’t seem to contradict (after a quick skim of the Wikipedia page).
This sentence doesn’t really make sense. Intelligence in itself is not a “cost imposed on a third party” (the definition of an externality)… Perhaps you mean that intelligence leads to more externalities?
Furthermore, this study is definitely flawed, since it’s quite obvious that individual intelligence has done a great deal more good for society than bad. Is there even an argument about this?
The study itself isn’t modelling all aspects of society, just a very limited set of PD situations. That society has on the whole benefited from intelligence is due primarily to inventions and discoveries, which have no analog in PD. Maybe if one had a version where the more previous rounds of cooperation there have been, the higher the payoff of cooperation in future rounds, one might have something that approached that.
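To make that concrete, here is a toy sketch of what such a variant might look like; the payoff function, the bonus parameter, and the move encoding are my own invention, not anything from the paper. The reward for mutual cooperation grows with the number of past mutually cooperative rounds, loosely standing in for cumulative invention and discovery.

```python
# A toy variant of the PD (my own construction, not the paper's model) in
# which the reward for mutual cooperation grows with accumulated cooperation.
def payoff(my_move, their_move, mutual_coop_rounds, bonus=0.1,
           R=3.0, S=0.0, T=5.0, P=1.0):
    """Standard PD payoffs (T > R > P > S), except that the reward for mutual
    cooperation is scaled up by how many past rounds were mutually cooperative."""
    growth = 1.0 + bonus * mutual_coop_rounds
    if my_move == "C" and their_move == "C":
        return R * growth
    if my_move == "C" and their_move == "D":
        return S      # sucker's payoff
    if my_move == "D" and their_move == "C":
        return T      # temptation payoff
    return P          # mutual defection

# After 5 rounds of mutual cooperation, cooperating pays 3.0 * 1.5 = 4.5,
# approaching the one-shot temptation payoff of 5.0.
print(payoff("C", "C", mutual_coop_rounds=5))
```

With these made-up numbers, the longer cooperation has lasted, the smaller the gap between defecting and cooperating, which is roughly the flavour I have in mind.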
Saying that the study was flawed was indeed a bit strong. What I really meant is that OP’s conclusion was wrong (individual intelligence = bad for society).
That isn’t a flaw in the study. It would be a flaw in an interpretation of the study.
Your question isn’t well-defined, since most of the things we define as good require intelligence. But of course that also means my initial statement wasn’t well-defined. I’ll respond in the OP.
It sounds like nonsense to me. Chimps don’t have much of a society: society is bigger and better as a result of intelligence. Not that intelligence is the only difference—but still.
I don’t see how this follows at all. The fact that increasing the domination of nature (aka power) of entities that possess non-human values is potentially bad for possessors of human values doesn’t mean that possessors of human values shouldn’t try to become more powerful.
Using a historical example: The technology advantage of the Western powers was bad for Tokugawa-era Japanese values. That doesn’t imply that Tokugawa Japan should not have invested in technology research, even if Omega guaranteed safety from Western incursion. Deriving the conclusion that increased power was bad for local values requires research about the sociological effects of various technological changes.
I’m not making a general argument. SIAI makes a specific argument, that humans of present-day intelligence will inevitably construct an AI, and this AI will almost inevitably cause infinite negative utility by our values. If you believe that argument, then increasing intelligence decreases expected utility, QED.
Not QED—you just tripped over Simpson’s paradox. Higher intelligence could yield a higher chance of a positive AI outcome rather than a negative AI outcome.
This is an interesting point. But I think that a small lowering of human intelligence, say shifting the entire curve down by 20 points, would prevent us from ever developing AI. So at a point epsilon from where human intelligence is at now, an increase increases the risk from AI.
Hum. Well, it depends on our starting point, right? We’re at a point where it seems unlikely we’re too dumb to make any sort of AI at all, so we had better be on top of our game.
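To put some entirely made-up numbers on that trade-off: raising intelligence plausibly raises both the chance that an AGI gets built and the chance that, if built, it is aligned with us, and which effect dominates determines the sign.

```python
# Entirely invented numbers, just to show that the sign of the effect is not
# settled by the premise that unfriendly AI would be catastrophic.
def expected_utility(p_build, p_friendly_given_build,
                     u_friendly=100.0, u_unfriendly=-1000.0, u_no_ai=0.0):
    return (p_build * (p_friendly_given_build * u_friendly
                       + (1 - p_friendly_given_build) * u_unfriendly)
            + (1 - p_build) * u_no_ai)

print(expected_utility(p_build=0.5, p_friendly_given_build=0.10))  # "dumber": -445.0
print(expected_utility(p_build=0.9, p_friendly_given_build=0.95))  # "smarter": 40.5
```

Shift the conditional probability the other way and the conclusion flips, so the argument really does depend on those numbers.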
“Intelligence of what?” is an important question that you are eliding. Increasing AI intelligence when the AI doesn’t share our values (i.e. uFAI) decreases utility among those who share our values. That doesn’t say anything about increasing intelligence of entities that do share our values.
Lifetime earnings top out around IQ 130, IIRC.
A quick search doesn’t support this. That’s from the interesting post here, data from a longitudinal study starting in the 1930s.
It is worth noting that that paper shows IQ never stops mattering, but it does stop mattering as much after the 130s, and personality traits become much more important; the same is somewhat true of scientific achievement.
Based on that link, I was a little surprised that openness decreases income. Considering its correlation with crystallized knowledge, I would have expected no effect or a positive one.
If you broke it down by occupation, I’d guess the effect is coming from Openness driving people to careers that are paid less in cash and more in novelty or thinking.
Thanks. +4 SD earned 15-20% more. That’s quite a lot.
I’m not suggesting higher intelligence would be harmful to the individual with higher intelligence.