It’s very possible that individual intelligence has not evolved past its current level because it is at an equilibrium, beyond which higher individual intelligence results in lower social utility. In fact, if you believe SIAI’s narrative about the danger of artificial intelligence and the difficulty of friendly AI, I think you would have to conclude that higher individual intelligence results in lower expected social utility, by human measures of utility.
I don’t see how this follows at all. The fact that increasing the domination of nature (aka power) of entities that possess non-human values is potentially bad for possessors of human values doesn’t mean that possessors of human values shouldn’t try to become more powerful.
Using a historical example: The technology advantage of the Western powers was bad for Tokugawa-era Japanese values. That doesn’t imply that Tokugawa Japan should not have invested in technology research, even if Omega guaranteed safety from Western incursion. Deriving the conclusion that increased power was bad for local values requires research about the sociological effects of various technological changes.
I’m not making a general argument. SIAI makes a specific argument, that humans of present-day intelligence will inevitably construct an AI, and this AI will almost inevitably cause infinite negative utility by our values. If you believe that argument, then increasing intelligence decreases expected utility, QED.
Not QED; you just tripped over Simpson’s paradox. Higher intelligence could also raise the chance of a positive AI outcome rather than a negative one, conditional on an AI being built at all.
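A toy expected-utility calculation makes the structure of the disagreement concrete. All the numbers below are invented purely for illustration; the point is only that the sign of the effect depends on how the chance of friendliness-given-AI moves with intelligence, which is exactly what the “QED” assumes away.

```python
# Toy expected-utility model. All numbers are invented purely for
# illustration; nothing here is an empirical claim.

def expected_utility(p_build_ai, p_friendly_given_built,
                     u_friendly=100.0, u_unfriendly=-1000.0, u_no_ai=0.0):
    """Expected utility over three outcomes: friendly AI, unfriendly AI, no AI."""
    p_good = p_build_ai * p_friendly_given_built
    p_bad = p_build_ai * (1.0 - p_friendly_given_built)
    return p_good * u_friendly + p_bad * u_unfriendly + (1.0 - p_build_ai) * u_no_ai

# Present-day intelligence: AI very likely gets built, friendliness is hard.
eu_now = expected_utility(p_build_ai=0.90, p_friendly_given_built=0.2)

# Higher intelligence: AI slightly more likely to get built, but its builders
# are also better at getting the values right.
eu_smarter = expected_utility(p_build_ai=0.95, p_friendly_given_built=0.5)

print(eu_now)      # -702.0
print(eu_smarter)  # -427.5  (higher, even though AI is more likely to be built)
```

On those made-up numbers, being smarter raises the chance that an AI gets built at all and still raises expected utility, because it raises the chance of getting the values right by more. Whether the conditional probability actually moves that way is an empirical question, not something that follows from the premises.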
This is an interesting point. But I think that even a small lowering of human intelligence, say shifting the entire curve down by 20 points, would prevent us from ever developing AI. So within an epsilon of where human intelligence is now, increasing it increases the risk from AI.
Hmm. Well, it depends on our starting point, right? We’re already at a point where it seems unlikely that we’re too dumb to build any sort of AI at all, so we had better be on top of our game.
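To make the “starting point” claim concrete, here is the same toy model (reusing the expected_utility function from the sketch above) swept over a hypothetical intelligence scale, again with invented numbers: below some threshold nobody builds AI at all; just above it AI gets built but badly; well above it AI gets built and is more likely to go well.

```python
# Same toy model, reusing expected_utility from the sketch above, swept over
# a hypothetical "intelligence" scale. p_build: below a threshold nobody can
# build AI at all; above it, the chance of someone building one rises quickly.
# p_friendly: the chance of getting the values right improves more slowly.
# All numbers are invented for illustration only.

def p_build(iq):
    return 0.0 if iq < 100 else min(1.0, (iq - 100) / 20)

def p_friendly(iq):
    return min(1.0, max(0.0, (iq - 100) / 60))

for iq in (80, 100, 110, 120, 140, 160):
    print(iq, round(expected_utility(p_build(iq), p_friendly(iq)), 1))

# 80     0.0   (can't build AI at all)
# 100    0.0
# 110 -408.3   (smart enough to build it, not smart enough to build it right)
# 120 -633.3
# 140 -266.7
# 160  100.0   (smart enough to get it right)
```

On these made-up numbers the worst place to sit is just barely smart enough to build AI, which is the point: whether a marginal increase in intelligence helps or hurts depends entirely on where on the curve you start.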
“Intelligence of what?” is an important question that you are eliding. Increasing the intelligence of an AI that doesn’t share our values (i.e. uFAI) decreases utility for those who share our values. That says nothing about increasing the intelligence of entities that do share our values.