Not QED—you just tripped over Simpson’s paradox. Higher intelligence could yield a higher chance of a positive AI outcome rather than a negative AI outcome.
This is an interesting point. But I think that a small lowering of human intelligence, say shifting the entire curve down by 20 points, would prevent us from ever developing AI. So at least in a small neighborhood of where human intelligence is now, an increase raises the risk from AI.
Hmm. Well, it depends on our starting point, right? We’re at a point where it seems unlikely we’re too dumb to make any sort of AI at all, so we had better be on top of our game.
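For concreteness, the Simpson’s paradox point above can be illustrated with a small numeric sketch. The numbers here are the classic kidney-stone-treatment illustration, not anything from this discussion: a trend that holds within every subgroup can reverse in the aggregate, which is why an aggregate correlation between intelligence and bad outcomes wouldn’t settle the question.

```python
# Simpson's paradox: classic illustrative numbers (hypothetical for this debate).
# Within each subgroup, "treated" has the higher success rate,
# yet pooled across subgroups the ordering reverses.
group_a = {"treated": (81, 87), "control": (234, 270)}    # (successes, trials)
group_b = {"treated": (192, 263), "control": (55, 80)}

def rate(successes, trials):
    return successes / trials

# Within each subgroup, treatment looks better...
assert rate(*group_a["treated"]) > rate(*group_a["control"])
assert rate(*group_b["treated"]) > rate(*group_b["control"])

# ...but pooled across subgroups, treatment looks worse.
pooled_treated = (81 + 192, 87 + 263)    # 273 / 350
pooled_control = (234 + 55, 270 + 80)    # 289 / 350
assert rate(*pooled_treated) < rate(*pooled_control)
```

The reversal happens because the subgroups are unevenly mixed between the two arms, so the aggregate comparison is confounded by group membership.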