When we talk about the arrival of “human-level” AI, we don’t really care whether it is at human level at the various tasks that we humans work on. Rather, if we’re looking at AI risk, we care about AI that’s above human level in this sense: It can outwit us.
I can imagine some scenarios in which an AI trounces humans badly on its way to goal achievement while lacking most human areas of intelligence. These could be adversarial AIs: An algotrading AI that takes over the world economy in a day, or a military AI that defeats an enemy nation in an instant with some nasty hack.
It could even be an AI with some arbitrary goal—a paper-clipper—that brings into play a single fiendishly effective self-defense mechanism.
The point is that the range of abilities can be very narrow, and the AI can still be working for its master as intended. As long as it can truly outwit humans, we’re in “superintelligence” territory for the purposes of our discussion.
This is an interesting distinction. I think it could be strengthened and further teased out by considering some human vs. animal examples. I can’t echo-locate better than a bat, or even understand the cognitive machinery a bat uses to echo-locate, but I can nevertheless outwit any number of bats. The human cognitive function we call “intelligence” may be more general than the cognitive function associated with echo-location, but it isn’t totally clear how general it is.
If humans use the same cognitive machinery to outwit others and make strategic decisions that they use for other tasks, then perhaps the capacity to outwit a human is indicative of fully general intelligence. If this is true, it seems likely that the AI will outperform, or at least have the capacity to outperform, humans in all of those tasks. It would presumably still have to devote resources to learning a given task, and may not choose to spend resources on that, but in principle it would have the capacity to be a virtuoso didgeridoo player or what have you.
But it’s not totally clear to me that fully general intelligence is necessary to outwit a human. It’s not even totally clear to me that humans exhibit fully general intelligence. If there are things we can’t learn, how would we know?
It’s also not necessary that the AI be able to outwit humans in order for it to pose an existential risk. An asteroid is not capable of outwitting me, but if it crashes into the planet, I’m still dead. If the AI is potent enough and/or fast enough, it still has the potential to be extremely problematic. That said, a dumb AI that poses an existential risk is a systems design issue akin to putting the nuclear launch switch on the same wall plate as the garbage disposal.