This is an interesting distinction. I think it could be strengthened and teased out further by considering some human vs. animal examples. I can’t echolocate better than a bat, or even understand the cognitive machinery a bat uses to echolocate, but I can nevertheless outwit any number of bats. The human cognitive function we call “intelligence” may be more general than the cognitive function associated with echolocation, but it isn’t totally clear how general it is.
If humans use the same cognitive machinery to outwit others and make strategic decisions that they use for other tasks, then perhaps the capacity to outwit a human is indicative of fully general intelligence. If that’s true, it seems likely that the AI will outperform, or at least have the capacity to outperform, humans at all of those tasks. It would presumably still have to devote resources to learning a given task, and might not choose to spend resources on that, but in principle it would have the capacity to be a virtuoso didgeridoo player or what have you.
But it’s not totally clear to me that fully general intelligence is necessary to outwit a human. It’s not even totally clear to me that humans exhibit fully general intelligence. If there are things we can’t learn, how would we know?
It’s also not necessary that the AI be able to outwit humans in order for it to pose an existential risk. An asteroid is not capable of outwitting me, but if it crashes into the planet, I’m still dead. If the AI is potent enough and/or fast enough, it still has the potential to be extremely problematic. That said, a dumb AI that poses an existential risk is a systems design issue, akin to putting the nuclear launch switch on the same wall plate as the garbage disposal.