What makes Wang think that this sort of fixed attitude—which can be made more hard-wired than the instincts of biological organisms—cannot manifest itself in an AGI?
Presumably the argument is something like:
You can’t build an AI that is intelligent from the moment you switch it on: you have to train it.
We know how to train intelligence into humans: it's called education.
An AI that lacked human-style instincts and learning abilities at switch-on wouldn't be trainable by us (we simply wouldn't know how), so it would never become intelligent.