What makes me trust Pei Wang more than Luke are common-sense statements like “to make AGI safe, to control their experience will probably be the main approach (which is what “education” is all about), but even that cannot guarantee safety.”...
This sort of “common sense” can be highly misleading! For example, here Wang is drawing parallels between a nascent AI and a human child to argue about nature vs. nurture. But if we compare a human with a different social animal, we’ll see that most of the differences in their behavior are innate, and the gap can’t be closed by any amount of “education”: e.g. humans can’t become as altruistic and self-sacrificing as worker ants, because they’ll still retain some self-preservation instinct no matter how you brainwash them.
What makes Wang think that this sort of fixed attitude—which can be made more hard-wired than the instincts of biological organisms—cannot manifest itself in an AGI?
(I’m certain that a serious AI thinker, or just someone with good logic and clear thinking, could find a lot more holes in such “common sense” talk.)
Presumably the argument is something like:
You can’t build an AI that is intelligent from the moment you switch it on: you have to train it.
We know how to train intelligence into humans; it’s called education.
An AI that lacked human-style instincts and learning abilities at switch-on wouldn’t be trainable by us (we just wouldn’t know how), so it would never reach intelligence.