Seems like some measure of evidence—maybe large, maybe tiny—that “We don’t know how to give AI values, just to make them imitate values” is false?
I’m not sure what view you are criticizing here, so maybe you don’t disagree with me, but anyhow: I would say we don’t know how to give AIs exactly the values we want them to have. Instead, we whack them with reinforcement from the outside, and the result is values that are maybe somewhat close to what we wanted, but mostly selected for producing behavior that looks good to us rather than for actually being what we wanted.