The issue you describe is one issue, but not the only one. We do know how to train an agent to do SOME things we like.
Not consistently, and not in a sufficiently complex and variable environment.
Can we be a little or a lot off-target and still have that be enough, because we captured some overlap between our values and the agent's?
No, because it will hallucinate often enough to kill us during one of those hallucinations.