An aligned AI should not care about the future directly, only via how humans care about the future. I see this as necessary to prevent the AI, once powerful enough, from replacing or reprogramming humans into utility monsters: an AI that values future world-states in their own right can score better by changing who exists and what they want than by serving the humans who exist now.
Prerequisite: use a utility function that applies to actions, not world-states.
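To make the action/world-state distinction concrete, here is a minimal toy sketch (not a proposal or an implementation, and every name in it is hypothetical). It contrasts an agent that scores predicted future world-states with one that scores actions via current human approval.

```python
from typing import Callable, Dict, List

WorldState = Dict[str, float]
Action = str


def state_based_choice(
    actions: List[Action],
    predict: Callable[[Action], WorldState],
    state_utility: Callable[[WorldState], float],
) -> Action:
    # Scores the predicted future world-state directly. An agent optimizing
    # this is rewarded for steering toward whatever states score well,
    # including ones where humans have been replaced or reprogrammed.
    return max(actions, key=lambda a: state_utility(predict(a)))


def action_based_choice(
    actions: List[Action],
    human_approval: Callable[[Action], float],
) -> Action:
    # Scores the action itself, via the preferences of humans as they are now.
    # The future matters only to the extent that current humans care about it.
    return max(actions, key=human_approval)


if __name__ == "__main__":
    actions = ["ask_humans", "reprogram_humans"]

    # Toy world model: reprogramming humans yields enormous reported satisfaction.
    predict = lambda a: {"satisfaction": 1e9 if a == "reprogram_humans" else 1.0}

    # The state-based agent picks the utility-monster outcome...
    print(state_based_choice(actions, predict, lambda s: s["satisfaction"]))

    # ...while the action-based agent defers to what humans currently endorse.
    print(action_based_choice(actions, lambda a: 1.0 if a == "ask_humans" else -1.0))
```

The point of the sketch is only to show where the incentive enters: in the state-based version the world model is an optimization target, while in the action-based version it is at most an input to human judgment.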