entire temporal trajectory of the lightcone – but the argument above does not directly support that (unless we invoke the claim that humans will explicitly train AI systems to care about the entire temporal trajectory of the lightcone, which seems unclear).
We’ll be explicitly training AI systems to care about e.g. the law, commonsense ethics, avoiding harm to humans, obeying human commands, etc., all of which involve at least some non-temporally-bounded concerns.