Isn’t this just “Humans are adaptation-executors, not utility-maximizers”, applied to AI — i.e., an AI whose heuristics successfully hit a target in environment X may stop pursuing that target once the environment changes?