Agent 1 will have trouble modeling how its decision to change its utility function now will influence its own decisions later, as described in "AIXI and existential despair."
Be warned: that post made practically no sense, and it surely isn't a good reference.