Consider an agent that wants to maximize the number of paperclips produced next week. Under the usual formalism, it has stable preferences. Under your proposed formalism, it has changing preferences: on Tuesday it no longer cares about Monday's production. So it seems like this formalism loses information about stability, and I don't see the point.
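To make the contrast concrete, here is a minimal sketch, assuming the usual formalism scores whole world-histories with one fixed utility function while the proposed formalism indexes utility by the current time (the symbols $p_t$, $U$, and $U_\tau$ are mine, purely for illustration):

```latex
% p_t : number of paperclips produced on day t of next week.
% Usual formalism: one fixed utility function over whole histories,
% the same on Monday as on Tuesday.
U(h) = \sum_{t \in \{\mathrm{Mon}, \dots, \mathrm{Sun}\}} p_t
% Proposed (time-indexed) formalism: the utility evaluated at time \tau
% only counts production from \tau onward, so it changes as \tau advances;
% by Tuesday, Monday's term has dropped out of the sum.
U_\tau(h) = \sum_{t \ge \tau} p_t
```

On this reading, $U$ is literally constant across days, while the family $U_\tau$ differs from one day to the next even though the agent's behavior is the same, which is the "lost information about stability" I mean.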
I think a counterexample to “you should not devote cognition to achieving things that have already happened” is being angry at someone who has revealed they’ve betrayed you, which might acause them to not have betrayed you.