These time-limited agents have an incentive to coordinate with future versions of themselves: you'll make more money today if past-you set up the conditions for a profitable trade yesterday.
So a sequence of time-limited agents could still develop instrumental power-seeking.
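A toy model (my own illustration, not from the thread; the action names and payoffs are arbitrary assumptions) makes the incentive concrete: each agent lives one step and is scored only on that step's reward, but an "invest" convention lets every agent inherit the payoff of its predecessor's setup.

```python
# Toy model: a sequence of one-step ("time-limited") agents.
# Each agent acts once and collects only its own step's reward.
# "consume" pays 1 immediately; "invest" pays 0 now but leaves a
# setup that pays a bonus of 3 to the NEXT agent.
# All names and payoffs here are illustrative assumptions.

def run(policy, steps=10):
    total = 0
    setup_ready = False  # did the previous agent invest?
    for _ in range(steps):
        bonus = 3 if setup_ready else 0
        if policy() == "consume":
            reward = 1 + bonus
            setup_ready = False
        else:  # "invest"
            reward = 0 + bonus
            setup_ready = True
        total += reward
    return total

myopic_total = run(lambda: "consume")       # every agent grabs 1 now
coordinated_total = run(lambda: "invest")   # every agent follows the convention
print(myopic_total, coordinated_total)      # prints "10 27"
```

Every agent except the first does strictly better under the convention (3 per step instead of 1), so once established it is individually rational to keep it up, and the sequence as a whole accumulates resources even though no single agent has a long horizon.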
On the topic of "Applicability of time constraints for AI safety":
https://www.lesswrong.com/posts/itTLCFj5NCHhFbK2Q/are-limited-horizon-agents-a-good-heuristic-for-the-off?commentId=xZoL4awrBjD4Wtxkv
Thank you! The idea of inter-temporal coordination looks interesting.