I don’t know about you personally, but consider a paperclip maximizer. It cares about paperclips; its terminal goal is to maximize the number of paperclips in the Universe. If this agent is mortal, it would absolutely care about what happens after its death: it would want the number of paperclips in the Universe to continue to increase. It would pursue various strategies to ensure this outcome, while simultaneously trying to produce as many paperclips as possible during its lifetime.
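To put this a bit more formally (my own sketch, with $\mathrm{paperclips}_t(h)$ standing for the paperclip count of world history $h$ at time $t$, $T$ some long horizon, and $T_{\text{death}}$ the agent's death):

$$
U(h) \;=\; \sum_{t=0}^{T} \mathrm{paperclips}_t(h)
\;=\; \underbrace{\sum_{t \,\le\, T_{\text{death}}} \mathrm{paperclips}_t(h)}_{\text{during its lifetime}}
\;+\; \underbrace{\sum_{t \,>\, T_{\text{death}}} \mathrm{paperclips}_t(h)}_{\text{after its death}}
$$

The utility is a function of the whole history, not of anything the agent itself experiences, so the second term counts toward the terminal goal exactly as much as the first, with no instrumental step in between.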
But that is quite directly caring about what happens after you die. How is this supposed to be an example of caring about the post-death world only instrumentally?