How could I care about things that happen after I die only as instrumental values, i.e., only insofar as they affect things that happen before I die?
I don’t know about you personally, but consider a paperclip maximizer. It cares about paperclips; its terminal goal is to maximize the number of paperclips in the Universe. If this agent is mortal, it would absolutely care about what happens after its death: it would want the number of paperclips in the Universe to continue to increase. It would pursue various strategies to ensure this outcome, while simultaneously trying to produce as many paperclips as possible during its lifetime.
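To make the point concrete, here is a toy model of my own (an illustrative sketch, not anyone's actual proposal): a mortal paperclip maximizer whose terminal value is the total number of paperclips ever produced, so paperclips made after its death count just as much as paperclips made during its lifetime.

```python
# Toy model: a mortal agent whose terminal goal is total paperclips ever made,
# including those produced after its own death. All names and numbers here are
# invented for illustration.

from dataclasses import dataclass


@dataclass
class Strategy:
    name: str
    clips_during_lifetime: float  # paperclips the agent makes while alive
    clips_after_death: float      # expected paperclips produced after it dies


def terminal_value(s: Strategy) -> float:
    """Utility over the whole future: post-death paperclips count equally."""
    return s.clips_during_lifetime + s.clips_after_death


strategies = [
    Strategy("make clips by hand until shutdown", 1_000, 0),
    Strategy("spend its lifetime building a clip factory that outlives it", 100, 1_000_000),
]

best = max(strategies, key=terminal_value)
print(best.name)  # it sacrifices lifetime output for post-death paperclips
```

Under this utility function the agent plans for outcomes after its death not as a means to anything in its lifetime, but because those outcomes are themselves what it terminally values.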
But that is caring quite directly about what happens after you die. How is this supposed to be an example of caring about what happens after you die only instrumentally?