(Playing devil’s advocate) Once you’re dead, there’s no way you can feel good about sapient life existing. So if I toss a coin 1 second after your death and push the red button causing a nuclear apocalypse iff it comes up heads, you won’t be able to feel sorrow about it. You can certainly be sad before you die about my tossing the coin (if you know I’ll do that), but once you’re dead, there’s just no way you could be happy or sad about anything.
The fact that I won’t be able to care about it once I am dead doesn’t mean that I don’t value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don’t want future sapient life to be wiped out, and that is a statement about my current preferences, not my ‘after death’ preferences. (Which, as noted, do not exist.)
That’s /exactly/ the method of reasoning which inspired this post.
To me (see below; I managed to confuse myself), this position looks like a failure to imagine death, or a failure to understand that an expected value of the future can still be calculated before death, and that actions can be taken to maximize it; that is what is described by “caring about the future”.
So what you’re saying is that one can’t get warm fuzzies of any kind from anything unexpected happening after one’s death, right? I agree with this. But consider expected fuzzies: until one’s death it is certainly possible to influence the world, changing its expected state, and to get warm fuzzies from that expected value while one is still alive.
If we’re talking utilons, not warm fuzzies, I wonder what it even means to “feel” utilons. My utility function is simply a mapping from states of the world to real numbers, and maximizing it means taking, out of all possible actions, the one that maximizes the expected value of that function. My utility function can be more or less arbitrary; it just says which actions I’ll take when I have a choice.
Saying I care about sapient beings conquering the galaxy after my demise is merely saying that I will, while I can, choose actions that increase the chance of sapient beings conquering the galaxy, nothing else. While I can’t feel happy about this being accomplished after my death, it still makes sense to say that, while I lived, I cared about this future in which I couldn’t participate, by any sensible meaning of the verb “to care”.
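To make this concrete, here’s a toy Python sketch of what that kind of “caring” cashes out to as expected-utility maximization; the action names, outcome probabilities, and utility numbers are invented purely for illustration.

```python
# Toy sketch: "caring about the post-death future" as expected-utility maximization.
# All actions, probabilities, and utilities below are made up for illustration only.

# Each action maps to a distribution over world-states, including states
# that obtain only after the agent is dead.
actions = {
    "work_on_x_risk": {"galaxy_colonized": 0.02, "extinction": 0.98},
    "do_nothing":     {"galaxy_colonized": 0.01, "extinction": 0.99},
}

# The utility function: a mapping from world-states to real numbers.
# Nothing requires the agent to be around to experience these states.
utility = {"galaxy_colonized": 1_000_000.0, "extinction": 0.0}

def expected_utility(action):
    """Expected value of the utility function, given the chosen action."""
    return sum(p * utility[state] for state, p in actions[action].items())

# "Caring" here just means: while the agent can still act, it picks the
# action with the highest expected utility over those future world-states.
best_action = max(actions, key=expected_utility)
print(best_action, expected_utility(best_action))  # work_on_x_risk 20000.0
```

Nothing in this computation requires me to exist when the outcome obtains; the utility function ranks world-states, and my only job while alive is to pick the action with the best expectation.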
(Playing devil’s advocate) But you’re dead by then! Does anything even matter if you can’t experience it anymore?
Now I find myself in a peculiar situation: I fully understand and accept the argument I made in the parent of this post, but somehow the feeling prevails that this line of reasoning is unacceptable. It probably stems from my instincts, which scream at me that death is bad, and from my brain being unable to imagine its own nonexistence from the inside view.