My intuition here is that even if you treat death as intrinsically bad, as lives get longer the fixed harm of death eventually gets outweighed by even a small decrease in marginal utility over a long time.
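To make this concrete, here’s a toy model (the logarithmic utility and the size of the fixed death-harm D are stipulations of mine), comparing one person living 2T years against two people living T years each:

```python
import math

D = 5.0           # stipulated fixed intrinsic harm of one death
u = math.log      # concave (logarithmic) utility of total lifespan

def one_long_life(T):
    # one person lives 2*T years and dies once
    return u(2 * T) - D

def two_shorter_lives(T):
    # two people live T years each and die twice in total
    return 2 * u(T) - 2 * D

for T in (10, 100, 1_000, 10_000):
    gap = one_long_life(T) - two_shorter_lives(T)   # simplifies to D + ln(2) - ln(T)
    print(f"T={T:>6}: long-life advantage = {gap:+.2f}")
```

With these arbitrary numbers the sign flips around T ≈ 300: whatever fixed harm D you assign to the extra death, ln(T) eventually overtakes it.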
Sure, people today care more about living 10 more years than they care about the difference between living for 100 and 110 years. But once they’ve lived for 100 years, their preference for living another 10 years might still be just as strong.
Interesting point. But what I mean by “taking preferences into account at all” is that your preferences about the future have some moral weight. If at 20 years old you think that the decade from 100 to 110 is less valuable than the decade from 20 to 30, but your 100-year-old self disagrees, I don’t know how much weight to give your 20-year-old views, but they need to have some weight if we’re really taking your preferences about your own life seriously, and that weight then drags down the value we place on that decade.
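To put rough numbers on this (assuming logarithmic preferences, and borrowing the suggestion above that the 100-year-old may want their next decade as strongly as a 20-year-old wants theirs):

```python
import math

# The decade from 100 to 110, valued from two vantage points
# (log preferences assumed throughout):
young_view = math.log(110 / 100)  # ~0.10: its value as seen from age 20
old_view   = math.log(30 / 20)    # ~0.41: as seen at age 100, assuming the
                                  # 100-year-old wants their next decade as
                                  # strongly as a 20-year-old wants theirs

for w in (0.0, 0.1, 0.5):  # w = moral weight given to the 20-year-old's view
    blended = w * young_view + (1 - w) * old_view
    print(f"w={w}: the decade is worth {blended:.2f}")
```

Any w > 0 pulls the decade’s value below the ~0.41 the 100-year-old alone would assign it, which is the drag I mean.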
Could this still asymptote to above the value of creating a new life? Probably with some settings of the variables, but that seems unrealistic if we’re assuming logarithmic preferences, which seem like the most psychologically realistic option, especially over very long time horizons (who would trade 10^7 guaranteed years for a 10% chance at 10^8 years?).
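The arithmetic behind that parenthetical, on the stipulation that the gamble’s losing branch leaves you with an ordinary ~100-year life:

```python
import math

u = math.log10                    # logarithmic preferences over lifespan

guaranteed = u(1e7)               # 7.0
# Stipulation: the losing branch of the gamble leaves an ordinary ~100-year life.
gamble = 0.1 * u(1e8) + 0.9 * u(1e2)   # 0.1*8 + 0.9*2 = 2.6

print(guaranteed, gamble)         # the guarantee wins by a wide margin
```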
My intuition here is that even if you treat death as intrinsically bad, as lives get longer the fixed harm of death eventually gets outweighed by even a small decrease in marginal utility over a long time.
My intuition is that there is no such thing as a fixed harm of death, as if it were a bad experience like a bout of the flu, added to the scales of utility. The harm of death is precisely the loss of one’s future. The amount that one wants that future is the amount that one wants to not die.
Why am I not allowed to intrinsically disvalue dying, in a way that’s separate from the value I place on my future as a whole?
You can value or disvalue whatever you like. But the only negative thing I see about my death is that I don’t get to live any more. That is what death is. I don’t understand the distinction you are intending between them.
Separate from that is the actual process by which it comes about, which is at best instant, but usually unpleasant, and sometimes dreadful.
I think this is an important point. What measures the subjective value of an event or state of affairs? If we assume it is something like happiness, or time spent alive, we run into counterexamples and paradoxes like the repugnant conclusion.
A more plausible measure of subjective value seems to be based on what we want: Death is bad for someone precisely to the degree that they don’t want to die. Furthermore, death is bad for them because they don’t want to die. Death is not bad because it would make them live shorter, or because it would deprive them of future happiness. (Those may be influencing factors, but only and exactly to the degree that they want to avoid living shorter or being deprived of future happiness.)
If at 20 years old you think that the decade from 100 to 110 is less valuable than the decade from 20 to 30, but your 100-year-old self disagrees, I don’t know how much weight to give your 20-year-old views, but they need to have some weight if we’re really taking your preferences about your own life seriously, and that weight then drags down the value we place on that decade.
Hm, maybe this follows if we’re just allowing each individual to have a fixed amount of preferences, so that as your life gets longer, each new person-moment’s preferences matter less because they’re somehow averaged out. But if your 100-year-old self has changed a lot, and in fact has quite different preferences, then this seems kind of unfair to them. Maybe we should weigh their preferences just as much as we would weigh a 20-year-old’s preferences, and accept that this means that longer-lived people get to have more preferences?
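Here’s the contrast I have in mind, as a toy calculation (the decade-sized person-moments and unit preference strengths are stipulations):

```python
# An 11-decade (110-year) life, with one person-moment per decade
# (granularity and unit preference strengths are stipulations).
n_moments = 11

# Scheme 1: each individual gets a fixed total budget of preference weight,
# so every added person-moment dilutes all the others.
fixed_budget_weight = 1.0 / n_moments   # the 100-year-old's wish counts ~0.09

# Scheme 2: every person-moment counts fully, like a separate (partial) person,
# so longer-lived people simply accumulate more total preference weight.
per_moment_weight = 1.0                 # the 100-year-old's wish counts 1.0

print(fixed_budget_weight, per_moment_weight)
```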
Here’s one possible take on how this could work:
I care selfishly/intuitively about my well-being in the near term.
So when I’m evaluating “do I want to live for another 10 years?”, I ask “would I enjoy living for another 10 years? do I have goals I want to accomplish in the next 10 years?”, etc.
But as I consider myself further and further away in time, I care less and less about myself in a selfish/intuitive way. Instead, I seem more and more like a stranger, who I care about in an impartial fashion.
So when I’m evaluating “do I want 1000-year-old me to live for another 10 years”, I mostly ask “how good is it for the world for 1000-year-old me to live for another 10 years? How much does 1000-year-old me want to live for another 10 years?”, etc.
So when evaluating “should we extend my life to 1010 years, or create a new life”, maybe we ask...
young me, who says “eh, they seem similarly good from my perspective,”
1000-year-old me, who (maybe) says “I’d like to live longer, please”
And thus we just indefinitely extend my life. (A toy numerical version of this weighting is sketched below.)
(Depending on your philosophy of identity, maybe this is actually a world where my young self has died, gradually, via changing into someone else.)
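A toy numerical version of the take above (the linear fade and the 50-year horizon are arbitrary choices of mine):

```python
def selfish_weight(years_ahead, horizon=50):
    # How much I care about a future self in the selfish/intuitive way,
    # fading linearly to purely impartial concern beyond `horizon` years.
    return max(0.0, 1 - years_ahead / horizon)

def value_of_extra_decade(years_ahead, selfish_value, impartial_value):
    w = selfish_weight(years_ahead)
    return w * selfish_value + (1 - w) * impartial_value

# Near-term me: judged almost entirely by my own enjoyment and goals.
print(value_of_extra_decade(10, selfish_value=1.0, impartial_value=0.4))   # 0.88
# 1000-year-old me: judged almost entirely as I'd judge a stranger.
print(value_of_extra_decade(980, selfish_value=1.0, impartial_value=0.4))  # 0.40
```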