I did not create this theory from a particular narrative; I just looked for a mathematical model that fits certain special cases, which is a fine method in my book. But, if you wish, we can probably think of the −u0 term as an extra penalty for death and the age-dependent term as “being tired of life”.
I’m having trouble understanding what those constants actually mean with respect to ethical decisions about creating and terminating lives, and especially when comparing lives: when is it better to destroy one life in order to create two different ones, and when is it better to reduce h for some time in one life in order to increase h in another (or to bring another life into existence)?
I’m not sure I understand the question. For any case you can do the math and see what the model says. I already gave some examples in which you can see what the constants do.
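To make “do the math” concrete, here is a minimal sketch of the kind of comparison the question asks about. It assumes a lifetime value of the form V(T) = ∫₀ᵀ (h − k·t) dt − u0 with constant quality of life h, a linear age penalty k·t, and a fixed death penalty u0. These symbols and this exact form are my illustrative guesses, not necessarily the author’s precise model.

```python
# Hedged sketch: compare one long life against two shorter lives under an
# ASSUMED model V(T) = h*T - k*T**2/2 - u0 (closed form of the integral
# above, with constant h). The names h, k, u0 and the linear age penalty
# are illustrative assumptions, not the author's exact formulation.

def life_value(T, h=1.0, k=0.01, u0=5.0):
    """Total value of a life of length T under the assumed model."""
    return h * T - k * T**2 / 2 - u0

one_long = life_value(100.0)      # a single life of length 100
two_short = 2 * life_value(50.0)  # two lives of length 50 each

# The two-lives option pays the death penalty u0 twice but accumulates
# less age penalty; which option wins depends on the constants.
print(one_long, two_short)
```

With these particular constants the two shorter lives come out ahead, but raising u0 or lowering k flips the comparison, which is exactly what “see what the constants do” means here.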
...Why isn’t that already included in h(t)?
Like I said in the text, the age-dependent penalty can be included in h(t) if you wish. Then we get a model in which there is no age-dependent penalty but there is still the death penalty (no pun intended). Looking from this angle, we get a repugnant conclusion with many very long-lived people who only barely prefer life to death. But the separation of “effective happiness” into “quality of life” and “age-dependent penalty” paints a new picture of what such people look like. The reason they only barely prefer life to death is not that they are suffering so much; it is that they have lived for a very long time and are simply sated with life.
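The decomposition can be sketched numerically. Assuming effective happiness is h(t) − a(t) with a linear age penalty a(t) = k·t (the linear form is my assumption, used only for illustration), a very old person can have a high, constant quality of life and still have near-zero effective happiness, i.e. barely prefer life to death without suffering at all:

```python
# Hedged sketch: separate "effective happiness" into quality of life h and
# an age penalty a(t) = k*t. The linear form of a(t) is an illustrative
# assumption.

def effective_happiness(h, t, k=0.01):
    """Effective happiness at age t: quality of life minus age penalty."""
    return h - k * t

h = 1.0  # constant, high quality of life throughout

young = effective_happiness(h, 5.0)   # strongly prefers to continue living
old = effective_happiness(h, 99.0)    # barely prefers life to death, yet h
                                      # is unchanged: sated, not suffering
```

Folding the penalty into h(t) (i.e. treating h − k·t itself as the happiness curve) yields the same numbers, which is why the two framings are mathematically equivalent while telling different stories about the people involved.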
At any point in time, “prefer to continue to live from this point” is equal to “happy to come into existence at this point”, right?
No. Many people have the opposite intuition, especially people whose life is actually bad.
I think I understand the desire for a death penalty (or an early-termination penalty). However, for me, it should be less a penalty inherent in the individual value and more an acknowledgement that death depresses h in the subject prior to death and in many others both before and after it (and that a new life will take time to increase h around it).
And the “prefer to continue but wish not to have been created” case really seems like an error in intuition to me. Evolutionarily useful, but evolution has different goals than thinking individuals do. I understand the purpose, though, so thanks for the explanations!