[epistemic status: trying on a model, unsure if “belief” is an applicable word here]
I don’t think the counterargument works, but I also don’t think there’s an argument that works either. Death is neither bad nor good, it’s just a part of the world which has formed our ideas of identity and experience. Certainly, on the margin, I’d prefer to delay my own death, and more importantly my decay in body and mind that will likely precede such a death.
I don’t actually have strong intuitions of whether I’d prefer others’ immortality, over the somewhat dynamic system of death and replenishment we’ve always had so far. Details matter so much that I don’t think a far-mode general value can be applied. I don’t even have a systemic description of individual vs collective identity and utility that would inform such an intuition.
I can get behind any movement to defer or remove the effects of aging, but my value drive there is to increase total maximally-valuable experience-hours, not to privilege any existing individual over a potential one.
Even if you don’t agree with the counterargument, you can see that people use this construction all the time: “death is needed because some other bad thing would happen in the world without death”. They just use different names for the “other thing”: eternal boredom, overpopulation, stagnation (as Musk recently said), lack of meaning in life, etc. But it doesn’t address the badness of death per se.
Humans have a strong preference against personal-death-today. Any reasonable preference-extracting procedure will learn this. E.g. the worst punishment is the death penalty, not castration or tongue-cutting.
By induction, it can be shown that if a person doesn’t want to die today (and also doesn’t want to change his preferences about this), he will not want to die tomorrow, or on any future day. So death is personally bad.
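The induction being appealed to can be sketched formally. This is my own hypothetical rendering, not the commenter’s: writing P(n) for “the person prefers not to die on day n”, the argument needs both a base case and a preference-stability premise.

```latex
% P(n): the person prefers not to die on day n.
% Premise 1 (base case):  P(0)
%   -- he does not want to die today.
% Premise 2 (stability):  \forall n,\; P(n) \rightarrow P(n+1)
%   -- he also does not want this preference to change.
% Conclusion, by induction over days:
\[
  P(0) \;\wedge\; \bigl(\forall n,\; P(n) \rightarrow P(n+1)\bigr)
  \;\Longrightarrow\; \forall n,\; P(n)
\]
```

Note that the stability premise does all the work: an objection that some people do, in fact, choose death is precisely a denial that P(n) → P(n+1) holds for everyone.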
If a person is altruistic, he will not want to act against the preferences of other people. So he will not want other people’s deaths, except in some trolley-problem-like situations. Thus fighting death is a universal altruistic goal.
I can get behind any movement to defer or remove the effects of aging, but my value drive there is to increase total maximally-valuable experience-hours, not to privilege any existing individual over a potential one.
It means that you are not a supporter of preference utilitarianism. But real people would likely resist the implementation of pure hedonist utilitarianism, where an individual life doesn’t matter and someone could be killed because he consumes too many resources which could otherwise support several less resource-hungry possible people. This would result in war and a lot of suffering. Thus, from a consequentialist point of view, hedonic utilitarianism will produce fewer hedons than other moral theories.
Yup, most people don’t think clearly. Most public arguments are bad. It’s not unreasonable to transform the common argument into “other people should die because I (and my in-group) will have a better future without them”.
This is a VERY hard thing to argue against, as there’s some truth in it. Even if you extend it to “and I accept that I will die in the future, as part of this equilibrium”, it’s pretty strong, as we have an actual working example, unlike any counter-proposal. You can point out that it’s not utilitarian (as it values different individuals differently). To me, that’s a truth-derived feature, not a bug: people are not, in fact, equal.
Not all humans have that preference at all times in their life—I’ve known a few who chose to die (including some who I understood and supported the choice), and MANY who didn’t choose to suffer more in order to live longer. Your induction is invalid.
Many people have preferences and revealed preferences that I’m willing to ignore or actively prevent. I like having laws against assault, for instance. So pure preference adhesion is not a sufficient definition of altruism to apply here.
Also, I’m only altruistic on some topics, and not perfectly so. I like many people, I like people as a concept, and all else equal, I prefer that even strangers be happy. But all else is NEVER equal: I honor some other people’s preferences far less than others, and I care more about people closer to me than further.
I suspect that attempts to generalize on these topics are doomed to fail. Most people don’t have consistent utility functions, let alone values. And there’s LOTS of behavioral data showing that a very large number of people don’t care very much about distant strangers.
Not all humans have that preference at all times in their life—I’ve known a few who chose to die (including some who I understood and supported the choice), and MANY who didn’t choose to suffer more in order to live longer. Your induction is invalid.
My point is that if someone chose death, this doesn’t mean that he lacks the preference “do not die”. It means that he has two preferences, “not die” and “not suffer”, and the suffering was so strong that he chose to ditch the “not die” preference and chose death to stop the suffering. However, if he had other ways to end the suffering, he would choose them instead.
I guess when we’re discussing “this preference is invalid, because we should change the situation”, I get a little lost about which preferences to honor and which to “fix”. At this point in the discussion, we should stop arguing about the goodness or badness of death until we’ve solved suffering (which almost everyone agrees is bad).
Obviously we want to honor both preferences, but we just don’t know how. However, it seems to me that solving suffering as the qualia of pain is technologically simpler: just add electrodes to the right brain center, which will turn it off when pain is above an acceptable threshold. Death is a more complex problem; the main difference is that death is irreversible.
From a personal utilitarian view, any amount of suffering could be compensated by a future eternal paradise.
suffering as qualia of pain is technologically simpler
I can’t tell if you’re serious here. It seems to negate your argument, because we haven’t actually done so, or taken significant steps toward it. If this is an intentional strawman, it’s pretty weak as a motte-and-bailey.
I didn’t say “pain”, I said “suffering”. This includes the anguish that one has degraded over time and is now a net drain on family/society. And the degradation itself, regardless of the emotional reaction. Once you’ve solved aging, then there can be a reasonable debate about the value of death. Until then, it’s simply more efficient for the old and infirm to die. Fortunately, we’re rich enough to support a lot of people well past their useful duration, and that feels good, but one wouldn’t want to increase the proportion of old to young by an order of magnitude (with today’s constraints).
Solve the underlying constraints, and the argument about death will dissolve, or will migrate to more concrete reasons for one way or the other.