Yup, most people don’t think clearly. Most public arguments are bad. It’s not unreasonable to transform the common argument into “other people should die because I (and my in-group) will have a better future without them”.
This is a VERY hard thing to argue against, as there’s some truth in it. Even if you extend it to “and I accept that I will die in the future, as part of this equilibrium”, it’s pretty strong, as we have an actual working example, unlike any counter-proposal. You can point out that it’s not utilitarian (as it values different individuals differently). To me, that’s a truth-derived feature, not a bug: people are not, in fact, equal.
Not all humans have that preference at all times in their life. I’ve known a few who chose to die (including some whose choice I understood and supported), and MANY who didn’t choose to suffer more in order to live longer. Your induction is invalid.
Many people have preferences and revealed preferences that I’m willing to ignore or actively prevent. I like having laws against assault, for instance. So pure preference adherence is not a sufficient definition of altruism to apply here.
Also, I’m only altruistic on some topics, and not perfectly so. I like many people, and I like people as a concept, and all else equal, I prefer that even strangers be happy. But all else is NEVER equal: I honor some other people’s preferences far less than others’, and I care more about people closer to me than about those further away.
I suspect that attempts to generalize on these topics are doomed to fail. Most people don’t have consistent utility functions, let alone values. And there’s LOTS of behavioral data showing that a very large number of people don’t care very much about distant strangers.
Not all humans have that preference at all times in their life. I’ve known a few who chose to die (including some whose choice I understood and supported), and MANY who didn’t choose to suffer more in order to live longer. Your induction is invalid.
My point is that if someone chose death, this doesn’t mean that he doesn’t have the preference “do not die”. It means that he has two preferences, “not die” and “not suffer”, and the suffering was so strong that he chose to override the “not die” preference and chose death to stop the suffering. However, if he had other ways to end the suffering, he would have chosen them instead.
I guess when we’re discussing “this preference is invalid, because we should change the situation”, I get a little lost at which preferences to honor and which to “fix”. At this point in the discussion, we should stop arguing about the goodness or badness of death until we’ve solved the suffering (which almost everyone agrees is bad).
Obviously we want to honor both preferences, but we just don’t know how. However, it seems to me that solving suffering as qualia of pain is technologically simpler: just add some electrodes to the right brain center, which will turn it off when pain is above an acceptable threshold. Death is a more complex problem, but the main difference is that death is irreversible.
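Stated mechanically, that proposal is a threshold-triggered control loop. Here is a minimal, purely illustrative sketch in Python (the thresholds, the pain signal, and any actual neural interface are all my assumptions, not a real API):

```python
# Toy sketch of the closed-loop idea above: suppression turns on only while the
# pain signal is above an acceptable threshold. Everything here is hypothetical
# (the thresholds, the signal, and of course any real stimulator interface).

ACCEPTABLE_THRESHOLD = 7.0  # turn suppression ON above this pain level
RELEASE_THRESHOLD = 5.0     # hysteresis: turn OFF only once comfortably below

def control_step(pain_level: float, suppressing: bool) -> bool:
    """Decide whether the (hypothetical) stimulator is on for this tick."""
    if not suppressing and pain_level > ACCEPTABLE_THRESHOLD:
        return True   # pain crossed the line: start suppressing
    if suppressing and pain_level < RELEASE_THRESHOLD:
        return False  # back in the acceptable range: stop masking the signal
    return suppressing  # inside the hysteresis band: keep current state

# Example run over a stream of readings:
state = False
for reading in [3.0, 6.5, 8.2, 9.0, 6.0, 4.5, 2.0]:
    state = control_step(reading, state)
    print(f"pain={reading:4.1f}  suppress={state}")
```

The hysteresis band is the design choice worth noticing: without it, a reading hovering near the threshold would toggle the stimulator on and off every tick.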
From a personal utilitarian view, any amount of suffering could be compensated by a future eternal paradise.
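A sketch of the arithmetic behind that claim, assuming simple additive lifetime utility and no time discounting: any finite amount of suffering is outweighed by an unbounded future term,

$$-\int_0^T s(t)\,dt \;+\; \int_T^\infty u\,dt \;=\; +\infty \quad \text{for any } u > 0 \text{ and finite } \int_0^T s(t)\,dt.$$

The hidden assumption is the lack of discounting: with a discount rate $\rho > 0$, the paradise term becomes $\int_T^\infty u\,e^{-\rho t}\,dt = \tfrac{u}{\rho}e^{-\rho T}$, which is finite, so large enough suffering is no longer compensated.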
suffering as qualia of pain is technologically simpler
I can’t tell if you’re serious here. If you are, it seems to negate your argument, since we haven’t actually done this or taken significant steps toward it; and if it’s an intentional strawman, it’s pretty weak as a motte-and-bailey.
I didn’t say “pain”, I said “suffering”. This includes the anguish of knowing that one has degraded over time and is now a net drain on family/society. And the degradation itself, regardless of the emotional reaction. Once you’ve solved aging, then there can be a reasonable debate about the value of death. Until then, it’s simply more efficient for the old and infirm to die. Fortunately, we’re rich enough to support a lot of people well past their useful duration, and that feels good, but one wouldn’t want to increase the proportion of old to young by an order of magnitude (with today’s constraints).
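For rough scale (my illustrative numbers, not from the comment): rich-country old-age dependency ratios today are on the order of 30 retirees per 100 working-age adults, so an order-of-magnitude increase gives

$$\frac{30}{100}\times 10 \;=\; 3\ \text{retirees per worker},$$

i.e. every worker supporting three retirees, rather than three retirees being shared among ten workers.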
Solve the underlying constraints, and the argument about death will dissolve, or migrate to more concrete reasons one way or the other.