Part of the difficulty here is that opposing death doesn’t necessarily equate to supporting life-extension research, as it does depend somewhat on the knock-on effects and the implementation of the latter.
For example, it strikes me as plausible that a life-extension treatment expensive or scarce enough that it could be applied to only N% of the population would leave the world in a worse condition than it is now, and I’m not at all confident of my estimates of N.
That said, my usual reply to the pro-death argument is some form of “If a rogue scientist accidentally released a nanovirus that kept everyone alive and healthy for a thousand years, would you support a policy of artificial death to maintain the status quo? If not, why not?”
My experience is that very few people treat an inevitable death the same way as a deliberate one, no matter how much they assert that the death is a good thing for reasons other than its inevitability. So they often end up thinking about the second case very differently, and sometimes that evokes a mental set that carries over.

And sometimes not.
I use the baseball-bat-to-the-head analogy: if people were hit on the head with a baseball bat twice daily, after years they would come to accept it, after decades they would come to believe it is good, and after generations they would develop clever and complicated arguments as to why it is good and why it should continue. But would any of those arguments convince you, now, to take up a regimen of baseball bat strikes to the head?
I think I stole that almost word-for-word from somewhere else on this site, though.
The original was Eliezer himself, in How to Seem (and Be) Deep. I’m more fond of TheOtherDave’s analogy, though, since I think the baseball bat analogy suffers from one weakness: you’re drawing a metaphorical parallel in which death (whose badness is the very point in dispute) is replaced by something that’s uncontroversially bad. Sometimes you can’t get any farther than this, since the substitution sets off some people’s BS detectors (and to be honest I think the heuristic they’re using to call foul on it is a decent one).
Even if you can get them to consider the true payload of the argument (that clearly bad but inevitable things will probably be rationalized, and therefore that we should expect death to have some positive-sounding rationalizations even if it were A Very Bad Thing), you still haven’t got a complete argument. That baseless rationalizations can be expected to crop up justifying inevitabilities does not prove that your conversation partner’s justifications are baseless; it only provides an alternate explanation for the evidence.
It isn’t actually hard to flesh this line of thought into a more compelling argument, but I think the accidental long-life thought experiment hits much harder.
Edit: Upon rereading, I had forgotten that Eliezer’s version ends with a line that includes the thrust of TheOtherDave’s argument: “I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no.”
(nods) Agreed; I don’t think I was saying anything Eliezer wasn’t, just building a slightly different intuition pump.
That said, the precise construction of the intuition pump can matter a lot for rhetorical purposes.
Mainstream culture entangles two separate ideas when it comes to death: first, that an agent’s choices are more subject to skepticism than the consistently applied ground rules of existence (A1), and second, that death is better than life (A2).
A1 is a lot easier to support than A2, so in any scenario where life-extension is an agent’s choice the arguments against life-extension will tend to rest heavily on A1.
Setting up a scenario where A1 and A2 point in different directions—where life-extension just happens, and death is a deliberate choice—kicks that particular leg out from under the argument, and forces people to actually defend A2. (Which, to be fair, some people will proceed to do… but others will balk. And there are A3..An’s that I’m ignoring here.)
The “I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no.” argument does something similar: it also makes life the default, and death a choice.
In some ways it’s an even better pump: my version still has an agent responsible for the life-extension, even if an accidental one. OTOH, in some ways it’s worse: telling a story about how the immortal person got that way, as my version does, makes the narrative easier to swallow.
(Incidentally, this suggests that a revision involving a large-scale mutation rather than a rogue scientist might work even better, though the connotations of “mutation” impose their own difficulties.)
It might just be easiest to postulate an immortal person and obfuscate the process entirely.
Also, I am trying to come up with a quick test to distinguish passive deathists from active deathists — i.e., who would refuse an offered immortality potion, and who would vote against funding to develop an immortality potion? Who would say “I don’t want to live forever,” and who would say “People shouldn’t live forever”? Arguments need to be tailored differently for these two types: something like “How about you take the potion, and then if you actually do wake up one day and want to die, you can commit painless suicide?” for the passives, and your “Would you vote for a policy of artificial death?” for the actives.