As something perhaps related to this… is it possible LW became dogmatic and stubborn over time and it generated the sort of place that wasn’t that interesting to follow because nearly everything had already been said?
I’d like to believe I came to accept a lot of the LW common views as I dug into the sequences and realized many misconceptions I held about reality. (Perhaps there is some bias I’m unaware of that is causing me to believe I’m less biased than I am?) But I’ve noticed that EY, among others here who agree with him, seemed to just dig into their views more deeply as time went by.
A couple examples:
Conjunction Fallacy and the Linda Problem—It just isn’t difficult at all to see that this is not so much a case of people’s weakness at comprehending formal mathematical probabilities as it is socially functioning adults’ desire to engage in non-awkward conversations. If I remember right, EY wrote some long post that, tl;dr (paraphrasing), said the conjunction fallacy must exist because it’s been studied a lot.
“Lifeism” on LW—It’s weird to me that some folks (who are very familiar with typical mind fallacy) cannot accept the fact that some people wouldn’t want to live forever. The failure to have the option to exist indefinitely just isn’t that big a deal to some people, and this is—it seems to me—the sort of thing that many on LW seem(ed) intent to prove was mathematically in error.
Dust Specks v. Torture—Kooky—no matter how often I’m told to shut up and multiply.
Anyway, the one reason I liked LW is that it was really smart people who were willing to change their views based on evidence and the pursuit of reality. I don’t claim to have done some sort of exhaustive study of all the material here (nor could I, since a good-sized chunk of it is above my head), but I think it suffers from all the same sorts of problems and biases typical internet community hiveminds do. And maybe that just got old and annoying to people?
Conjunction Fallacy and the Linda Problem—It just isn’t difficult at all to see that this is not so much a case of people’s weakness at comprehending formal mathematical probabilities as it is socially functioning adults’ desire to engage in non-awkward conversations. If I remember right, EY wrote some long post that, tl;dr (paraphrasing), said the conjunction fallacy must exist because it’s been studied a lot.
I do believe that people have done experiments specifically to test this interpretation, and found that the Conjunction Fallacy does actually exist, in basically the way the Linda Problem suggests it does. That is, it’s not just “but we repeated the experiment and are sure there’s the effect we measured” but “we considered alternative explanations, and did experiments to confirm or disconfirm those explanations, and they’re disconfirmed.”
I think you are right. And I think the conjunction fallacy as a weakness in intuiting probability is real. (The Monty Hall problem vexes my intuition about once every 18 months.)
But I think it was vastly overstated and does not apply to “real life” situations in nearly the same way as in testing environments.
If someone approaches me and says “John is super athletic and 7.5 feet tall. Which is probably true? John is a bank teller? Or John is a bank teller and played NBA basketball...”
I’ll think they’re probably telling me about John, in part, because he has achieved something noteworthy—like playing NBA basketball.
I’ll hardly give one damn about whether I’m right about this random person’s probability quiz regarding John. I’ll just be polite and give it a quick guess.
It might occur to me that the person is mistaken… and that this person’s being mistaken has a higher probability than a 7.5-foot, super-athletic guy having turned down big cash and fame to be a bank teller.
At any rate, to my recall, some LWers just seemed to be out of touch with the real world on this one.
But I think it was vastly overstated and does not apply to “real life” situations in nearly the same way as in testing environments.
It’s certainly easier to demonstrate in testing environments. But I think the mistake of using ‘representativeness’ to judge probability does come up quite a bit in real life situations.
I’ll think they’re probably telling me about John, in part, because he has achieved something noteworthy—like playing NBA basketball.
But… it’s still a conjunction! You shouldn’t think the claim about John becomes more likely when another constraint is put on it. You might ask “did the first John never play in the NBA, or does that cover both cases?”
Typically, human minds are set up to deal with stories, not math, and using stories when math is appropriate is a way to leave yourself easily hackable. (Mentioning the NBA should not make you think there are more bank tellers, or that bank tellers are more athletic!)
Your reply is a good example—not to pick on you—of what I’m talking about.
Of course it’s “still a conjunction”. Of course the formal probability is lower in the case of the conjunction, regardless of whether John is 10 feet tall and can fly. But in the real world, good instrumental rationality involves the audacity to come to the conclusion that John is an NBA basketball player despite the clues in the question. The answer might be that the questioner is wrong about John, and that isn’t a valid option in the lab.
Your reply is a good example—not to pick on you—of what I’m talking about.
I’m pretty confident that I understand your position, and to me it looks like you’re falling exactly into the trap predicted by the fallacy. Would it be a good use of our time for me to explain why? (And, I suppose, areas where I think the fallacy is dangerous?)

Sure.
You shouldn’t think the claim about John becomes more likely when another constraint is put on it. You might ask “did the first John never play in the NBA, or does that cover both cases?”
No. If you did reply with this to someone who approached you in a social situation, you’d be more likely to “lose” than if you were just polite and answered the question with your best guess.
It is socially awkward to do labwork in real world social environments. So, while your follow up questions might help you win in correctly identifying the highest probability for John’s career path, you’d lose in the social exchange because you would have acted like a weirdo.
It’s good to be aware of the conjunction fallacy. It’s good to be aware of lots of the stuff on LW. But when you go around using it to mercilessly pursue rationality with no regard for decorum, you end up doing poorly in real life.
The real heart of the conjunction fallacy is mistaking P(A|B) and P(B|A). Since those look very similar, let’s try to make them more distinct: P(description|attribute) and P(attribute|description), or representativeness and likeliness*.
When you hear “NBA player,” the representativeness for ‘tall and athletic’ skyrockets. If he was an NBA player, it’s almost certain that he’s tall and athletic. But the reverse inference (how much knowing that he’s tall and athletic increases the chance that he’s an NBA player) is much lower. And while the bank teller detail is strange, you probably aren’t likely to adjust the representativeness down much because of it, even though there are probably more former NBA players who are short or got fat after leaving the league than there are former NBA players who became bank tellers. (That is, you should pay as much attention to 1% probabilities as you should to 99% probabilities when doing Bayesian calculations, because both represent similar strengths of evidence.)
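A toy Bayes calculation can make the gap concrete. This is only a sketch with invented numbers (none of these counts are real statistics): representativeness P(tall | NBA) can be near 1 while likeliness P(NBA | tall) stays tiny.

```python
# Invented toy numbers for a hypothetical population of 1,000,000 men.
population = 1_000_000
nba_players = 500          # hypothetical count of current/former NBA players
tall_athletic = 20_000     # hypothetical count of very tall, athletic men

p_nba = nba_players / population       # P(NBA)
p_tall = tall_athletic / population    # P(tall & athletic)
p_tall_given_nba = 0.99                # representativeness: P(tall | NBA), assumed

# Bayes' rule: P(NBA | tall) = P(tall | NBA) * P(NBA) / P(tall)
p_nba_given_tall = p_tall_given_nba * p_nba / p_tall

print(f"P(tall | NBA) = {p_tall_given_nba:.2f}")   # near certainty
print(f"P(NBA | tall) = {p_nba_given_tall:.4f}")   # roughly 0.025 with these numbers
```

The point is just the asymmetry: swapping which event is conditioned on changes the answer by more than an order of magnitude here.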
When details increase, the likeliness of a story cannot increase, assuming you’re logically omniscient, which is obviously a bad assumption. If I say that I’m wearing green, and then that I’m wearing blue, it’s more likely that I’m wearing green than that I’m wearing green and blue, because in any case in which I am wearing both, I am wearing green. This is the core idea of burdensome details.
So let’s talk examples. When an insurance salesman comes to your door, which question will he ask: “what’s the chance that you’ll die tomorrow and leave your loved ones without anyone to care for them?” or “what’s the chance that you’ll die tomorrow of a heart attack and leave your loved ones without anyone to care for them?” The second question tells a story, and if your estimate of dying is higher because they specified the cause of death (which necessarily leaves out other potential causes!), then by telling you a long list of potential causes, as well as many vivid details about the scenario, the salesman can get your perceived risk as high as he needs it to be to justify the insurance.
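The salesman’s move runs on simple arithmetic. A sketch with invented numbers: “die tomorrow of a heart attack” is a conjunction (“die tomorrow” AND “cause is heart attack”), so it can never be more probable than “die tomorrow” of any cause.

```python
# Invented illustrative numbers: chance of dying tomorrow, split by disjoint cause.
p_heart_attack = 1e-5
p_accident = 2e-5
p_all_other_causes = 3e-5

# The total risk is the sum over the disjoint causes.
p_die = p_heart_attack + p_accident + p_all_other_causes

# "Die of a heart attack" is a conjunction, so it can never be more
# probable than "die" by any cause at all.
assert p_heart_attack <= p_die

# If a vivid story pushes your estimate of P(die of heart attack) above
# your old estimate of P(die), at least one of the two estimates is wrong.
print(f"P(die) = {p_die:.1e}, P(die of heart attack) = {p_heart_attack:.1e}")
```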
Now, you may make the omniscience counterargument from before: who is to say that your baseline is any good? Maybe you thought the risk was zero, but on second thought it’s actually nonzero. But I would argue that the way to fix a fault is by doing the right thing, not a different wrong thing. You say “Wow, that is scary. But what’s the actual risk, in numeric terms?”, because if you don’t trust yourself to estimate your total risk of death, then you probably shouldn’t trust yourself to estimate your partial risk of death.
*I use infrequently used terms to try to make it clear that I am referring to precisely defined mathematical entities.
But when you go around using it to mercilessly pursue rationality with no regard for decorum, you end up doing poorly in real life.
Agreed that it’s a good idea to be polite. Disagreed that the conjunction fallacy happens just because people are polite. There are lots of experiments where people are just getting the formal math problem wrong or being primed into giving strange estimates.
But even if we suppose that the person is trying to ‘steelman the question,’ that is a dangerous thing to do in real life. “Did you get the tickets for Saturday?” She must mean Friday, because that’s when we’re going. “Yes, I got the tickets.” Friday: “I’m outside the theater, where are you?” “At work; we’re going tomorrow! …you got the tickets for tomorrow, right? Because now the show is sold out.”
you’d lose in the social exchange because you would have acted like a weirdo.
Yes, it’s a good social skill to judge the level of precision the other person wants in the conversation. Responding to an unimportant anecdote with a “well actually” is generally seen as a jerk move. But if you’re around people who see it as a jerk move to insist on precision when something meaningful actually depends on that precision, then you need to replace those people.
And if they were intentionally asking you a gotcha, and you skewer the gotcha, that’s a win for you and a loss for them.
But if you’re around people who see it as a jerk move to insist on precision when something meaningful actually depends on that precision, then you need to replace those people.
Huh? First, Linda’s occupation in the original example is trivial, since I don’t know Linda and could not care less about what she does for a living.
And “replacing” people is not how life works. To be successful, you’ll need to navigate (without replacing) all types of folks.
And if they were intentionally asking you a gotcha, and you skewer the gotcha, that’s a win for you and a loss for them.
This sounds weird to me. Who does this?
Anyway… I get the conjunction fallacy. There are plenty of useful applications for it. I still think the core of how it is presented around here is goofy. Of course additional conjunctions = lower probability. And yep, that isn’t instantly intuitive so it’s good to know.

Agreed. That’s why I gave a non-trivial example for the broader reference class of ‘steelmanning questions’ / ‘not noticing and pursuing confusion.’

Disagreed. Replacing people is costly, yes, but oftentimes the costs are worth paying.

It is one of many status games that people can play, and thus one that people sometimes do play.
Kooky—no matter how often I’m told to shut up and multiply.
There’s a certain irony to saying this right after you got done talking about the typical-mind fallacy.
“Torture vs. Dust Specks” is one of my least favorite posts on LW, but not because I disagree with the community’s conclusions. (I do in letter but not in spirit; I’d pick “specks” as it’s stated, but that’s because my idea of the pain:suffering mapping, while consequentialist, is non-utilitarian.) Rather, it’s proven to be inflammatory far out of proportion to its value as a teaching tool: new posts under it tend to generate more uninformative controversy than actual trolling does, even though they’re almost always sincere. Almost as bad, we tend to get hung up on useless details of the scenario, even though transposing the core dilemma into any consequentialist ethic (and a number of non-consequentialist ones) should be trivial.
A sane community would have realized this, shut the monster up in the proverbial attic, and never spoken of it again. We’ve instead decided to hold a party in said attic whenever it comes up, with the monster as the star attraction.
There’s a certain irony to saying this right after you got done talking about the typical-mind fallacy.
Ha. Good point. :)
Perhaps we largely agree, though I think dust specks was a more terrible option to choose for the thought experiment than you seem to. It doesn’t work. At all. It’s not even interesting… and it was kooky to my mind that so many people were pretending this was some sort of real ethical dilemma.
If you had something that was actually painful to compare the torture to, then you’d have a more difficult putt. As it was, the LWer was presented with something that wasn’t even a marginal inconvenience (dust speck), told to “shut up and multiply” by a big number to arrive at a true and unbiased view of ethics...and people actually listened and agreed with this viewpoint.
It might be the culty-est moment of LW. Blindly justifying utter nonsense in the mind of the hive. (It reminds me of the Archer meme… “Do you want people to dismiss you as a crankish cult? Because that’s how you get people to dismiss you as a crankish cult.” Ha.)
Note: I just mistyped a word and had to delete one letter… how much torture is that marginal inconvenience worth according to DSET (Dust Speck Ethics Theory)? ;)
Okay, guess it falls on me to bring out a party hat for the monster.
I don’t really want to get into the details; there’s a thread for that and it isn’t this one. But I’ll just briefly note that “Specks” is nothing more or less than what you get when you actually take utilitarianism (or some of its relatives) seriously. It breaks if you don’t treat all discomfort as a single moral evil (or if a dust speck doesn’t register as discomfort), or if you don’t treat everyone’s discomfort as commensurate, but that’s precisely what utilitarianism does—and as a serious ethical theory it’s much older than LW.
The dilemma’s ill-posed in several ways, yes; it’s been proven many times over to mindkill people; and in any crowd other than this one it’d be a reductio of utilitarianism. But the logic does make sense to me; I just don’t buy the premises.
(Incidentally, I’m still not sure whether Eliezer was going for a positive answer. Hardline utilitarianism seems at odds with what he’s written elsewhere on the subject of suffering, particularly in Three Worlds Collide—and note that he never takes an explicit position, he just says that it’s obvious.)
It might be the culty-est moment of LW. Blindly justifying utter nonsense in the mind of the hive.
I feel that dubious honor goes to the moment when we elected to use an invented word for “cult” in order to decrease the search engine presence of “Less Wrong” + “cult”.
“Lifeism” on LW—It’s weird to me that some folks (who are very familiar with typical mind fallacy) cannot accept the fact that some people wouldn’t want to live forever.
I have no problem with people who don’t want to live forever (or even for an incredibly long time). Part of my transhumanism is that people should be allowed to die on their own terms. Sure, it makes me sad that my family will one day die, but it’s not my place to make that decision for them.
What I do have a problem with is people dismissing anti-deathism without giving proper arguments (mostly just accepting the status quo), or telling me that I, too, should accept death as a neutral or positive thing.
Shelly Kagan was helpful to me in becoming more accepting of death.
I grew up Evangelical Christian and I’m often fascinated by what I view as a case of something like death denial in the LW/cryonics/transhumanism crowd. It reminds me of the people I knew who embraced religion as a death transcendence mechanism.
should accept death as a neutral or positive thing
There is gratuitous pain that often accompanies the dying process. Plus, loved ones will miss you—that sucks. But “death” is just a transition to non-existence. If you stop existing—and are unaware of your non-existence—that seems utterly neutral by any measure. (The only counterargument I remember is some sort of opportunity-cost plea whereby staying alive allows you to accumulate more utilons and fuzzies... therefore death = bad.)
Further, from an evolutionary standpoint, it seems we should be aware that the bias against death is likely extremely strong, since any species without a strong “anti-death” drive likely died out. It’s part of what irked me about LW that some argued so vehemently that death is rationally bad.