That is an interesting and concerning view. Cryonics makes the usual argument:
You want to live forever
Cryonics has a chance of working
Therefore, you should take out a cryonics policy.
And the average person does not agree with the conclusion. They might not be consciously aware of why they don’t want to live forever, but they damn well know that idea doesn’t appeal to them. The cryonics advocate presses them for a reason, and the average person unknowingly rationalises when they give their reason—they refuse the second premise on some grounds—scam, won’t work, evil future empire, whatever. The cryonics advocate resolves that concern, demonstrates that cryonics does have a chance of working, and the person continues to refuse.
Cryonics advocate checks if they refuse premise 1 - person emphatically responds that they love life not because they actually do, but because it is a huge status hit / social faux pas / Bad Thing (tm) to admit they don’t. Actually, their life sucks, and dragging it out forever will make it worse, but they can’t say this out loud—they probably can’t even think it to themselves.
Wow. It’s kinda scary to think that people refusing cryonics is a case of revealed preferences, and that revealed preference is that they don’t like life. Actually, it might not be scary, it might just be against social norms. But I’d like to think I genuinely like life and want life to be worth living for everyone. Of course, I’d say that if it was a social norm to say that. Damn.
Probably false.
People don’t find flimsy excuses to refuse conventional life-saving treatments, and non-conventional treatments can become conventional (say, antibiotics). This holds, though less so, even if the treatments cost quality of life and money.
I didn’t start out liking life, but I seem to be very atypical in that regard (often suffer from anhedonia, for example). But it’s more likely that I’ve moved away from the norm, not toward it, especially since I’m bad at distinguishing norms for X from norms for “X”… shudder
Scary. Someone please disprove this.
That logic only holds if there’s no cost, or no alternate investment. Currently the cost of cryonics is ~$28,000. If I donated that to GiveWell instead, I’d be saving ~28 lives. The question of whether I want to be immortal or save 28 mortal lives, is not one I’ve seen much addressed, and not one that I’ve yet found a satisfying answer to.
I’ve given it a lot of thought, and this does appear to be my True Rejection of Cryonics; if I can find a satisfying reasoning to value my immortality over those 28 mortal lives, I’d sign up.
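A minimal sketch of the arithmetic behind the comment above, assuming the roughly $1,000-per-life figure its numbers imply (that figure is an assumption read off the comment, not a quoted GiveWell estimate):

```python
# Rough trade-off arithmetic from the comment above.
# The per-life cost is an assumption implied by "~$28,000 vs. ~28 lives",
# not an official charity-evaluator figure.
cryonics_cost = 28_000        # approximate cost of a cryonics policy
cost_per_life_saved = 1_000   # assumed cost to save one statistical life

lives_foregone = cryonics_cost / cost_per_life_saved
print(f"Signing up forgoes roughly {lives_foregone:.0f} statistical lives.")
```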
Getting seriously sick of hearing “VillageReach beats cryonics” from people who don’t also say “VillageReach beats movies, cars, and dentists. spits out rotten teeth”. We do have a few heroes like that here (Rain and juliawise), but if you are not one quit it already.
That would be stupid. If I produce, say, $5,000/year for charity, and a dentist adds even a year of productive life to me, then it’s worth $5,000 to go see that dentist. At worst I break even.
I don’t have a car, but for most people a car probably allows them to get to their job to begin with, so that’s $50K+/year in income, vs a $10K used car every few years. Again, you’d have to be really stupid not to think this is a smart investment. A rational person should optimize by getting a high paying job and donating that income to charity, not by skipping the car and working at whatever happens to be otherwise reachable.
Movies? Well, I’m an emotional being. This is the place where we do get in to personalities, but for me, personally, if I’m unhappy, my productivity drops. Going to a movie refreshes my productivity. I do better work, don’t get fired, and might even make a raise. So for me, personally, it still works out. It’s not like I’m spending $1,000/month on these things.
And, all that aside, just because I’m not a perfect philanthropist doesn’t mean I should automatically default to cryonics. Maybe I should self-modify to sign up for cryonics, or maybe I should self-modify to be more like Rain and juliawise. It’s important to ask questions and try and determine an actual answer to that. It’s easy to push for cryonics when you genuinely ignore the opportunity costs, but for those of us actually stopping to consider them, a response of “shut up, you’re no Rain” is really, amazingly unhelpful.
Given that there are 2000 people in the world signed up for cryonics, I think there’s a lot more people who have open objections to it, too. If our community’s response to “But what about VillageReach?” is really “Oh, like you’re so selfless”, we are going to lose. Rationalists ought to win.
Even if we ignore the practicalities, even if we ignore my personal situation, it’s still a damned useful question if we actually care about the rest of the world. And if you want cryonics to be mainstream like Eliezer seems to hope for, you have to actually care about the mainstream.
So, if all you have is a witty ad hominem attack about how I’m not truly selfless, kindly quit already.
There seems to be some anger here, so to get the emotional level out of the way: I’m not attacking you. I think you’re cool and I like you. I’m not accusing you of not being a perfect philanthropist, or saying that if you’re not one then you deserve blame.
I admit the argument is personality-dependent in an ad-hominem-ish way, but since I got upvoted I think I’m not exclusively being an asshole here. It goes like this: If you’re the kind of person who usually takes altruistic opportunity costs into account, then it makes perfect sense that you’d care about the opportunity cost of cryonics. If you’re not, then it’s more likely that you’re saying “VillageReach beats cryonics”, not because you tried to evaluate it and thought of altruistic opportunity costs, but because you rejected it for other reasons, then looked for plausible rejections and hit on altruistic opportunity costs.
Would a perfect philanthropist see a dentist, drive a car, and watch movies? Yes, probably and maybe. But the algorithms that Rain and MixedNuts use to decide to watch a movie are completely different, even if they both return “yes”. Rain asks “Will this help me make and donate enough money to offset the costs, and are there any better alternatives to make me relaxed and happy and generally productive?”. MixedNuts asks “Is this nifty, and will movie geeks like me better if I watch it?”. I can claim that watching movies makes me more productive, and it’ll probably be true; but still as a matter of fact it’s not what made me decide.
Is it possible that a perfect philanthropist would buy shiny stuff and expensive end-of-life treatments but not sign up for cryonics? Yes. For example, they could have tiny conformity demons in their brain that make them have to do what society likes (either by addiction-like mechanisms or by nuking their productivity if they don’t). Since cryonics is weird, the conformity demons don’t demand it, so the money it would have cost can go to charity. But that’s still a different state of mind from obeying the conformity demons without knowing it.
Conversely, there are possible states where you don’t usually care about altruistic opportunity costs, but start doing so for cryonics for strange reasons. But it’s still an unusual state of mind, and if you don’t say why you’re in it it’s going to prompt doubt about whether it’s your true rejection.
Also, the reason I was a snappy jerk is that I’ve heard the argument a lot before. Standard arguments happen over and over and over (I should know, I read atheist blogs), and you’ve got to be willing to have them many times if you want an idea to spread; but I’d prefer Less Wrong to address the question once and move on, with the standard debate rehappening elsewhere.
I’m not sure what your argument about the mainstream is. Is it “Lots of people have this objection a lot; they wouldn’t if it sucked”, or is it “Yeah, this objection sucks, but boy do you ever need a reply that doesn’t make you sound like a complete asshole”?
Thank you for the calm, insightful response :)
I’d prefer Less Wrong to address the question once and move on, with the standard debate rehappening elsewhere.
If someone had linked me to a “one and done” article, I’d feel a lot more confident that this is a standard argument with a good/interesting answer. Instead I mostly got responses that seemed to work out to “I’m not a terribly nice person so it was simple for me” and “you’re not a terribly nice person so it should be simple for you”.
If there is a “one and done” you want to link me to, I wouldn’t object at all. I’ve read most of LessWrong, but not much else out there. I don’t think I’ve seen this specific objection addressed before.
it’s still an unusual state of mind
My mind seems to be weird in a lot of ways. For cryonics, it seems to come down to: cryonics is a far-off future thing, therefore my Planning mode gets engaged. Planning mode goes “I have more money than I need to survive. Why am I being selfish and not donating this?”
I’m not real inclined to view this as problematic, because on a certain level charity does feel good, and I like making the world a better place. On the other hand, I also grew up with a lot of bad spending habits, so my short-term thinking is very much “ooh, shiny thing, mine now”.
I will say that the idea of a $28,000 operation that gives me six more months in a hospice really bothers me—it’s a horrifically irrational or selfish thing to think I’m worth that much. If push came to shove, I’m not sure I’d have the courage and energy to refuse social norms and pressure, but the idea bothers me.
Eliezer raises a good point, that one can do both, but it implies a certain degree of financial privilege. Thus, there’s still the open question of priorities. While psychologically we have “different budgets” for different things, all of those do fundamentally come out of one big budget.
When people say “I’d only accept that argument from Rain”, it makes me wonder if I should be pursuing cryonics or being more like Rain. It’s only very recently that I’ve had much of any financial flexibility in my life, so I’m trying to figure out what to do with it. I’m trying to figure out whether I want to become the sort of person who is signed up for cryonics, or the sort of person who funnels that extra money in to charity.
If you are currently donating everything you practically can to charity, fair enough, don’t sign up for cryonics.
If you think you should but haven’t yet, then sign up for cryonics first. As a person with one foot in the future, you’re more likely to do what the future will most benefit from. As someone who avoids thoughtful spending because you feel like you should spend it on charity, you’ll end up at XKCD 871.
Cryonics only makes the difference between your seeing the future and your not seeing the future if 1) sufficiently high tech eventually gets developed by human-friendly actors, 2) it happens only after you die, 3) cryonics works, 4) nothing else goes wrong or makes cryonics irrelevant. For the median LessWronger, I would put maybe a 10% probability on the first two combined and maybe at most a 50% probability on the last two combined. So maybe at best I’d say something like cryonics gives you two and a half toes in a future where you used to have two toes.
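A minimal sketch of how the two rough probabilities above combine; the numbers are the comment’s own guesses, not measured values:

```python
# Cryonics only changes whether you see the future if all four conditions hold.
p_tech_arrives_only_after_death = 0.10      # conditions 1-2 combined, per the comment
p_cryonics_works_and_stays_relevant = 0.50  # conditions 3-4 combined, "at most"

p_cryonics_makes_the_difference = (
    p_tech_arrives_only_after_death * p_cryonics_works_and_stays_relevant
)
print(f"Cryonics changes the outcome in about {p_cryonics_makes_the_difference:.0%} of futures.")
```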
I mean “one foot in the future” to refer to your resulting psychological state, not to a fact related to your likely personal future. I think it’s pretty unlikely I’ll be suspended and reanimated—many other fates are more likely, including never being declared dead. But I think signing up is a move towards a different attitude to the future.
But I think signing up is a move towards a different attitude to the future.
Is this just a plausible guess, or do we have other evidence that it’s true, e.g. people spontaneously citing being signed up for cryonics as causing them to feel the future is real enough to help optimally philanthropize into existence?
It’s a guess.
If there were a one-and-done answer, I think this’d be it.
(I just love that I can de-escalate drama on LW. This site rocks.)
I’ll concede that the previous discussions were insufficient. Let’s make this place the “one and done” thread.
Do you accept that singling out cryonics is rather unfair, not as opposed to all spending, but as opposed to other Far expenses? To do this right we have to look at “How heroic should my sacrifices be?” in general; if we conclude cryonics is not worth the cost in circumstances X we should conclude the same thing about, say, end-of-life treatments.
I’ve tried to capture my intuitions about sacrificing a life to save several; here are the criteria that seem relevant:
Most importantly, whether it pattern-matches giving one’s life to a cause, or regular suicide. Idealism is often a good move (reasons complicated and beyond the scope of this), whereas if someone’s fine with suicide they’re probably completely broken and unable to recognize a good cause. I expect people who run into burning orphanages just think about distressed orphans, and treat risk of death like an environmental feature (like risk the door will be blocked; that doesn’t affect the general plan, just makes them route through the window), as opposed to weighing risk to themselves against risk to orphans. I endorse this; the policy consequences are quite different even if they roughly agree on “Kill self to save more” (for example CronoDAS is waiting for his parents to croak instead of offing himself right away).
Whether the lives you trade for are framed as Near or Far.
Whether the life you trade away is framed as Near or Far. (I feel cryonics as Nearer than most would, for irrelevant reasons.)
Whether the lives you trade for are framed as preventing a loss, or reaching for a gain.
Whether the life you trade away is framed as accepting a loss, or refusing a gain.
Whether the life you trade away is yours or someone else’s, and who is getting the choice.
Note knock-on effects: If someone hears of the Resistance, and is inspired to give their life to a cause, I’m happy. (If the cause is Al-Qaeda, they’ve made a mistake, but an unrelated one.) If someone hears of people practicing Really Extreme Altruism and are driven to suicide as a result, I’m sad. Refusing cryonics strikes me as closer to the latter.
That’s why I brush and floss every night, and see the dentist every 6 months. Gum disease is linked with heart disease, and damaged teeth create pain. I like to be comfortable.
Though I perform routine maintenance on my life, I try to reduce the cost as much as possible, and when I spend money, I recognize and acknowledge the tradeoffs. It’s a simple exercise to create a graph of benefit from lowest to highest, and start plotting things. This makes it easier to remember there are more alternatives.
XKCD 871: The problem of scaling the sane use of money is a problem of not crushing people’s wills, not a problem of money being a limited resource. It simply isn’t true that money spent on cryonics comes out of GiveWell’s or SIAI’s pockets, unless you’re Rain, which is why I’ll accept that answer from Rain but not from you.
The question of whether I want to be immortal or save 28 mortal lives, is not one I’ve seen much addressed, and not one that I’ve yet found a satisfying answer to.
I find the answer “be immortal” satisfying, personally. Your mileage may vary.
May I ask what reasoning/evidence led you to that conclusion? I’m sort of viewing it as a trolley problem: I can either kill my immortal self, or I can terminate 28 other lives that much sooner than they would have.
(I’m also realizing my conclusion is probably “I don’t do THAT much charitable to begin with, so let’s just go ahead and sign up, and we can re-route the insurance payoff if we suddenly become more philanthropic in the future”)
Look at it in terms of years gained instead of lives lost.
Saving 28 lives gives them each 50 years at best until they die, assuming none of them gain immortality. That’s 1400 man-years gained. Granting immortality to one person is infinity years (in theory); if you live longer than 1400 years then you’ve done the morally right thing by betting on yourself.
Additionally, money spent on cryonics isn’t thrown into a hole. A significant portion goes toward making cryonics more effective and cheaper for others to buy. Rich Americans have to buy it as much as possible while it’s still expensive, so that those 28 unfortunates can ever have a chance at immortality.
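A minimal sketch of the years-gained comparison above, under its own assumptions (28 lives, 50 remaining years each, and no discount for the chance that cryonics fails):

```python
# Break-even point for the "years gained" framing in the comment above.
lives_saved = 28
remaining_years_each = 50   # "50 years at best until they die"
man_years_from_donation = lives_saved * remaining_years_each  # 1,400 man-years

# By this metric, betting on yourself only comes out ahead if revival
# buys you more than 1,400 additional years of life.
print(f"Donation: ~{man_years_from_donation} man-years; "
      f"cryonics must deliver more than {man_years_from_donation} years to beat it.")
```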
The game theory makes it non-obvious. Consider the benefits of living in a society where people are discouraged from doing this kind of abstract consequentialist reasoning.
May I ask what reasoning/evidence led you to that conclusion?
Evidence is a wrong question, and reasoning not much better. Unless, of course, you mean “evidence and reasoning about my own arbitrary preferences”. In which case my personal testimony is strong evidence and even stronger for me given that I know I am not lying.
I prefer immortality over saving 28 lives immediately. I also like the colour “blue”.
What epistemic algorithms would you run to discover more about your arbitrary preferences and to make sure you were interpreting them correctly? (Assuming you don’t have access to an FAI.) For example, what kinds of reflection/introspection or empiricism would you do, given your current level of wisdom/intelligence and a lot of time?
It’s a good question, and ruling out the FAI takes away my favourite strategy!
One thing I consider is how my verbal expressions of preference will tend to be biased. For example if I went around saying “I’d willingly give up immortality to prevent 28 strangers from starving” then I would triple check my belief to see if it was an actual preference and not a pure PR soundbite. More generally I try to bring the question down to the crude level of “what do I want?”, eliminating distracting thoughts about how things ‘should’ be. I visualize possible futures and simply pick the one I like more.
Another question I like to ask myself (and frequently find myself asked by other people while immersed in SIAI affiliated culture) is “what if an FAI or Omega told you that your actual extrapolated preference was X?”. If I find myself seriously doubting the FAI then that is rather significant evidence. (And also not an unreasonable position. The doubt is correctly directed at the method of extrapolating preferences instilled by the programmers or the Omega postulator.)
“Hey, what’s that dorky necklace you’re wearing?”
Oh, this? Well, you see, it turned out I was born with a fatal disease, and this is my best shot at overcoming it.
“That necklace will arrest the progress of a fatal disease?”
Yes, definitely, if a few plausible assumptions turn out right.
“How much did the necklace cost?”
Oh, about $28,000.
“And what disease is this that you can somehow fight with a $28,000 necklace?”
Mortality.
“But … but … that’s not a disease!!!”
Looks like someone gets tripped up by definitions a little too easily...
Your line “Yes, definitely, if a few plausible assumptions turn out right” is where most people will be put off.
It smacks of dishonesty, presumably to yourself. You’re saying “definitely” and then clarifying that it’s not actually definite. Which indicates that you’re not being honest, you’re trying to give an incorrect impression. At which point, your idea of what is plausible becomes entirely untrustworthy.
Which, for a person desperate to find a way to overcome a fatal disease, is commonplace.
Have you spent $28,000 on nonessentials for yourself over the course of your life? Most people can easily hit that amount by having a nicer car and house/apartment than they “need”. If so then by revealed preference, you value those nonessentials over 28 statistical lives; do you also value them over a shot at immortality?
What are 28 mortal lives for one that is immortal? If I was asked to choose between the life of some being that shall live for thousands of years and the lives of thirty-odd people who shall live perhaps 60 or 70 years, counting the happy productive hours of life seems to favour the long-lived. Of course they technically also have a tiny chance of living that long, but honestly, absent any additional investment (which would have the opportunity cost of other short-lived people), what odds do they have of matching the mentioned being’s longevity?
Now suppose I could be relatively sure that the long-lived entity would work towards making the universe, as much as possible, a place in which I, as I am today, could find some value, but of those thirty-odd individuals I would know little except that they are likely to be, at the very best, at about the human average when it comes to this task.
What is the difference between a certainty of a two-thousand-year lifespan and a 10% chance of a 20,000-year one? Or even a 0.5% chance of a 400,000-year lifespan? Perhaps the being cannot psychologically handle living that much longer, but having assurances that it would do its best to self-modify so it could doesn’t seem unreasonable.
Why should I then privilege the 28 because the potentially long lived being just happens to be me?
“Only I can live forever” is a powerful ethical argument if there is a slim but realistic chance of you actually achieving this.
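A minimal sketch of the expected-lifespan comparison a few lines up; under plain expected-value arithmetic the three gambles quoted are worth the same number of years:

```python
# Expected years for the three gambles named in the comment above.
gambles = {
    "certain 2,000-year lifespan": (1.0, 2_000),
    "10% chance of 20,000 years": (0.10, 20_000),
    "0.5% chance of 400,000 years": (0.005, 400_000),
}

for name, (probability, years) in gambles.items():
    print(f"{name}: {probability * years:,.0f} expected years")
# All three come to 2,000 expected years, which is the comment's point:
# a small chance of a very long life can match certainty of a shorter one.
```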
What are 28 mortal lives for one that is immortal?
Genuine question: would you push a big red button that killed 28 African children via malaria, if it meant you got free cryonic suspension? I’m fine with a brutal “shut up and multiply” answer, I’m just not sure if you really mean it when you say you’d trade 28 mortal lives for a single immortal one.
I’m just not sure if you really mean it when you say you’d trade 28 mortal lives for a single immortal one.
Ha ha ha. I find it amusing that you should ask me of all people about this. I’d push a big red button killing through neglect 28 cute Romanian orphans if it meant a 1% or 0.5% or even 0.3% chance of revival in an age that has defeated ageing. It would free up my funds to either fund more research, or offer to donate the money to cryopreserve a famous individual (offering it to lots of them, one is bound to accept, and him accepting would be a publicity boost) or perhaps just the raw materials for another horcrux.
Also, why employ children in the example? Speaking of adults the idea seemed fine; children should probably be less of a problem, since they aren’t fully persons in exactly the same measure adults are, no? It seems so attractive to argue that killing a child costs the world more potential happy productive man-years, yet have you noted that in many societies the average expected life span is so very low mostly because of the high child mortality? A 20-year-old man in such a society has already passed a “great filter”, so to speak. This is probably true in many states in Africa. And since we are on the subject…
There are more malnourished people in India than in all of sub-Saharan Africa, yet people always invoke an African example when wishing to “fight hunger”. This is true of, say, efforts to eradicate malaria, or making AIDS drugs affordable, or “fighting poverty”, or education initiatives, etc. I wonder why? Are they more photogenic? Does helping Africans somehow signal more altruism than helping, say, Cambodians? I wonder.
Taken at face value, the comments above are those of a sociopath. This is so not because this individual is willing to sacrifice others in exchange for improved odds of his own survival (all of us do that every day, just by living as well as we do in the Developed World), but because he revels in it. It is even more ominous that he sees such choices as being inevitable, presumably enduring, and worst of all, desirable or just. Just as worrisome is the lack of response to this pathology on this forum, so far.
The death and destruction of other human beings is a great evil and a profound injustice. It is also extremely costly to those who survive, because in the deaths of others we lose irreplaceable experience, the opportunity to learn and grow ourselves, and not infrequently, invaluable wisdom. Even the deaths of our enemies diminish us, if for no other reason than that they will not live long enough to see that they were wrong, and we were right.
The mind that wrote the words above is of a cruel and dangerous kind, because it either fails to grasp, or is incapable of grasping, the value that interaction and cooperation with others offer. It is a mind that is willing to kill children or adults it doesn’t know, and is unlikely to know in a short and finite lifetime, because it does not understand that much, if not almost all, of the growth and pleasure we have in life is a product of interacting with people other than ourselves, most of whom, if we are still young, we have not yet met. Such a mind is a small and fearful thing, because it cannot envision that 10, 20, 30, or 500 years hence, it may be the wisdom, the comfort, the ideas, or the very touch of a Romanian orphan or of a starving sub-Saharan African “child” from whom we derive great value, and perhaps even our own survival. One of the easiest and most effective ways to drive a man mad, and to completely break his will, is to isolate him from all contact with others. Not from contact with high intellects, saintly minds, or oracles of wisdom, but from simple human contact. Even the sociopath finds that absolutely intolerable, albeit for very different reasons than the sane man.
Cryonics has a blighted history of not just attracting a disproportionate number of sociopaths (psychopaths), but of tolerating their presence and even of providing them with succor. This has arguably been as costly to cryonics in terms of its internal health, and thus its growth and acceptance, as any external forces which have been put forward as thwarting it. Robert Nelson was the first high profile sociopath of this kind in cryonics, and his legacy was highly visible: Chatsworth and the loss of all of the Cryonics Society of California’s patients. Regrettably, there have been many others since.
It is a beauty of the Internet that it allows us to see what even the most sophisticated psychological testing often cannot reveal: the face of the florid sociopath. Or perhaps, in this case, I should say the name of same, because putting a face to that name is another matter altogether.
Cryonics has a blighted history of not just attracting a disproportionate number of sociopaths (psychopaths), but of tolerating their presence and even of providing them with succor
Details?
I’ve seen a couple of cases of people disliking cryonics because they see its proponents as lacking sufficient gusto for life, but no cases of disliking or opposing cryonics because there are too many sociopaths associated with it.
For what it’s worth, LessWrong has done a pretty good job of firming up exactly that perspective for me.
In fairness, I don’t mind psychopathic behavior, and I’m still signing up. I’ve definitely developed a much lower opinion of cryonics advocacy since being here, though.
The post by “Voldemort” was an obvious joke/fakepost, though, and Eliezer’s comment was on the mark even if he did use a webcomic to illustrate his point...
What makes you so certain that the Voldemort post was a joke, and not simply a sociopath posting on an alternate account to avoid the social consequences of holding such a stance? Certainly, there seem to be quite a few other people here who would pick immortality over saving 28 other lives, if you put the two choices “side by side”.
Lots of people let akrasia, compartmentalization, etc. keep themselves from realizing that it’s actually a choice. When they’re put side by side and the answer is a casual “of course I’d choose my own life”, I tend to consider that stronger evidence of sociopathic behavior.
That said, yes, I consider most people to exhibit some degree of sociopathic behavior. LessWrong just demonstrates more :)
Lots of people choose luxury over saving 28 lives.
Actually, this is true even for rather low values of “luxury”. I, like tens of millions of other people in the developed world, am a homeowner. Yes, the cost of my (rather modest) home would have saved ~100 lives if I had instead donated it to a maximally effective charity. That isn’t what I did. That isn’t what the other tens of millions of homeowners did. If you want to count that as sociopathic behavior, fine. But that casts a rather wide net for what would count in that category. Is “sociopathic behavior” even a useful category if it is extended so widely? Is there much behavior left that falls outside it?
It still says something about the author of that character that (a) they went through the effort of writing that reply and (b) there is not a single reply in the empathic/non-sociopathic direction demonstrating an equal amount of effort. I don’t really see the relevance of it being a role-playing character at all—it’s hardly incompatible that it’s both an RP character and a sociopath who has chosen a sane cover for posting their socially unacceptable views (after all, Voldemort has all of 28 karma; he clearly gets downvoted a decent amount).
The simple Bayesian evidence is that someone cared enough to write a sociopathic reply that was fairly in depth, and the only non-sociopathic replies were a link to a webcomic and personal preferences of “well, yeah, I’d pick immortality over 28 lives...”
Also, lumping Clippy in with clearly fictional characters is just rude ;)
a sociopath posting on an alternate account to avoid the social consequences of holding such a stance?
There are easier ways to avoid the social consequences of holding said stance; one of them is to denounce that stance. Another is to fail to comment on the matter. Logging in to an alternate account in order to say something they don’t want to be seen saying has a small prior to begin with.
p(Author is a sociopath | Author chose to RP as Voldemort specifically) > p(Author is a sociopath | Author went with a different pseudonym) is my basic assertion here. People who roleplay sociopaths are more likely to be sociopaths—roleplaying Voldemort is a safe outlet for that tendency.
That the author is writing Voldemort also seems like evidence for the hypothesis that the author agrees with Voldemort (I’d assume possibly not to that extreme, but who knows). Much the same as everyone assumes that the author behind shokwave agrees with shokwave’s writing...
Sure, roleplaying as Voldemort may be evidence for sociopathy, but if I had to estimate how much evidence, I’d call it epsilon. Roleplaying, and humour, is fun. And fun is tempting, especially on the internet.
Roleplaying, and humour, is fun. And fun is tempting, especially on the internet.
I’ve been running campaigns for, wow, 16 years now, and I played intermittently even before then. Roleplaying is not something that is unfamiliar to me. One of the things I’ve noticed is that, for the most part, people play characters that think like they do. It is difficult for most people to play a well-developed character that doesn’t largely agree with their own personal philosophy (playing a simple caricature is much easier, but Voldemort does not strike me as such)
If it’s only an epsilon of evidence then my life is an absolutely ridiculous statistical anomaly o.o
I play roleplaying games a lot and most of my characters aren’t much like me. I’ve played evil characters, stupid characters, characters who considered violence the first and best answer, religiously devout characters, and a rainbow-obsessed boy-crazy twice-married wizardess who liked to attack her enemies with colors and wear outrageously loud outfits. I’m not evil, stupid, violent, religious, or rainbowy.
I’ve written fiction with characters of an even greater variety.
I was claiming that people like you exist, but are rare. Just like sociopaths exist, but are rare. So given the two possibilities, and knowing only that both groups are fairly rare, it would be silly to assume that someone is probably a good roleplayer instead of a sociopath.
Those mostly seem too unlike you, from what I can tell, to be clear examples of someone playing a non-caricature.
The exceptions are the devout characters. Looking back on my experience as a deontologist, I don’t think it would be too hard to role play many other deontologists, provided the rules were clear enough. So I think those characters are too like you to prove the point either, unless they were devout non-compartmentalized thinkers, i.e. “devout moderates” who aren’t in a moderate religion because of lack of faith or willpower or indeed directly because of any other character flaw.
I will simply take your word that you role-play characters who neither think like you do nor are caricatures. You have not lowered the amount I would have to believe you to the level of merely having to believe that you role-played the listed characters, because I still have to believe that the characters are good examples, which is not self-evident.
Far more people play chaotic evil than can be explained by them being fine with killing people for personal gain.
Remember that the point of all this is to substantiate the claim that roleplaying Voldemort is evidence for sociopathy, or lack of empathy. Playing a character that thinks differently isn’t quite the same as playing one with different specific moral values, and I don’t think the latter is particularly hard. Villains are often portrayed as more rational and driven than the heroes of stories (who usually get most of their wins for free), so it can be easy to identify with them if you’re a kind of person who respects those characteristics. That’s the “way of thinking” that’s attractive. The specific object-level morality is pretty much hot-swappable.
(Plus, we wouldn’t want to fall victim to the fundamental attribution error on the basis of a single blog comment, I don’t think...)
I suppose I may have been unclear. There’s often a lot of surface differences—my roommate has played a raver, a doctor, and now an AVON sales lady who fights zombies. But at the same time, there’s deeper similarities in conversational style, use of language, decision-making methods, and personal preferences that mean they all play fairly similarly (in her case, she loses her temper quickly—for some characters this makes them very verbally hostile, while others move quickly to combat)
It does also depend on your audience. Playing a “convincing” sociopath is pretty easy if no one in your group knows a real sociopath. And, of course, there ARE some people who have the knack for truly capturing other mindsets. However, half the books on my shelf are from authors that can’t even convincingly write characters of the opposite sex.
Maybe Voldemort has sociopathic tendencies. Maybe they’re just a good roleplayer. However, I don’t think a sociopath is really that much rarer than a good, convincing role player.
(playing a simple caricature is much easier, but Voldemort does not strike me as such)
Why thank you, I do try.
One of the things I’ve noticed is that, for the most part, people play characters that think like they do.
Except for stealing everything that isn’t nailed down, you mean?
To step out of character, my regular account has 2000+ karma on LW and I don’t think I’ve been accused of sociopathy before. I guess I’m just that good at hiding it.
LW has a few role-playing characters identifiable by usernames, while others don’t appear to be playing such games and don’t use speaking usernames. So “Voldemort” is likely a fictional persona tailored to the name, rather than a handle chosen to describe a real person’s character.
Correct, though I prefer to think of it as using another man’s head to run a viable enough version of me so that I may participate in the rationalist discourse here.
LOL! You don’t have to be a genius to be evil and, speaking from long, hard and repeated experience, you don’t have to be a genius to do a great deal of harm—just being evil is plenty sufficient. This is especially true when the person who has ill intentions also has disproportionately greater knowledge than you do, or than you can easily get access to in the required time frame. The classic example has been the used car salesman. But better examples are probably the kinds of situations we all encounter from time to time when we get taken advantage of.
I don’t know much about computers, so I necessarily rely on others. In an ideal world, I could take all the time necessary to make sure that the guy who is selling me hardware or software that I urgently need is giving me good advice and giving me the product that he says he is. But we don’t live in an ideal world. Many people have this kind of problem with medical treatment choices, and for the same reasons. Another, related kind of situation, is where the elapsed time between the time you contract for a service and the time you get it is very long. Insurance and pension funds are examples. Lots of mischief there, and thus lots of regulation. It doesn’t take evil geniuses in such situations to cause a lot of loss and harm.
And finally, while this may seem incredible, in my experience those few people who are both geniuses and evil, usually tell you exactly what they are about. They may not say, “I intend to torture and kill you,” but they very often will tell you with relish how they’ve tortured others, or about how they are willing to torture and kill others. The problem for me for way too long was not taking such people seriously. Turns out, they usually are serious; deadly serious.
Right, I’m just saying, that’s how I know it’s not the real Voldemort posting.
in my experience those few people who are both geniuses and evil, usually tell you exactly what they are about. They may not say, “I intend to torture and kill you,” but they very often will tell you with relish how they’ve tortured others,
We may have different standards for “genius”; I don’t think I’ve ever heard of someone who I would classify as both malicious (negated utility function, actually wants to hurt people rather than just being selfish) and brilliant. I also doubt that any such person exists nowadays, because, you see, we’re not all dead.
A person who greatly enjoys abducting, torturing, and killing a few people every couple months is plausible, whereas a person who wants to maximize death and pain is much less so. A genius of the former kind does not kill us all.
Voldemort is the taken name of the main antagonist of the popular fantasy book series Harry Potter.
Eliezer Yudkowsky, one of the founders and main writers for lesswrong.com, also writes a Harry Potter fanfiction, called Harry Potter and the Methods of Rationality. (HPATMOR)
Because of this, several accounts on this forum are references to Harry Potter characters.
[edit] Vol de mort is also French for Flight of Death.
I feel obligated to point out that one of the links at the end of the OP was a link to Darwin’s review of the last Harry Potter movie; he knows who Voldemort the character is.
I hate to repeat myself but let me ease your mind.
Ha ha ha. I find it amusing that you should ask me of all people about this.
“Only I can live forever” is a powerful ethical argument if there is a slim but realistic chance of you actually achieving this.
...or perhaps just the raw materials for another horcrux.
Despite the risk of cluttering, I even made a post whose only function was to clear up ambiguity:
Ah, even muggles can be sensible occasionally.
I thought it was more than probable the vast majority of readers here would be familiar with me. Perhaps I expect too much of them. I do that sometimes, expect too much of people; it is arguably one of my great flaws.
When you say: “I thought it was more than probable the vast majority of readers here would be familiar with me,” you imply a static readership for this listserv, or at least a monotonic one. I don’t think either of those things would be good for this, or most other listservs with an agenda to change minds. New people will frequently be coming into the community and their very diversity may be one of their greatest values.
Voldemort is a fictional character from one of the most popular novel and movie series in the last 20 years (of which one of the top posters of this site is writing a fanfiction). I don’t think it’s too much to expect almost all english speakers with an internet connection who might have an interest in this site to have at least heard of him, regardless of whether we have a “static readership”.
Robert Nelson was the first high profile sociopath of this kind in cryonics, and his legacy was highly visible: Chatsworth and the loss of all of the Cryonics Society of California’s patients.
Nelson has also managed to get director Errol Morris to make a movie based on his version of cryonics history, which suggests that he may have the last word on his reputation, depending on how the film portrays him.
The ugly truth is that sometimes sociopaths are useful, though you are probably correct in stating that visible and prominent sociopaths that support cryonics hurt it.
As Mike Darwin pointed out above, we can’t reliably tell if a joke comment is a joke.
But you know, this isn’t my strongest objection. It’s the noise-to-signal ratio. What I’m really concerned about is the opportunity cost of recognizing a joke as a joke, and having to work harder to find the serious branches of discussion.
How much effort did it actually take you to recognize that the comment “GRYFFINDOR!” by a user named “SortingHat” is a joke? It is silly to be worried about the opportunity cost here.
There are more malnourished people in India than in all of sub-Saharan Africa
At least in the IT and call centre industries in the United States, “India” is synonymous with “cheap outsourcing bastards who are stealing our jobs.” Quite a few customers are actively hostile towards India because they “don’t speak English”, “don’t understand anything”, and are “cheap outsourcing bastards who are stealing proper American jobs”.
I absolutely hate this idiocy, but it’s a pretty compelling case not to try and use India as an emotional hook...
I’d also assume that people are primed to the idea of “Africa = poor helpless children”, so Africa is a much easier emotional hook.
It seems Lucid fox has a point. LW isn’t that heavily dominated by US-based users; also, doesn’t it seem wise for LW users to try and avoid such uses when thinking of difficult problems of ethics or instrumental rationality?
No, but if my example is going to evoke the opposite response in 10-20% of my audience, it’s probably a bad choice :)
avoid such uses when thinking of difficult problems of ethics or instrumental rationality?
Conceded. I was interested in gauging emotional response, though, not an intellectual “shut up and multiply”. The question is less one of math and more one of priorities, for me.
Unfortunately, I came installed with a fairly broken evaluator of chances, which tends to consistently evaluate the probability of X happening to person P differently if P = me than if it isn’t, all else being equal… and it’s frequently true that my evaluations with respect to other people are more accurate than those with respect to me.
So I consider judgments that depend on my evaluations of the likelihood (or likely consequences) of something happening to me vs. other people suspect, because applying them depends on data that I know are suspect (even by comparison to my other judgments).
But, sure, that consideration ought not apply to someone sufficiently rational that they judge themselves no less accurately than they judge others.
Unfortunately, I came installed with a fairly broken evaluator of chances, which tends to consistently evaluate the probability of X happening to person P differently if P = me than if it isn’t, all else being equal… and it’s frequently true that my evaluations with respect to other people are more accurate than those with respect to me.
Then work towards the immortality of another. Dedicate your life to it.
That points out that people who think cryonics might work, but forgo it because of the uncertainty of being biased towards themselves, seldom consider committing to not get it for themselves yet provide it for another and then considering the issue; at the same time it is a discreet call to join the Death Eaters.
If I donated that to GiveWell instead, I’d be saving ~28 lives.
If you donated that to VillageReach, you’d be saving about 28 lives. If you donated that to GiveWell, you’d help them to find other charities that are similarly effective.
Apologies if I was unclear: For “GiveWell”, please read “The charity most recommended by GiveWell right now, because VillageReach will probably eventually reach saturation and become non-ideal”.
Growing up religious, I assumed I’d have a second, different (not necessarily better) chance at life, one that wouldn’t have an expiration date. As I grew up I saw the possibility grow more distant and less probable in my mind.
I still feel entitled to at least get a try at a second one. Also, for the past few years I have generally felt that many of the things I value will be lost and destroyed and that they are probably objectively out of my reach to try and save. So perhaps a touch of megalomania also plays a role, or maybe I just want to be the guy to scream:
That’s an interesting point. I am signed up for cryonics, but I’m actually rather ambivalent about my life. One major wrinkle is that, if cryonics does succeed, it would almost certainly have to be in a scenario where aging was solved by necessary precursor technologies. For me, a large chunk of my ambivalence is simply the anticipated decline in health as I age. By the same token, existential risks that might prevent me from, for instance, living from age 75 to age 85 tend not to worry me much.
It could also be a revealed preference that they don’t like life enough to give their fate completely into the hands of unknown future people, or simply that they don’t think the probability of successful cryonics + a good future is high enough to justify the costs.
Actually, when you put the argument for cryonics like this, it kind of sounds like a version of Pascal’s Mugging. Perhaps we could call this: Pascal’s Benefactor.
As MixedNuts pointed out, it’s Pascal’s Wager—yet you have a point. Putting the argument like this might cause the Pascal’s Wager Fallacy Fallacy (which is still one of my favourite posts on this site).
Hm! Someone I know wants to write a post called “Pascal’s Wager Fallacy Fallacy Fallacy”, because (the claim is) that post doesn’t correctly analyze the relevant social psychology involved when someone is afraid of being seen to commit to a very-possibly-indefensible-in-retrospect position where they predict they’ll be seen as to-the-other-person-unjustifiably having chosen a predictably immoral or stupid course of action, or something like that.
See this comment. (Disclaimer 1: it’s mine. Disclaimer 2: my objection isn’t really about the social psychology involved—but I think that gives it more right to use the word “fallacy”.)
Then it would make sense to call it “Not-taking-social-costs-into-consideration Fallacy” but not “Pascal’s Wager Fallacy Fallacy Fallacy”. That post wasn’t really about the feasibility of cryonics, it only made claims about the logical validity of comparing the reasoning behind cryonics to Pascal’s Wager and that’s not something that can be affected by social psychology.
That is an interesting and concerning view. Cryonics makes the usual argument:
You want to live forever
Cryonics has a chance of working
Therefore, you should take out a cryonics policy,
And the average person does not agree with the conclusion. They might not be consciously aware of why they don’t want to live forever, but they damn well know that idea doesn’t appeal to them. The cryonics advocate presses them for a reason, and the average person unknowingly rationalises when they give their reason—they refuse the second premise on some grounds—scam, won’t work, evil future empire, whatever. The cryonics advocate resolves that concern, demonstrates that cryonics does have a chance of working, and the person continues to refuse.
Cryonics advocate checks if they refuse premise 1 - person emphatically responds that they love life not because they actually do, but because it is a huge status hit / social faux pas / Bad Thing (tm) to admit they don’t. Actually, their life sucks, and dragging it out forever will make it worse, but they can’t say this out loud—they probably can’t even think it to themselves.
Wow. It’s kinda scary to think that people refusing cryonics is a case of revealed preferences, and that revealed preference is that they don’t like life. Actually, it might not be scary, it might just be against social norms. But I’d like to think I genuinely like life and want life to be worth living for everyone. Of course, I’d say that if it was a social norm to say that. Damn.
Probably false.
People don’t find flimsy excuses to refuse conventional life-saving treatments, and non-conventional treatments can become conventional (say, antibiotics). This holds, though less so, even if the treatments cost quality of life and money.
I didn’t start out liking life, but I seem to be very atypical in that regard (often suffer from anhedonia, for example). But it’s more likely that I’ve moved away from the norm, not toward it, especially since I’m bad at distinguishing norms for X from norms for “X”… shudder
Scary. Someone please disprove this.
That logic only holds if there’s no cost, or no alternate investment. Currently the cost of cryonics is ~$28,000. If I donated that to GiveWell instead, I’d be saving ~28 lives. The question of whether I want to be immortal or save 28 mortal lives, is not one I’ve seen much addressed, and not one that I’ve yet found a satisfying answer to.
I’ve given it a lot of thought, and this does appear to be my True Rejection of Cryonics; if I can find a satisfying reasoning to value my immortality over those 28 mortal lives, I’d sign up.
Getting seriously sick of hearing “VillageReach beats cryonics” from people who don’t also say “VillageReach beats movies, cars, and dentists. spits out rotten teeth”. We do have a few heroes like that here (Rain and juliawise), but if you are not one quit it already.
That would be stupid. If I produce, say, $5,000/year for charity, and a dentist adds even a year of productive life to me, then it’s worth $5,000 to go see that dentist. At worst I break even.
I don’t have a car, but for most people a car probably allows them to get to their job to begin with, so that’s $50K+/year in income, vs a $10K used car every few years. Again, you’d have to be really stupid not to think this is a smart investment. A rational person should optimize by getting a high paying job and donating that income to charity, not by skipping the car and working at whatever happens to be otherwise reachable.
Movies? Well, I’m an emotional being. This is the place where we do get in to personalities, but for me, personally, if I’m unhappy, my productivity drops. Going to a movie refreshes my productivity. I do better work, don’t get fired, and might even make a raise. So for me, personally, it still works out. It’s not like I’m spending $1,000/month on these things.
And, all that aside, just because I’m not a perfect philanthropist doesn’t mean I should automatically default to cryonics. Maybe I should self-modify to sign up for cryonics, or maybe I should self-modify to be more like Rain and juliawise. It’s important to ask questions and try and determine an actual answer to that. It’s easy to push for cryonics when you genuinely ignore the opportunity costs, but for those of us actually stopping to consider them, a response of “shut up, you’re no Rain” is really, amazingly unhelpful.
Given that there are 2000 people in the world signed up for cryonics, I think there’s a lot more people who have open objections to it, too. If our community’s response to “But what about VillageReach?” is really “Oh, like you’re so selfless”, we are going to lose. Rationalists ought to win.
Even if we ignore the practicalities, even if we ignore my personal situation, it’s still a damned useful question if we actually care about the rest of the world. And if you want cryonics to be mainstream like Eliezer seems to hope for, you have to actually care about the mainstream.
So, if all you have is a witty ad hominen attack about how I’m not truly selfless, kindly quit already.
Anger seems to be existing so to get the emotional level out of the way: I’m not attacking you. I think you’re cool and I like you. I’m not accusing you of not being a perfect philanthropist, or saying that if you’re not one then you deserve blame.
I admit the argument is personality-dependent in an ad-hominem-ish way, but since I got upvoted I think I’m not exclusively being an asshole here. It goes like this: If you’re the kind of person who usually takes altruistic opportunity costs into account, then it makes perfect sense that you’d care about that of cryonics. If you’re not, then it’s more likely than you’re saying “VillageReach beats cryonics”, not because you tried to evaluate it and thought of altruistic opportunity costs, but because you rejected it for other reasons, then looked for plausible rejections and hit on altruistic opportunity costs.
Would a perfect philanthropist see a dentist, drive a car, and watch movies? Yes, probably and maybe. But the algorithms that Rain and MixedNuts use to decide to watch a movie are completely different, even if they both return “yes”. Rain asks “Will this help me make and donate enough money to offset the costs, and are there any better alternatives to make me relaxed and happy and generally productive?”. MixedNuts asks “Is this nifty, and will movie geeks like me better if I watch it?”. I can claim that watching movies makes me more productive, and it’ll probably be true; but still as a matter of fact it’s not what made me decide.
Is it possible that a perfect philanthropist would buy shiny stuff and expensive end-of-life treatments but not sign up for cryonics? Yes. For example, they could have tiny conformity demons in their brain that make them have to do what society likes (either by addiction-like mechanisms or by nuking their productivity if they don’t). Since cryonics is weird, the conformity demons don’t demand it, so the money it would have cost can go to charity. But that’s still a different state of mind from obeying the conformity demons without knowing it.
Conversely, there are possible states where you don’t usually care about altruistic opportunity costs, but start doing so for cryonics for strange reasons. But it’s still an unusual state of mind, and if you don’t say why you’re in it it’s going to prompt doubt about whether it’s your true rejection.
Also, the reason I was a snappy jerk is that I’ve heard the argument a lot before. Standard arguments happen over and over and over (I should know, I read atheist blogs), and you’ve got to be willing to have them many times if you want an idea to spread; but I’d prefer Less Wrong to address the question once and move on, with the standard debate rehappening elsewhere.
I’m not sure what your argument about the mainstream is. Is it “Lots of people have this objection a lot; they wouldn’t if it sucked”, or is it “Yeah, this objection sucks, but boy do you ever need a reply that doesn’t make you sound like a complete asshole”?
Thank you for the calm, insightful response :)
If someone had linked me to a “one and done” article, I’d feel a lot more confident that this is a standard argument with a good/interesting answer. Instead I mostly got responses that seemed to work out to “I’m not a terribly nice person so it was simple for me” and “you’re not a terribly nice person so it should be simple for you”.
If there is a “one and done” you want to link me to, I wouldn’t object at all. I’ve read most of LessWrong, but not much else out there. I don’t think I’ve seen this specific objection addressed before.
My mind seems to be weird in a lot of ways. For cryonics, it seems to come down to: cryonics is a far-off future thing, therefore my Planning mode gets engaged. Planning mode goes “I have more money than I need to survive. Why am I being selfish and not donating this?”
I’m not real inclined to view this as problematic, because on a certain level charity does feel good, and I like making the world a better place. On the other hand, I also grew up with a lot of bad spending habits, so my short-term thinking is very much “ooh, shiny thing, mine now”.
I will say that the idea of a $28,000 operation that gives me six more months in a hospice really bothers me—it’s a horrifically irrational or selfish thing to think I’m worth that much. If push came to shove, I’m not sure I’d have the courage and energy to refuse social norms and pressure, but the idea bothers me.
Eliezer raises a good point, that one can do both, but it implies a certain degree of financial privilege. Thus, there’s still the open question of priorities. While psychologically we have “different budgets” for different things, all of those do fundamentally come out of one big budget.
When people say “I’d only accept that argument from Rain”, it makes me wonder if I should be pursuing cryonics or being more like Rain. It’s only very recently that I’ve had much of any financial flexibility in my life, so I’m trying to figure out what to do with it. I’m trying to figure out whether I want to become the sort of person who is signed up for cryonics, or the sort of person who funnels that extra money in to charity.
If you are currently donating everything you practically can to charity, fair enough, don’t sign up for cryonics.
If you think you should but haven’t yet, then sign up for cryonics first. As a person with one foot in the future, you’re more likely to do what the future will most benefit from. If you avoid thoughtful spending because you feel you should be spending that money on charity instead, you’ll end up at XKCD 871.
Cryonics only makes the difference between your seeing the future and your not seeing the future if 1) sufficiently high tech eventually gets developed by human-friendly actors, 2) it happens only after you die, 3) cryonics works, 4) nothing else goes wrong or makes cryonics irrelevant. For the median LessWronger, I would put maybe a 10% probability on the first two combined and maybe at most a 50% probability on the last two combined. So maybe at best I’d say something like cryonics gives you two and a half toes in a future where you used to have two toes.
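Multiplying the two rough figures above gives the marginal probability being claimed. A minimal sketch, using only the commenter’s own guesses (these are not established numbers):

```python
# Rough probability that signing up for cryonics is what makes the difference,
# multiplying the commenter's stated guesses for the two combined conditions.
p_tech_after_death = 0.10      # conditions 1 and 2 combined (guess)
p_cryonics_pays_off = 0.50     # conditions 3 and 4 combined (guess, "at most")
p_marginal = p_tech_after_death * p_cryonics_pays_off
print(p_marginal)              # 0.05, i.e. roughly a 1-in-20 chance
```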
I mean “one foot in the future” to refer to your resulting psychological state, not to a fact related to your likely personal future. I think it’s pretty unlikely I’ll be suspended and reanimated—many other fates are more likely, including never being declared dead. But I think signing up is a move towards a different attitude to the future.
Is this just a plausible guess, or do we have other evidence that it’s true, e.g. people spontaneously citing being signed up for cryonics as causing them to feel the future is real enough to help optimally philanthropize into existence?
It’s a guess.
If there were a one-and-done answer, I think this’d be it.
(I just love that I can de-escalate drama on LW. This site rocks.)
I’ll concede that the previous discussions were insufficient. Let’s make this place the “one and done” thread.
Do you accept that singling out cryonics is rather unfair, not as opposed to all spending, but as opposed to other Far expenses? To do this right we have to look at “How heroic should my sacrifices be?” in general; if we conclude cryonics is not worth the cost in circumstances X we should conclude the same thing about, say, end-of-life treatments.
I’ve tried to capture my intuitions about sacrificing a life to save several; here are the criteria that seem relevant:
Most importantly, whether it pattern-matches giving one’s life to a cause, or regular suicide. Idealism is often a good move (reasons complicated and beyond the scope of this), whereas if someone’s fine with suicide they’re probably completely broken and unable to recognize a good cause. I expect people who run into burning orphanages just think about distressed orphans, and treat risk of death like an environmental feature (like the risk that the door will be blocked; that doesn’t affect the general plan, just makes them route through the window), as opposed to weighing risk to themselves against risk to orphans. I endorse this; the policy consequences are quite different even if they roughly agree on “Kill self to save more” (for example CronoDAS is waiting for his parents to croak instead of offing himself right away).
Whether the lives you trade for are framed as Near or Far.
Whether the life you trade away is framed as Near or Far. (I feel cryonics as Nearer than most would, for irrelevant reasons.)
Whether the lives you trade for are framed as preventing a loss, or reaching for a gain.
Whether the life you trade away is framed as accepting a loss, or refusing a gain.
Whether the life you trade away is mine or someone else’s, and who is getting the choice.
Note knock-on effects: If someone hears of the Resistance, and is inspired to give their life to a cause, I’m happy. (If the cause is Al-Qaeda, they’ve made a mistake, but an unrelated one.) If someone hears of people practicing Really Extreme Altruism and are driven to suicide as a result, I’m sad. Refusing cryonics strikes me as closer to the latter.
That’s why I brush and floss every night, and see the dentist every 6 months. Gum disease is linked with heart disease, and damaged teeth create pain. I like to be comfortable.
Though I perform routine maintenance on my life, I try to reduce the cost as much as possible, and when I spend money, I recognize and acknowledge the tradeoffs. It’s a simple exercise to create a graph of benefit from lowest to highest, and start plotting things. This makes it easier to remember there are more alternatives.
I just really really dislike the idea of dying. Signing up for cryonics refreshes my productivity.
Heh, I never thought of it that way. Neat :)
Actually, I started flossing just recently. Those little floss picks are inexpensive and work great.
XKCD 871: The problem of scaling the sane use of money is a problem of not crushing people’s wills, not a problem of money being a limited resource. It simply isn’t true that money spent on cryonics comes out of Givewell’s or SIAI’s pockets, unless you’re Rain, which is why I’ll accept that answer from Rain but not from you.
I find the answer “be immortal” satisfying, personally. Your mileage may vary.
May I ask what reasoning/evidence led you to that conclusion? I’m sort of viewing it as a trolley problem: I can either kill my immortal self, or I can terminate 28 other lives that much sooner than they would otherwise end.
(I’m also realizing my conclusion is probably “I don’t do THAT much charitable to begin with, so let’s just go ahead and sign up, and we can re-route the insurance payoff if we suddenly become more philanthropic in the future”)
Look at it in terms of years gained instead of lives lost.
Saving 28 lives gives them each 50 years at best until they die, assuming none of them gain immortality. That’s 1400 man-years gained. Granting immortality to one person is infinity years (in theory); if you live longer than 1400 years then you’ve done the morally right thing by betting on yourself.
Additionally, money spent on cryonics isn’t thrown into a hole. A significant portion is spent on making cryonics more effective and cheaper for others to buy. Rich Americans have to buy it as much as possible while it’s still expensive, so that those 28 unfortunates can ever have a chance at immortality.
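A rough sketch of the man-years arithmetic above. Note that the 1,400-year break-even only holds if revival is certain; the second step, dividing by a hypothetical success probability (my addition, not the parent comment’s), shows how much the bar rises under uncertainty:

```python
# Man-years from saving 28 lives versus betting on one long-lived self.
lives_saved = 28
years_per_life = 50
charity_man_years = lives_saved * years_per_life        # 1,400 man-years

# If revival is uncertain, the break-even lifespan scales by 1/p.
p_revival = 0.05                                         # hypothetical estimate
breakeven_lifespan = charity_man_years / p_revival       # 28,000 expected years needed
print(charity_man_years, breakeven_lifespan)
```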
The game theory makes it non-obvious. Consider the benefits of living in a society where people are discouraged from doing this kind of abstract consequentialist reasoning.
Evidence is a wrong question, and reasoning not much better. Unless, of course, you mean “evidence and reasoning about my own arbitrary preferences”. In which case my personal testimony is strong evidence and even stronger for me given that I know I am not lying.
I prefer immortality over saving 28 lives immediately. I also like the colour “blue”.
What epistemic algorithms would you run to discover more about your arbitrary preferences and to make sure you were interpreting them correctly? (Assuming you don’t have access to an FAI.) For example, what kinds of reflection/introspection or empiricism would you do, given your current level of wisdom/intelligence and a lot of time?
It’s a good question, and ruling out the FAI takes away my favourite strategy!
One thing I consider is how my verbal expressions of preference will tend to be biased. For example if I went around saying “I’d willingly give up immortality to prevent 28 strangers from starving” then I would triple check my belief to see if it was an actual preference and not a pure PR soundbite. More generally I try to bring the question down to the crude level of “what do I want?”, eliminating distracting thoughts about how things ‘should’ be. I visualize possible futures and simply pick the one I like more.
Another question I like to ask myself (and frequently find myself asked by other people while immersed in SIAI affiliated culture) is “what if an FAI or Omega told you that your actual extrapolated preference was X?”. If I find myself seriously doubting the FAI then that is rather significant evidence. (And also not an unreasonable position. The doubt is correctly directed at the method of extrapolating preferences instilled by the programmers or the Omega postulator.)
Rephrasing it as my favorite argument...
“Hey, what’s that dorky necklace you’re wearing?”
Oh, this? Well, you see, it turned out I was born with a fatal disease, and this is my best shot at overcoming it.
”That necklace will arrest the progress of a fatal disease?”
Yes, definitely, if a few plausible assumptions turn out right.
”How much did the necklace cost?”
Oh, about $28,000.
”And what disease is this that you can somehow fight with a $28,000 necklace?”
Mortality.
“But … but … that’s not a disease!!!”
Looks like someone gets tripped up by definitions a little too easily...
Your line “Yes, definitely, if a few plausible assumptions turn out right. ” is where most people will be put off.
It smacks of dishonesty, presumably to yourself. You’re saying “definitely” and then clarifying that it’s not actually definite. Which indicates that you’re not being honest; you’re trying to give an incorrect impression. At which point, your idea of what is plausible becomes entirely untrustworthy.
Which for a person desperate to find a way to overcome a fatal disease is commonplace.
I agree with what you say, but the rest of the discussion could go essentially unchanged if the line
“Yes, definitely, if a few plausible assumptions turn out right.”
were replaced with
“Perhaps; my best estimate of the odds is 1% or so”
(which would be my response in an analogous discussion).
I think that what seems to me to be the main point of the dialog is fairly insensitive to a wide range of possible odds for cryonics working.
Have you spent $28,000 on nonessentials for yourself over the course of your life? Most people can easily hit that amount by having a nicer car and house/apartment than they “need”. If so then by revealed preference, you value those nonessentials over 28 statistical lives; do you also value them over a shot at immortality?
You have not considered this thoroughly.
What are 28 mortal lives for one that is immortal? If I were asked to choose between the life of some being that shall live for thousands of years or the lives of thirty-something people who shall live perhaps 60 or 70 years, counting the happy productive hours of life seems to favour the long-lived. Of course they technically also have a tiny chance of living that long, but honestly, what odds do they have, absent any additional investment (which would carry the opportunity cost of other short-lived people), of matching the mentioned being’s longevity?
Now suppose I could be relatively sure that the long-lived entity would work towards making the universe, as much as possible, a place in which I, as I am today, could find some value, but of those thirty-something individuals I would know little except that they are likely to be, at the very best, at about the human average when it comes to this task.
What is the difference between a certainty of a two-thousand-year lifespan, or the 10% chance of a 20,000-year one? Or even a 0.5% chance of a 400,000-year lifespan? Perhaps the being cannot psychologically handle living that much longer, but having assurances that it would do its best to self-modify so it could doesn’t seem unreasonable.
Why should I then privilege the 28 because the potentially long lived being just happens to be me?
“Only I can live forever” is a powerful ethical argument if there is a slim but realistic chance of you actually achieving this.
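The expected-lifespan comparison two paragraphs up works out to the same figure in each case; a quick sketch of that arithmetic (the scenarios are the ones stated above):

```python
# Each option yields the same expected number of years under a plain
# expected-value calculation.
options = {
    "certain 2,000-year lifespan":    1.0   * 2_000,
    "10% chance of 20,000 years":     0.10  * 20_000,
    "0.5% chance of 400,000 years":   0.005 * 400_000,
}
for name, expected_years in options.items():
    print(f"{name}: {expected_years:,.0f} expected years")   # 2,000 each
```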
Genuine question: would you push a big red button that killed 28 African children via malaria, if it meant you got free cryonic suspension? I’m fine with a brutal “shut up and multiply” answer, I’m just not sure if you really mean it when you say you’d trade 28 mortal lives for a single immortal one.
Ha ha ha. I find it amusing that you should ask me of all people about this. I’d push a big red button killing through neglect 28 cute Romanian orphans if it meant a 1% or 0.5% or even 0.3% chance of revival in an age that has defeated ageing. It would free up my funds to either fund more research, or offer to donate the money to cryopreserve a famous individual (offering it to lots of them, one is bound to accept, and him accepting would be a publicity boost) or perhaps just the raw materials for another horcrux.
Also, why employ children in the example? Speaking of adults the idea seemed fine; children should probably be less of a problem, since they aren’t fully persons in exactly the same measure adults are, no? It seems so attractive to argue that killing a child costs the world more potential happy productive man-years, yet have you noted that in many societies the average expected lifespan is so very low mostly because of the high child mortality? A 20-year-old man in such a society has already passed a “great filter”, so to speak. This is probably true in many states in Africa. And since we are on the subject…
There are more malnourished people in India than in all of sub-Saharan Africa, yet people always invoke an African example when wishing to “fight hunger”. This is true of, say, efforts to eradicate malaria or making AIDS drugs affordable or “fighting poverty” or education initiatives, etc. I wonder why? Are they more photogenic? Does helping Africans somehow signal more altruism than helping, say, Cambodians? I wonder.
Taken at face value, the comments above are those of a sociopath. This is so not because this individual is willing to sacrifice others in exchange for improved odds of his own survival (all of us do that every day, just by living as well as we do in the Developed World), but because he revels in it. It is even more ominous that he sees such choices as being inevitable, presumably enduring, and worst of all, desirable or just. Just as worrisome is the lack of response to this pathology on this forum, so far.
The death and destruction of other human beings is a great evil and a profound injustice. It is also extremely costly to those who survive, because in the deaths of others we lose irreplaceable experience, the opportunity to learn and grow ourselves, and not infrequently, invaluable wisdom. Even the deaths of our enemies diminish us, if for no other reason than that they will not live long enough to see that they were wrong, and we were right.
The mind that wrote the words above is of a cruel and dangerous kind, because it either fails to grasp, or is incapable of grasping, the value that interaction and cooperation with others offers. It is a mind that is willing to kill children or adults it doesn’t know, and is unlikely to know in a short and finite lifetime, because it does not understand that much, if not almost all, of the growth and pleasure we have in life is a product of interacting with people other than ourselves, most of whom, if we are still young, we have not yet met. Such a mind is a small and fearful thing, because it cannot envision that 10, 20, 30, or 500 years hence, it may be the wisdom, the comfort, the ideas, or the very touch of a Romanian orphan or of a starving sub-Saharan African “child” from whom we derive great value, and perhaps even our own survival. One of the easiest and most effective ways to drive a man mad, and to completely break his will, is to isolate him from all contact with others. Not from contact with high intellects, saintly minds, or oracles of wisdom, but from simple human contact. Even the sociopath finds that absolutely intolerable, albeit for very different reasons than the sane man.
Cryonics has a blighted history of not just attracting a disproportionate number of sociopaths (psychopaths), but of tolerating their presence and even of providing them with succor. This has arguably been as costly to cryonics in terms of its internal health, and thus its growth and acceptance, as any external forces which have been put forward as thwarting it. Robert Nelson was the first high-profile sociopath of this kind in cryonics, and his legacy was highly visible: Chatsworth and the loss of all of the Cryonics Society of California’s patients. Regrettably, there have been many others since.
It is a beauty of the Internet that it allows us to see what even the most sophisticated psychological testing can often not reveal: the face of the florid sociopath. Or perhaps, in this case, I should say the name of same, because putting a face to that name is another matter altogether.
I imagine that’s the point of writing under a Voldemort persona.
A Dark Lord, no less!
Details?
I’ve seen a couple of cases of people disliking cryonics because they see its proponents as lacking sufficient gusto for life, but no cases of disliking or opposing cryonics because there are too many sociopaths associated with it.
For what it’s worth, LessWrong has done a pretty good job of firming up exactly that perspective for me.
In fairness, I don’t mind psychopathic behavior, and I’m still signing up. I’ve definitely developed a much lower opinion of cryonics advocacy since being here, though.
I’m curious as to what brought you to these conclusions. Can you explain further?
Well, that line captures a lot of it.
Eliezer’s response was to link me to an XKCD comic.
So, thus far, the quality of discourse here has been sociopathic fictional characters and webcomics...
The post by “Voldemort” was an obvious joke/fakepost, though, and Eliezer’s comment was on the mark even if he did use a webcomic to illustrate his point...
What makes you so certain that the Voldemort post was a joke, and not simply a sociopath posting on an alternate account to avoid the social consequences of holding such a stance? Certainly, there seem to be quite a few other people here who would pick immortality over saving 28 other lives, if you put the two choices “side by side”.
Lots of people choose luxury over saving 28 lives. Doing so may be wrong, but if it’s that common, it can’t be strongly indicative of sociopathy.
Lots of people let akrasia, compartmentalization, etc. keep themselves from realizing that it’s actually a choice. When they’re put side by side and the answer is a casual “of course I’d choose my own life”, I tend to consider that stronger evidence of sociopathic behavior.
That said, yes, I consider most people to exhibit some degree of sociopathic behavior. LessWrong just demonstrates more :)
I’m inclined to agree with steven0461,
Actually, this is true even for rather low values of “luxury”. I, like tens of millions of other people in the developed world, am a homeowner. Yes, the cost of my (rather modest) home would have saved ~100 lives if I had instead donated it to a maximally effective charity. That isn’t what I did. That isn’t what the other tens of millions of homeowners did. If you want to count that as sociopathic behavior, fine. But that casts a rather wide net for what would count in that category. Is “sociopathic behavior” even a useful category if it is extended so widely? Is there much behavior left that falls outside it?
The Voldemort account is overtly a role-playing character, which are not that uncommon here (see also: Quirinus_Quirrell, GLaDOS, and Clippy).
It still says something about the author of that character, that they (a) went through the effort of writing that reply and (b) there is not a single reply in the empathic/non-sociopathic direction demonstrating an equal amount of effort. I don’t really see the relevance of it being a role-playing character at all—it’s hardly incompatible that it’s both an RP character and a sociopath who has chosen a sane cover for posting their socially unacceptable views (after all, Voldemort has all of 28 karma; he clearly gets downvoted a decent amount).
The simple Bayesian evidence is that someone cared enough to write a sociopathic reply that was fairly in depth, and the only non-sociopathic replies were a link to a webcomic and personal preferences of “well, yeah, I’d pick immortality over 28 lives...”
Also, lumping Clippy in with clearly fictional characters is just rude ;)
There are easier ways to avoid the social consequences of holding said stance; one of them is to denounce that stance. Another is to fail to comment on the matter. Logging in to an alternate account in order to say something they don’t want to be seen saying has a small prior to begin with.
p(Author is a sociopath | Author chose to RP as Voldemort specifically) > p(Author is a sociopath | Author went with a different pseudonym) is my basic assertion here. People who roleplay sociopaths are more likely to be sociopaths—roleplaying Voldemort is a safe outlet for that tendency.
That the author is writing Voldemort also seems like evidence for the hypothesis that the author agrees with Voldemort (I’d assume possibly not to that extreme, but who knows). Much the same as everyone assumes that the author behind shokwave agrees with shokwave’s writing...
Sure, roleplaying as Voldemort may be evidence for sociopathy, but if I had to estimate how much evidence, I’d call it epsilon. Roleplaying, and humour, is fun. And fun is tempting, especially on the internet.
I’ve been running campaigns for, wow, 16 years now, and I played intermittently even before then. Roleplaying is not something that is unfamiliar to me. One of the things I’ve noticed is that, for the most part, people play characters that think like they do. It is difficult for most people to play a well-developed character that doesn’t largely agree with their own personal philosophy (playing a simple caricature is much easier, but Voldemort does not strike me as such)
If it’s only an epsilon of evidence then my life is an absolutely ridiculous statistical anomaly o.o
I play roleplaying games a lot and most of my characters aren’t much like me. I’ve played evil characters, stupid characters, characters who considered violence the first and best answer, religiously devout characters, and a rainbow-obsessed boy-crazy twice-married wizardess who liked to attack her enemies with colors and wear outrageously loud outfits. I’m not evil, stupid, violent, religious, or rainbowy.
I’ve written fiction with characters of an even greater variety.
http://lesswrong.com/lw/6vq/on_the_unpopularity_of_cryonics_life_sucks_but_at/4pas
I was claiming deeper differences than that.
I was claiming that people like you exist, but are rare. Just like sociopaths exist, but are rare. So given the two possibilities, and knowing only that both groups are fairly rare, it would be silly to assume that someone is probably a good roleplayer instead of a sociopath.
Ah, I see. It wasn’t plain to me from the bare link which part of the comment you were pointing at.
Those mostly seem too unlike you, from what I can tell, to be clear examples of someone playing a non-caricature.
The exceptions are the devout characters. Looking back on my experience as a deontologist, I don’t think it would be too hard to role play many other deontologists, provided the rules were clear enough. So I think those characters are too like you to prove the point either, unless they were devout non-compartmentalized thinkers, i.e. “devout moderates” who aren’t in a moderate religion because of lack of faith or willpower or indeed directly because of any other character flaw.
I will simply take your word that you role-play characters who neither think like you do nor are caricatures. You have not lowered the amount I would have to believe you to the level of merely having to believe that you role-played the listed characters, because I still have to believe that the characters are good examples, which is not self-evident.
Far more people play chaotic evil than can be explained by them being fine with killing people for personal gain.
Remember that the point of all this is to substantiate the claim that roleplaying Voldemort is evidence for sociopathy, or lack of empathy. Playing a character that thinks differently isn’t quite the same as playing one with different specific moral values, and I don’t think the latter is particularly hard. Villains are often portrayed as more rational and driven than the heroes of stories (who usually get most of their wins for free), so it can be easy to identify with them if you’re a kind of person who respects those characteristics. That’s the “way of thinking” that’s attractive. The specific object-level morality is pretty much hot-swappable.
(Plus, we wouldn’t want to fall victim to the fundamental attribution error on the basis of a single blog comment, I don’t think...)
That’s interesting..
If my current wizard ever dies, I think I’m going to try playing a psychopathic psion. I think I’d be able to give it a decent go.
I suppose I may have been unclear. There’s often a lot of surface differences—my roommate has played a raver, a doctor, and now an AVON sales lady who fights zombies. But at the same time, there’s deeper similarities in conversational style, use of language, decision-making methods, and personal preferences that mean they all play fairly similarly (in her case, she loses her temper quickly—for some characters this makes them very verbally hostile, while others move quickly to combat)
It does also depend on your audience. Playing a “convincing” sociopath is pretty easy if no one in your group knows a real sociopath. And, of course, there ARE some people who have the knack for truly capturing other mindsets. However, half the books on my shelf are from authors that can’t even convincingly write characters of the opposite sex.
Maybe Voldemort has sociopathic tendencies. Maybe they’re just a good roleplayer. However, I don’t think a sociopath is really that much rarer than a good, convincing roleplayer.
Why thank you, I do try.
Except for stealing everything that isn’t nailed down you mean?
To step out of character, my regular account has 2000+ karma on LW and I don’t think I’ve been accused of sociopathy before. I guess I’m just that good at hiding it.
Can you expand on that claim? I find it very shocking.
http://lesswrong.com/lw/6vq/on_the_unpopularity_of_cryonics_life_sucks_but_at/4ozz I’ll go ahead and keep this to one thread for my own sanity :)
To be absolutely clear, the commenter you are responding to is a troll and a fictional character.
I’m curious as to how you know “Voldemort” is a troll?
LW has a few role-playing characters identifiable by their usernames, while other users don’t appear to be playing such games and don’t use such telling usernames. So “Voldemort” is likely a fictional persona tailored to the name, rather than a handle chosen to describe a real person’s character.
Who are the other role-playing characters on LessWrong?
GLaDOS started as one, though the account seems to be being used for regular interaction now.
Quirinus Quirrell.
Correct, though I prefer to think of it as using another man’s head to run a viable enough version of me so that I may participate in the rationalist discourse here.
True evil geniuses don’t reveal their intentions openly. (They also don’t post this blog comment.)
That’s what you’d like us to think.
LOL! You don’t have to be a genius to be evil and, speaking from long, hard and repeated experience, you don’t have to be a genius to do a great deal of harm—just being evil is plenty sufficient. This is especially true when the person who has ill intentions also has disproportionately greater knowledge than you do, or than you can easily get access to in the required time frame. The classic example has been the used car salesman. But better examples are probably the kinds of situations we all encounter from time to time when we get taken advantage of.
I don’t know much about computers, so I necessarily rely on others. In an ideal world, I could take all the time necessary to make sure that the guy who is selling me hardware or software that I urgently need is giving me good advice and giving me the product that he says he is. But we don’t live in an ideal world. Many people have this kind of problem with medical treatment choices, and for the same reasons. Another, related kind of situation, is where the elapsed time between the time you contract for a service and the time you get it is very long. Insurance and pension funds are examples. Lots of mischief there, and thus lots of regulation. It doesn’t take evil geniuses in such situations to cause a lot of loss and harm.
And finally, while this may seem incredible, in my experience those few people who are both geniuses and evil usually tell you exactly what they are about. They may not say, “I intend to torture and kill you,” but they very often will tell you with relish how they’ve tortured others, or about how they are willing to torture and kill others. The problem for me for way too long was not taking such people seriously. Turns out, they usually are serious; deadly serious.
Right, I’m just saying, that’s how I know it’s not the real Voldemort posting.
We may have different standards for “genius”; I don’t think I’ve ever heard of someone who I would classify as both malicious (negated utility function, actually wants to hurt people rather than just being selfish) and brilliant. I also doubt that any such person exists nowadays, because, you see, we’re not all dead.
That’s how you know it’s not Voldemort posting?
A person who greatly enjoys abducting, torturing, and killing a few people every couple months is plausible, whereas a person who wants to maximize death and pain is much less so. A genius of the former kind does not kill us all.
The people who cause the most damage do it because they have disproportionate power rather than disproportionate knowledge.
Voldemort is the taken name of the main antagonist of the popular fantasy book series Harry Potter.
Eliezer Yudkowsky, one of the founders and main writers for lesswrong.com, also writes a Harry Potter fanfiction, called Harry Potter and the Methods of Rationality. (HPATMOR)
Because of this, several accounts on this forum are references to Harry Potter characters.
[edit] Vol de mort is also French for Flight of Death.
I feel obligated to point out that one of the links at the end of the OP was a link to Darwin’s review of the last Harry Potter movie; he knows who Voldemort the character is.
I have seen all the movies, most more than once. I have not yet read the books.
I hate to repeat myself but let me ease your mind.
Despite the risk of cluttering I even made a post whose only function was to clear up ambiguity:
I thought it was more than probable the vast majority of readers here would be familiar with me. Perhaps I expect too much of them. I do that sometimes, expecting too much of people; it is arguably one of my great flaws.
When you say: “I thought it was more than probable the vast majority of readers here would be familiar with me,” you imply a static readership for this listserv, or at least a monotonic one. I don’t think either of those things would be good for this, or most other listservs with an agenda to change minds. New people will frequently be coming into the community, and their very diversity may be one of their greatest values.
Voldemort is a fictional character from one of the most popular novel and movie series of the last 20 years (of which one of the top posters of this site is writing a fanfiction). I don’t think it’s too much to expect almost all English speakers with an internet connection who might have an interest in this site to have at least heard of him, regardless of whether we have a “static readership”.
Nelson has also managed to get director Errol Morris to make a movie based on his version of cryonics history, which suggests that he may have the last word on his reputation, depending on how the film portrays him.
The ugly truth is that sometimes sociopaths are useful, though you are probably correct in stating that visible and prominent sociopaths that support cryonics hurt it.
GRYFFINDOR!
No.
This cannot be allowed to continue. These novelty accounts are cute, but you and the others are standing in the way of actual discourse.
I strongly recommend LW put an end to all novelty/roleplaying accounts, or limit them to some corner of the site.
Seconded.
That is simply not true. We have discussion trees here. We can appreciate a joke comment as a joke and continue discourse in a more serious branch.
As Mike Darwin pointed out above, we can’t reliably tell if a joke comment is a joke.
But you know, this isn’t my strongest objection. It’s the noise-to-signal ratio. What I’m really concerned about is the opportunity cost of recognizing a joke as a joke, and having to work harder to find the serious branches of discussion.
How much effort did it actually take you to recognize that the comment “GRYFFINDOR!” by a user named “SortingHat” is a joke? It is silly to be worried about the opportunity cost here.
I imagine someone said a similar thing during Reddit’s infancy.
I was there when someone said a similar thing in Everything2′s infancy.
This seems like a question which should be considered in a top-level post.
At least in the IT and call centre industries in the United States, “India” is synonymous with “cheap outsourcing bastards who are stealing our jobs.” Quite a few customers are actively hostile towards India because they “don’t speak English”, “don’t understand anything”, and are “cheap outsourcing bastards who are stealing proper American jobs”.
I absolutely hate this idiocy, but it’s a pretty compelling case not to try and use India as an emotional hook...
I’d also assume that people are primed to the idea of “Africa = poor helpless children”, so Africa is a much easier emotional hook.
It seems Lucid fox has a point. LW isn’t that heavily dominated by US-based users; also, doesn’t it seem wise for LW users to try and avoid such examples when thinking about difficult problems of ethics or instrumental rationality?
No, but if my example is going to evoke the opposite response in 10-20% of my audience, it’s probably a bad choice :)
Conceded. I was interested in gauging emotional response, though, not an intellectual “shut up and multiply”. The question is less one of math and more one of priorities, for me.
(nods) Absolutely.
Unfortunately, I came installed with a fairly broken evaluator of chances, which tends to consistently evaluate the probability of X happening to person P differently if P = me than if it isn’t, all else being equal… and it’s frequently true that my evaluations with respect to other people are more accurate than those with respect to me.
So I consider judgments that depend on my evaluations of the likelihood (or likely consequences) of something happening to me vs. other people suspect, because applying them depends on data that I know are suspect (even by comparison to my other judgments).
But, sure, that consideration ought not apply to someone sufficiently rational that they judge themselves no less accurately than they judge others.
Then work towards the immortality of another. Dedicate your life to it.
That points out that people who think cryonics might work, but forgo it for fear of being biased towards themselves, seldom consider committing to not getting it for themselves yet providing it for another, and then reconsidering the issue; at the same time it is a discreet call to join the Death Eaters.
I can’t help myself but upvote it.
(nods) Yup, that makes more sense.
Ah, even muggles can be sensible occasionally.
And a good thing too, since we’re all we’ve got.
If you donated that to VillageReach, you’d be saving about 28 lives. If you donated that to GiveWell, you’d help them to find other charities that are similarly effective.
Apologies if I was unclear: For “GiveWell”, please read “The charity most recommended by GiveWell right now, because VillageReach will probably eventually reach saturation and become non-ideal”.
Growing up religious, I assumed I’d have a second, different (not necessarily better) chance at life, one that wouldn’t have an expiration date. As I grew up I saw the possibility grow more distant and less probable in my mind.
I still feel entitled to at least get a try at a second one. Also, for the past few years I have generally felt that many of the things I value will be lost and destroyed, and that they are probably objectively out of my reach to try and save. So perhaps a touch of megalomania also plays a role, or maybe I just want to be the guy to scream:
“YOU MANIACS! YOU BLEW IT UP! OH, DAMN YOU! GODDAMN YOU ALL TO HELL!”
That’s an interesting point. I am signed up for cryonics, but I’m actually rather ambivalent about my life. One major wrinkle is that, if cryonics does succeed, it would almost certainly have to be in a scenario where aging was solved by necessary precursor technologies. For me, a large chunk of my ambivalence is simply the anticipated decline in health as I age. By the same token, existential risks that might prevent me from, for instance, living from age 75 to age 85 tend not to worry me much.
It could also be a revealed preference that they don’t like life enough to give their fate completely into the hands of unknown future people, or simply that they don’t think the probability of successful cryonics + a good future is high enough to justify the costs.
Actually, when you put the argument for cryonics like this, it kind of sounds like a version of Pascal’s Mugging. Perhaps we could call this: Pascal’s Benefactor.
It’s just Pascal’s regular Wager.
Edit: I mean, this presentation makes it look like Pascal’s Wager. Cryonics is too high-probability to actually be Pascal’s Wager.
As MixedNuts pointed out, it’s Pascal’s Wager—yet you have a point. Putting the argument like this might cause the Pascal’s Wager Fallacy Fallacy (which is still one of my favourite posts on this site).
Hm! Someone I know wants to write a post called “Pascal’s Wager Fallacy Fallacy Fallacy”, because (the claim is) that post doesn’t correctly analyze the relevant social psychology involved when someone is afraid of being seen to commit to a very-possibly-indefensible-in-retrospect position where they predict they’ll be seen as to-the-other-person-unjustifiably having chosen a predictably immoral or stupid course of action, or something like that.
See this comment. (Disclaimer 1: it’s mine. Disclaimer 2: my objection isn’t really about the social psychology involved—but I think that gives it more right to use the word “fallacy”.)
Then it would make sense to call it “Not-taking-social-costs-into-consideration Fallacy” but not “Pascal’s Wager Fallacy Fallacy Fallacy”. That post wasn’t really about the feasibility of cryonics, it only made claims about the logical validity of comparing the reasoning behind cryonics to Pascal’s Wager and that’s not something that can be affected by social psychology.