This is a fantastically burdensome explanation for why people don’t sign up for cryonics. Do people who do sign up for cryonics usually have happier lives? (Not that I’ve heard of.) Do the same people who turn down cryonics turn down other forms of medical care? (Not that I’ve heard of.) If we found that people signing up for cryonics were less happy on average, would we be able to construct an equally plausible-sounding symmetrical argument that people with happy, fulfilled lives see no need for a second one? (Yes.)
I hate to go into psychologizing, but I suspect that Mike Darwin wants a grand narrative of Why, Oh Why Cryonics Fails, a grand narrative that makes sense of this shocking and incomprehensible fact and gives some info on what needs to be done to relieve the frustration.
The truth is that people aren’t anything like coherent enough to refuse cryonics for a reason like that.
Asking them about cryonics gets their prerecorded verbal behaviors about “immortality” which bear no relation whatsoever to their feelings about whether or not life is fun.
Remember the fraction of people that take $500 for certain over a 15% chance of $1 million? How could you possibly need any elaborate explanation of why they don’t sign up for cryonics? Risk-aversion, loss-aversion, ambiguity-aversion, status quo bias.
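For concreteness, the expected-value gap in that survey question is enormous; a quick sketch (the $10,000 baseline wealth and logarithmic utility are illustrative assumptions, not anything the survey measured):

```python
import math

# The survey choice: $500 for certain vs. a 15% chance of $1,000,000.
ev_certain = 500
ev_gamble = 0.15 * 1_000_000  # = 150,000: 300x the certain option

# Even a strongly risk-averse (logarithmic) utility function doesn't
# rescue the choice for someone with modest baseline wealth:
wealth = 10_000  # illustrative assumption
eu_certain = math.log(wealth + 500)
eu_gamble = 0.15 * math.log(wealth + 1_000_000) + 0.85 * math.log(wealth)

print(ev_gamble, ev_certain)   # 150000.0 500
print(eu_gamble > eu_certain)  # True: log utility still prefers the gamble
```

So rejecting the gamble takes far more curvature than ordinary risk aversion supplies, which is part of why the choice reads as bias rather than preference.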
Cryonics sounds strange and not-of-our-tribe and they don’t see other people doing it, a feeling expressed in words as “weird”. It’s perceptually categorized as similar to religions or other scams they’ve heard about from the newspaper, based purely on surface features and without any reference to, or remediability by, the strength of the underlying logic; that’s never checked. Mike Darwin thinks that if you have better preservation techniques, people will sign up in droves, because right now they’re hearing about cryonics and rejecting it because the preservation techniques aren’t good enough. This is obviously merely false, and the sort of thing which makes me think that Mike Darwin needs a grand narrative which tells him what to do to solve the problem, the way that Aubrey de Grey thinks that good enough rejuvenation results in mice will grandly solve deathism.
I recently got a phone call saying that, if I recall correctly, around a quarter—or maybe it was half—of all Alcor’s cryonics signups this year, are originating from LW/Yudkowsky/rationality readers. If you want people to sign up for cryonics, the method with by far the strongest conversion ratio is to train them from scratch in advanced sanity techniques. Nothing else that cryonics advocates have tried, including TV ads, has ever actually worked. There’s no simple reason people don’t sign up, no grand narrative, nothing that makes sense of cryonicists’ frustration, people are just crazy in rather simple and standard ways. The only grand narrative for beating that is “soon, your annual signups will equal 10% of the people who’ve gone through a rationality bootcamp plus 1% of the people who’ve read both Eliezer’s nonfiction book and Harry Potter and the Methods of Rationality.”
Rationality Bootcamp and Advanced Sanity Techniques? The first thing sane and rational people do is exercise due diligence in gathering the facts before making crazy and unfounded public statements such as:
1) “I suspect that Mike Darwin wants a grand narrative of Why, Oh Why Cryonics Fails, a grand narrative that makes sense of this shocking and incomprehensible fact and gives some info on what needs to be done to relieve the frustration.” and
2) “Mike Darwin thinks that if you have better preservation techniques, people will sign up in droves, because right now they’re hearing about cryonics and rejecting it because the preservation techniques aren’t good enough.”
Really? Not only do I not believe those things to be true, I’ve never said them. Au contraire, the only grand narrative of why people haven’t embraced cryonics in droves is a very complicated one which, now 40 years later, I’m still learning about and struggling to fully understand. In 1981 I wrote an article (with Steve Bridge) entitled “The Bricks in the Wall” about the many reasons why people find it difficult to embrace cryonics: http://www.alcor.org/cryonics/cryonics8111.txt. If I recall correctly, there were at least a dozen reasons given in that essay, including things like loss of others, loss of self, lack of technical confidence, incompatible worldview, high social cost, fear of temporal displacement… Since that article was written, I’ve learned of many more reasons why people reject cryonics and why they don’t decide to opt for it—which, as it turns out, can be two very different things.
Ironically, much of my career in cryonics has been spent arguing against “the big idea,” “the grand solution,” “the magic bullet,” or “the single rich individual who will provide the solution to the problem of why cryonics has fared so poorly.” There is no single reason, unless you want to consider the myriad individual reasons, in aggregate, as a single cause of the failure. If you insist on that approach, then the best you will do (and you could do far worse) is to note that by any normal market standards, cryonics is a shitty product. It costs a lot, it is unproven, there are many commonplace reasons to believe that existing institutional structures have a poor chance of surviving long enough for the patients to be recovered, and it has been plagued by legitimate scandals and failures. The constraints imposed by the existing medico-legal infrastructure also mean that, statistically, you’ve got a ~30% chance of being autopsied, or otherwise so badly degraded that whoever is recovered from the procedure isn’t very likely to be you (though presumably, if your DNA is intact, a clone could be made). So cryonics doesn’t stack up very well as a normal market product.
Having said that, if you want to ‘sell’ cryonics as part of a brainwashing package, or a religion, I’d be the first to say that it can probably be done. It has been my observation that you can get people to do almost anything if you rob them of their will and subvert their reason. For myself, I don’t think that’s a good idea.
As to the issue of improved preservation techniques causing people to sign up in droves, surely you jest? Any improvement in cryopreservation techniques short of fully reversible suspended animation will ‘only’ have an incremental effect. So for example, if organ cryopreservation for the kidney were achieved tomorrow, and organ banks for kidneys opened their doors 6 months later, I would indeed expect to see an increase in people opting for cryonics, but not a stampede.
Historically, the same was true of the introduction into cryonics of credible ideas for repairing cryoinjury, and of scientific documentation that brain ultrastructure was surviving cryopreservation (under ideal conditions) reasonably well. Both of those advances widened the appeal of cryonics to a very small group of people. Nevertheless, they were significant, because if you have 40 members and such advances give you 240, or 1,040, then that’s a huge benefit.
Finally, if reversible whole body suspended animation were developed tomorrow, the vast majority of people would still not opt for it. In fact, they more or less never would. What would have to happen first is that a relatively small cohort of the population who command respect, authority and power, would have to decide that it is in their interest to have suspended animation become a commonplace medical treatment. By this, I do not mean to imply some focused or intelligent cabal, or group of conspirators, but rather that all kinds of empowered people in many walks of life must be persuaded before the society at large will embrace cryonics. In other words, it will be a process and probably a complex one, before Mrs. Smith sits in her doctor’s office and is either offered, or asks about, suspended animation as a possible alternative to her ending up dead from her advanced ovarian cancer.
In my opinion there are no magic bullets. Rather, there are just a lot of bricks in a large wall of opposition that have to be patiently worried away, one or a few at a time. It’s all too easy to see TV coverage of the Berlin Wall coming down and say, “Jeeze, look how quick and easy that was!” Not. The back-story needs to be considered, and in the case of cryonics that back-story has been unfolding for nearly fifty years—and there are still fewer than 2K people signed up worldwide.
Finally, it is indeed a cruel and unpleasant reality that life isn’t very rewarding for many people, and that it all but completely lacks the zest, joy and wonderful sense of adventure that can be seen in the eyes of any well cared for child. The biology of maturation and aging does much to drain away that sense of wonder and appetite for life. But it is much more likely the case that the way we lead our lives is the primary culprit. I recommend watching multiple episodes of a TV program called “Undercover Boss.” Just watch what people who work in factories, in offices, in laundries and in loo cleaning businesses do all day. It is horrible. It is, in fact, the exact opposite of the situation we DEMAND that children be in. Indeed, one of the most repellent things to people in the West is “child labor.” Well, if the normal workaday world is so horrible for children, what makes it good for adults? And if we propose to live for millennia, and longer, then don’t we, by definition, have to be as children: open, mobile, playful and exploring in our interaction with the world? I have done all kinds of jobs, from working at McDonald’s (2 years) to cleaning loos and dirty motel rooms. Work is a good and character building thing. But it can also be a corrosive and soul destroying thing that robs people of any strong desire to fight for life. Methinks that perhaps you need to work at McDonald’s dressing hamburger buns for a year or two.
Having said that, if you want to ‘sell’ cryonics as part of a brainwashing package, or a religion, I’d be the first to say that it can probably be done. It has been my observation that you can get people to do almost anything if you rob them of their will and subvert their reason. For myself, I don’t think that’s a good idea.
Like it or not, lesswrong is likely one of the greatest allies cryonics has right now—and I would say this is not so much because of all the new recruits and fresh blood, but because of the training in rationality that it provides and ultimately injects into the cryonics community (among the other communities it intersects with). Because of this emphasis, lesswrong is actually pretty good insurance against cryonics becoming a cult.
I just read over my post, and I didn’t say (or imply) anything about lesswrong being a cult. I know almost nothing about lesswrong, beyond reading interesting posts here, from time to time, usually as a result of google searches. My proximate reason for posting here was that Gwern suggested I do so, and also pointed me specifically to this discussion. So I guess my question would be, “Why would anyone think that I would think lesswrong was a cult?”
My remarks about “selling cryonics as part of a cult” are long-standing ones, and go back to decisions that I and others consciously made about how we wanted to proceed back in the 1970s. Having been in a cult briefly from 1974-75, I have a good understanding of the social mechanics of breaking people down and rebuilding them in a way that is “more desirable” to whoever is doing the “human re-engineering.” There was not much question in my mind then or now that many people could be “converted” to cryonics by this expedient. The questions were about “should it be done?” Ironically, I got into that cult because the founders of Alcor thought that the “guru” running the operation would make cryonics a requirement for all of his adherents. -- Mike Darwin
Perhaps I got confused about what you were replying to exactly there.
My big issue with your post is that it seems to assume there are only two options that result in widespread adoption: sell it as a traditional product, or create an odious mind-control cult. What about the option of raising people’s sanity level so they can come to the conclusion on their own?
First, I should point out that I don’t believe the choices about how to increase success for cryonics are binary, as you lay them out above. While I don’t use the same language you do, my argument has been that it is not possible to get people to freely adopt cryonics in larger numbers, unless you change them, as opposed to trying to change cryonics, or how it is “marketed.”
You use the words “raising people’s sanity level” to describe the change you believe is necessary before they are able to choose cryonics rationally. The dictionary definition of sanity is: “The ability to think and behave in a normal and rational manner; sound mental health.” I don’t know whether that is the definition you are using.
Depending upon how you define “rational,” “normal,” and “sound mental health,” we may be on the same page. I would say that most people currently operate with either contra-survival values, or effectively no values. Values are the core behavioral imperatives that individuals use in furtherance of their survival and their well-being. It is easy to mistake these as being all about the individual, but in fact, they necessarily involve the whole community of individuals, because it is not (currently) possible for individual humans to survive without interaction with others. Beyond these baby steps at explanation, there is a lot that must be said, but clearly, not here and not now. What I’ve said here isn’t meant to be rigorous and complete, but rather to be exemplary of the position I hold (and that you asked me about).
It is also the case that not everyone has the biological machinery to make decisions at a very high level of thought or reasoning. And amongst those who do, arguably, few do so much of the time, especially in terms of epistemological questions (and none of us do it all of the time). That’s in part what culture is for. If we considered every decision in exhaustive detail, we’d never get anything done. If the culture is bankrupt, then the situation is very bad, not just for survival of the individual, but for the civilization as a whole. So, you either fix that problem, or you don’t succeed with cryonics. Put another way, the failure of this culture to embrace cryonics and life extension is a symptom of the problem, rather than the primary problem itself. -- Mike Darwin
It is also the case that not everyone has the biological machinery to make decisions at a very high level of thought or reasoning.
What do you mean by this? You seem to imply that there are structural differences inherent in human brains that make some people capable of “a very high level of thought and reasoning” and some people incapable. That seems unlikely or even impossible: see The Psychological Unity of Humankind.
I do agree with you that some people do sometimes make decisions at a high level of thought and reasoning, and some people rarely or never do. Unless we’re talking about actual mental retardation, I think the differences would have to be mostly based on education and culture.
Only someone who hasn’t spent much time around people with 2-digit IQ’s would believe in “the psychological unity of humankind.” The empirical evidence shows that at least in the area of IQ or the General Intelligence Factor (g), marginal differences can have profound practical consequences:
Nick Bostrom in one of his talks even argues that raising everyone’s IQ by 10 points would revolutionize our society for the better, not by making the smartest people a little bit smarter, but by making hundreds of millions of the world’s dumbasses substantially smarter so that they would become more educable, develop lower time preferences and make better decisions in life.
Only someone who hasn’t spent much time around people with 2-digit IQ’s.
I looked at that sentence and thought “but people with 2-digit IQs make up 50% of the population! Surely I’ve spent plenty of time around them!” Then I read the article, and the description of people with IQs below 100 was surprising, to the point that I’m thinking maybe there’s been some sample bias in who I’m spending my time around. (Just because about 50% of the people in my high school had IQs below 100 doesn’t mean they were the ones taking physics and calculus with me, and although I’ve met people in nursing school who are abominable at things that seem obvious to me, like statistics, nursing probably requires fairly high intelligence, so my “unbiased sample” is probably still biased.)
The idea is unpleasant enough that I think I have some ideological bias against intelligence being that important. Probably because it seems unfair that something basically fixed in childhood and partly or mostly genetic (i.e. beyond the individual’s control and “not their fault”) should determine their life outcome. I don’t like the idea...but admitting that intelligence differences exist won’t make it any more awful.
It’s because what EY meant by the psychological unity of humankind was more along the lines of,
… everyone has a prefrontal cortex, everyone has a cerebellum, everyone has an amygdala, everyone has neurons that run at O(20Hz), everyone plans using abstractions.
We might disagree about the last one, but the first four are pretty much fixed.
The truth is that people aren’t anything like coherent enough to refuse cryonics for a reason like that.
I agree with almost all of what you say about no grand narrative and mostly just conformity, but I’m not willing to entirely dismiss this explanation as even a small part of the puzzle. It doesn’t seem much different than the theories that poor people with few life prospects have higher temporal discount rates and are more likely to engage in risky/criminal behavior because they have less to protect. People aren’t coherent enough to think “Well, stealing this watch has a small probability of landing me in prison, but my life now isn’t so satisfying, so I suppose it’s worth the risk, and I suppose it’s worth risking a lot later for a small gain now since I currently have so little”, but there’s some inner process that gives more or less that result.
If even the few people who get past the weirdness factor flinch away from the thought of actually being alive more, I expect that would make a significant difference.
I’m going to try a test question that might differentiate between “cryonics sounds weird” and “I don’t like life enough to want to live even more” on my blog. Obviously no one from here post on that since you already know where it’s going.
If you want people to sign up for cryonics, the method with by far the strongest conversion ratio is to train them from scratch in advanced sanity techniques.
Alternate hypotheses: your followers are mostly technophile singularitarians, and technophile singularitarians are attracted to cryonics independently of rationalist training. Your followers believe there may be a positive singularity, which means the future has a reason to be much better than the present and avoid the unpleasantness Darwin describes in the article. Your followers are part of maybe the one community on earth, outside the cryonics community itself, where the highest-status figures are signed up for cryonics and people are often asked to justify why they have not done so. Your followers are part of a community where signing up for cryonics signals community affiliation. Your followers have actually heard the arguments in favor of cryonics and seen intelligent people take them seriously, which is more than 99.9% of people can say.
Judging by the experiment with the secretly identical question, I seem to have been wrong. Everyone says they would jump at the chance to be reincarnated, so lack of desire to live longer apparently doesn’t play as significant a role in cryonics refusal as I thought.
One of the reasons why I’d accept the angel’s offer but I haven’t signed up for cryonics is that in the former case I’d expect a much larger fraction of my friends to be alive when I’m resurrected.
I answered yes to your hypothetical, but I am not currently signed up for cryonics and have no short- or medium-term plans to do so.
My reasons for the difference:
In your hypothetical, I’ve received a divine revelation that there’s no afterlife, and that reincarnation would be successful. In real life, I have a low estimate of the likelihood of cryonics leading to a successful revival and a low-but-nonzero estimate of the likelihood of an afterlife.
In your hypothetical, there’s no advance cost for the reincarnation option. For cryonics, the advance cost is substantial. My demand curve for life span is downward-sloping with respect to cost.
In your hypothetical, I’m on my deathbed. In real life, I’m 99.86% confident of living at least one more year and 50% confident of living at least another 50 years (based on Social Security life expectancy tables), before adjusting for my current health status and family history of longevity (both of which incline my life expectancy upwards relative to the tables), and before adjusting for expected technological improvements. This affects my decision concerning cryonics in three respects:
a. Hyperbolic discounting.
b. Declining marginal utility of lifespan.
c. A substantial (in my estimation) chance that even without cryonics I’ll live long enough to benefit from the discovery of medical improvements that will make me immortal barring accidents, substantially reducing the expected benefit from cryonics.
In your hypothetical, I’m presented with a choice and it’s an equal effort to pick either one. To sign up for cryonics, I’d need to overcome substantial mental activation costs to research options and sign up for a plan. My instinct is to procrastinate.
Of course, none of this invalidates your hypothetical as a test of the hypothesis that people don’t sign up for cryonics because they don’t actually want to live longer.
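The discounting point above can be made quantitative; a minimal sketch, where k and r are illustrative parameter choices, not estimates of anyone's actual discount rate:

```python
import math

def hyperbolic_weight(t, k=0.1):
    """Present weight of a benefit t years away under hyperbolic discounting."""
    return 1.0 / (1.0 + k * t)

def exponential_weight(t, r=0.05):
    """Present weight of the same benefit under standard exponential discounting."""
    return math.exp(-r * t)

# A cryonics payoff ~50 years out is heavily discounted under either model:
t = 50
print(round(hyperbolic_weight(t), 3))   # 0.167
print(round(exponential_weight(t), 3))  # 0.082
```

Under either curve, a benefit half a century away carries less than a fifth of its face value, which is consistent with a healthy person procrastinating even if they fully believe revival would work.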
I signed up as a result of reading Eliezer’s writings. I don’t think the first two points of your “alternate hypotheses” are really alternatives for me, since I only fall into either of those camps as a result of reading Eliezer.
I was about to comment there saying “I think I know what this is about, and if so he definitely means a younger healthy body rather than an 80-year-old one on the point of death”—but I thought I’d check here, and I’ll respect your preference for no cross-contamination. You might want to do that bit of disambiguation yourself.
Your LJ readers are probably not an entirely representative sample of people who aren’t signed up for cryonics, though perhaps they are of {people who aren’t signed up for cryonics but might be persuaded}.
I recently got a phone call saying that, if I recall correctly, around a quarter—or maybe it was half—of all Alcor’s cryonics signups this year, are originating from LW/Yudkowsky/rationality readers. If you want people to sign up for cryonics, the method with by far the strongest conversion ratio is to train them from scratch in advanced sanity techniques.
Your conclusion doesn’t follow from your premise. Moreover, I don’t know what you mean by “advanced sanity techniques.” I agree that you’ve probably increased the number of cryonics signups substantially, but I doubt that increased rationality has played a significant role.
Cryonics sounds strange and not-of-our-tribe and they don’t see other people doing it, a feeling expressed in words as “weird”. It’s perceptually categorized as similar to religions or other scams they’ve heard about from the newspaper, based purely on surface features and without any reference to, or remediability by, the strength of the underlying logic; that’s never checked
If you want people to sign up for cryonics, the method with by far the strongest conversion ratio is to train them from scratch in advanced sanity techniques.
The implication of the latter quote is that the sanity techniques are being applied, and cryonics is being signed up for largely because of its merits.
I think that the former quote captures more of what is going on. A community is being created in which cryonics isn’t as weird, removing previous barriers without implicating rationality directly.
I have a testable prediction that can partially tease out at least one factor. One disproportionately powerful influence on human beings, in addition to (and mutually reinforcing) group think/behavior, is accepting authority. (It is true that what others do is valid evidence for the validity of what they are doing, and is greater evidence the more the other(s) resemble(s) (an) optimal reasoning system(s) and is/are informed.)
I predict that if/as it becomes better known that Eliezer Yudkowsky signed up with the Cryonics Institute and not Alcor, the ratio of people signing up with Alcor and citing LW/HPMOR to the people signing up with the Cryonics Institute and citing LW/HPMOR will decrease.
“I think that the former quote captures more of what is going on. A community is being created in which cryonics isn’t as weird, removing previous barriers without implicating rationality directly.”
Very much so. People don’t actually believe in the future.
Unfortunately that has an element of truth in it. Cryonics now has a reputation as a paleo-future fad from the 1960s, along with visions of space colonization, the postindustrial leisure society, and the like. Many of the articles about Robert Ettinger’s recent suspension present that as a subtext in describing his career. For example, the Washington Post obit says:
Most scientists also scoffed at Mr. Ettinger’s vision, but his manifesto came as the world was adjusting to the atomic bomb, Sputnik’s robotic spacecraft and a host of other sci-fi-seeming technologies. To many at the time, Mr. Ettinger’s optimism seemed appropriate.
With the implication that in our disillusioned era, Ettinger sounds like a crank and a fool.
I’m not sure that the intent was quite that harsh. “a crank and a fool” wasn’t in the original obit. To view Ettinger’s optimism as more in keeping with the zeitgeist of the 1960s than of the 2010s does not seem wholly unreasonable. Just in stark economic terms, U.S. real median household income peaked back in 1999. The median person in the U.S. has lost quite a lot over the last decade: income, security, access to health care, perhaps social status (as Vladimir_M pointed out). It isn’t unreasonable of them to disbelieve in an improving future.
If you want people to sign up for cryonics, the method with by far the strongest conversion ratio is to train them from scratch in advanced sanity techniques.
With all due respect, where’s the evidence that reading LW/HPMOR trains people in advanced sanity techniques?
It seems reasonably plausible that, for example, Harry’s argument with Dumbledore primes people toward “death is bad”. If they hang around long enough and read what LW has to say about cryonics, that priming moves some fraction of those people toward signing up for cryonics, without their learning anything about e.g. Bayes’ law.
But I don’t know, I don’t know the numbers. What’s the readership of HPMOR versus Alcor’s 2011 signups?
I’ve been really impressed by the focused cross-pollination between transhumanism and rationality that I see at LW. I am not sure I would agree that increased individual rationality is the direct cause of increased cryonics signups because there are other explanations which seem more likely. As others have noted, this is a rare community where it is not weird, and is highly esteemed, to be signed up for cryonics.
And since humans are (at least in many situations) motivated by social factors more than abstract rational considerations, I expect the social factors to have more explanatory weight. That isn’t to say cryonics is not more rational than the alternative of no cryonics! More like this community is one that tries (i.e. individuals are rewarded for trying) to build its standards on rationality, and reject standards which aren’t, and cryonics is able to survive that process. If there were something grossly irrational or unethical about cryonics (as is commonly contended), it would not be able to survive very easily in the memesphere of lesswrong.
But this brings us back to the concept of “advanced” rationality. If you can a) keep your community continually pruned of bad ideas by shooting them down with the strongest logic available (and rewarding this behavior when it crops up), and b) let that community’s norms dominate your decisions when they are strongly rationally grounded, the outcome is that you will be a more rational person in terms of decisions made. This is not less valid from the perspective of “rationality = winning” than divorcing yourself from social impulses and expending loads of willpower to contradict the norm.
This is not less valid from the perspective of “rationality = winning” than divorcing yourself from social impulses and expending loads of willpower to contradict the norm.
It’s more valid! It’s why we have meet-ups, it’s why SingInst runs rationality camps that are highly desired and applied for!
It sometimes seems to me that many Lesswrongers seriously underestimate the degree to which they need to first persuade the skeptical to adopt transhumanism/singularitarianism more generally before cryonics is actually going to appear rational to them.
Revival from cryonics that involved growing a new biological body using the original DNA would have the broadest appeal, but accepting this conception of cryonics requires convincing people either a) that we are going to solve our topsoil and other issues that would actually allow us to feed the exploding biological population that would result from mass use of cryonics, or b) that people should stop having children, neither of which people are likely to accept unless they’re already inclined to singularitarianism (for a) or transhumanism (for b).
Revival from cryonics with a cybernetic body is going to seem less appealing to most people unless they’ve already been convinced that a number of things that are currently inherent in being human are not actually essential to their identity. Revival as an emulation faces the same problem to a vastly greater degree.
TL;DR version – Not accepting transhumanism might be irrational. Not accepting cryonics given that one is not already a transhumanist – not irrational. Lesswrongers should plan their outreach accordingly.
Revival from cryonics that involved growing a new biological body using the original DNA would have the broadest appeal,
We could just use organ printing to create a new body from the neck down. One of the scientists in this field mentions this as a possibility in an article he published in The Futurist a few years ago, though not in the context of a cryonics revival scenario:
I think this is a good point, but perhaps followers of Lesswrong are signing up for cryonics for basically the same reason ordinary people are not. i.e. it’s what high status members of their group do.
I was very surprised to see that too, to the point of questioning whether the result was real, but apparently it is. (The particular result is on page 10 — and possibly elsewhere, I haven’t read it through yet.)
Let’s be fair: that study was measuring the fraction of people that say they’d take an imaginary $500 over an imaginary 15% chance at an imaginary $1 million.
I doubt that most respondents were deliberately messing with the survey results, but I do think that people may use different decision-making resources for amusing hypotheticals vs. for the real world. E.g. the percentage of people getting the Wason Selection Task correct can jump from under 10% to over 70% when you change the task context from more abstract to more concrete. I suspect that for lots of people imaginary money counts as too abstract.
Assuming you weren’t joking, that doesn’t seem likely. The PDF Tesseract linked is about surveying college students, primarily, from elite institutions like Harvard, MIT, Princeton, or CMU. They are people one would especially expect to be making the expected value calculation and going with that.
A hostile AI falls way down the list of things I worry about. I would worry more about things the existence of which we can observe or infer now, like the February Eta Draconis comet.
I don’t worry about it that much either; I don’t expect anyone to build a recursively self-improving AI in my lifetime, and, well, the people who actually live in the future will probably be better at solving the problems of the future than I would be. “Friendly AI” falls under the category of problems that are “important, but not urgent”.
We know (1) comets exist, (2) they can impact planets, and (3) they leave debris trails which show up as meteor showers when these particles intersect with Earth’s atmosphere, like the February Eta Draconids meteor shower which has gotten astronomers’ attention.
By contrast, AIs of the sort discussed here still exist in the realm of science fiction.
How is the science fiction angle relevant? What if they hadn’t been in science fiction?
It may be that the best cheap heuristic to use when evaluating the ideas at a very abstract level is to see if any similar events have happened in the past, or if similar events are written about in science fiction. However, it seems to me that further analysis should be deeper analysis.
An analogy: if, at a glance, I see that someone has an iPhone, they are perhaps more likely than the average person in their demographic to have a MacBook (assume I have a study showing this to be the case). If I really want to know whether or not they have a MacBook, I will investigate their laptop rather than double-check to see if the phone is really an iPhone or double-check the study.
If people here think they have looked into the matter, and seen that the iPhone user’s laptop looks like a Gateway, and runs Windows like a Gateway, they will still be open to arguments at that finer-grained level that the laptop is a disguised MacBook somehow. They will not be well disposed to probabilistic arguments from iPhone ownership, since those have been screened off by thinking at a higher level of detail.
“Has it happened before?” is a fine first question when considering the likelihood of things, unless it is also one’s last question.
Well, the current policy on Xenu is for Eliezer to delete any comments that go into too much detail about him. Look up LessWrong on Rational Wiki if you really want to know.
Warning: According to EY, simply knowing anything about this can have negative effects on your future, up to and including being tortured by the FAI for all eternity. You have been warned.
I recently got a phone call saying that, if I recall correctly, around a quarter—or maybe it was half—of all Alcor’s cryonics signups this year are originating from LW/Yudkowsky/rationality readers. If you want people to sign up for cryonics, the method with by far the strongest conversion ratio is to train them from scratch in advanced sanity techniques. Nothing else that cryonics advocates have tried, including TV ads, has ever actually worked. There’s no simple reason people don’t sign up, no grand narrative, nothing that makes sense of cryonicists’ frustration; people are just crazy in rather simple and standard ways. The only grand narrative for beating that is “soon, your annual signups will equal 10% of the people who’ve gone through a rationality bootcamp plus 1% of the people who’ve read both Eliezer’s nonfiction book and Harry Potter and the Methods of Rationality.”
Rationality Bootcamp and Advanced Sanity Techniques? The first thing sane and rational people do is exercise due diligence in gathering the facts before making crazy and unfounded public statements such as:
1) “I suspect that Mike Darwin wants a grand narrative of Why, Oh Why Cryonics Fails, a grand narrative that makes sense of this shocking and incomprehensible fact and gives some info on what needs to be done to relieve the frustration.” and
2) “Mike Darwin thinks that if you have better preservation techniques, people will sign up in droves, because right now they’re hearing about cryonics and rejecting it because the preservation techniques aren’t good enough.”
Really? Not only don’t I believe those things to be true, I’ve never said that they were. Au contraire, the only grand narrative of why people haven’t embraced cryonics in droves is a very complicated one which, going on 40 years later, I’m still learning about and struggling to fully understand. In 1981 I wrote an article (with Steve Bridge) entitled “The Bricks in the Wall” about the many reasons why people find it difficult to embrace cryonics: http://www.alcor.org/cryonics/cryonics8111.txt. If I recall correctly, there were at least a dozen reasons given in that essay, including things like loss of others, loss of self, lack of technical confidence, incompatible worldview, high social cost, fear of temporal displacement… Since that article was written, I’ve learned of many more reasons why people reject cryonics and why they don’t decide to opt for it—which, as it turns out, can be two very different things.
Ironically, much of my career in cryonics has been spent arguing against “the big idea,” “the grand solution,” “the magic bullet,” or “the single rich individual who will provide the solution to the problem of why cryonics has fared so poorly.” There is no single reason, unless you want to consider the myriad individual reasons, in aggregate, as a single cause of the failure. If you insist on that approach, then the best you will do (and you could do far worse) is to note that by any normal market standards, cryonics is a shitty product. It costs a lot, it is unproven, there are many commonplace reasons to believe that existing institutional structures have a poor chance of surviving long enough for the patients to be recovered, it has been plagued by legitimate scandals and failures, and the constraints imposed by the existing medico-legal infrastructure mean that, statistically, you’ve got a ~30% chance of being autopsied, or of being otherwise so badly degraded that whoever it is that is recovered from the procedure isn’t very likely to be you (e.g., presumably if your DNA is intact a clone could be made). So cryonics doesn’t stack up very well as a normal market product.
Having said that, if you want to ‘sell’ cryonics as part of a brainwashing package, or a religion, I’d be the first to say that it can probably be done. It has been my observation that you can get people to do almost anything if you rob them of their will, and subvert their reason. For myself, I don’t think that’s a good idea.
As to the issue of improved preservation techniques causing people to sign up in droves, surely you jest? Any improvement in cryopreservation techniques short of fully reversible suspended animation will ‘only’ have an incremental effect. So for example, if organ cryopreservation for the kidney were achieved tomorrow, and organ banks for kidneys opened their doors 6 months later, I would indeed expect to see an increase in people opting for cryonics, but not a stampede.
Historically, the same was true of the introduction into cryonics of credible ideas for repairing cryoinjury and of scientific documentation that brain ultrastructure was surviving cryopreservation (under ideal conditions) reasonably well. Both of those advances widened the appeal of cryonics to a very small group of people. Nevertheless, they were significant, because if you have 40 members, and such advances give you 240, or 1,040, then that’s a huge benefit.
Finally, if reversible whole body suspended animation were developed tomorrow, the vast majority of people would still not opt for it. In fact, they more or less never would. What would have to happen first is that a relatively small cohort of the population who command respect, authority and power, would have to decide that it is in their interest to have suspended animation become a commonplace medical treatment. By this, I do not mean to imply some focused or intelligent cabal, or group of conspirators, but rather that all kinds of empowered people in many walks of life must be persuaded before the society at large will embrace cryonics. In other words, it will be a process and probably a complex one, before Mrs. Smith sits in her doctor’s office and is either offered, or asks about, suspended animation as a possible alternative to her ending up dead from her advanced ovarian cancer.
In my opinion there are no magic bullets. Rather, there are just a lot of bricks in a large wall of opposition that have to be patiently worried away, one or a few at a time. It’s all too easy to see TV coverage of the Berlin Wall coming down and say, “Jeeze, look how quick and easy that was!” Not. The back-story needs to be considered, and in the case of cryonics that back-story has been unfolding for nearly fifty years—and there are still less than 2K people signed up worldwide.
Finally, it is indeed a cruel and unpleasant reality that life isn’t very rewarding for many people, and that it all but completely lacks the zest, joy and wonderful sense of adventure that can be seen in the eyes of any well cared for child. The biology of maturation and aging does much to drain away that sense of wonder and appetite for life. But it is much more likely the case that the way we lead our lives is the primary culprit. I recommend watching multiple episodes of a TV program called “Undercover Boss.” Just watch what people who work in factories, in offices, in laundries and in loo cleaning businesses do all day. It is horrible. It is, in fact, the exact opposite of the situation we DEMAND that children be in. Indeed, one of the most repellent things to people in the West is “child labor.” Well, if the normal workaday work is so horrible for children, what makes it good for adults? And if we propose to live for millennia, and longer, then don’t we, by definition, have to be as children: open, mobile, playful and exploring in our interaction with the world? I have done all kinds of jobs, from working at McDonald’s (2 years) to cleaning loos and dirty motel rooms. Work is a good and character building thing. But it can also be a corrosive and soul destroying thing that robs people of any strong desire to fight for life. Methinks that perhaps you need to work at McDonald’s dressing hamburger buns for a year or two.
Your hyperlink is broken, it has a period at the end of it.
The concern that lesswrong might be a cult has been dealt with extensively already.
Like it or not, lesswrong is likely one of the greatest allies cryonics has right now—and I would say this is not so much because of all the new recruits and fresh blood, but because of the training in rationality that it provides and ultimately injects into the cryonics community (among the other communities it intersects with). Because of this emphasis, lesswrong is actually pretty good insurance against cryonics becoming a cult.
I just read over my post, and I didn’t say (or imply) anything about lesswrong being a cult. I know almost nothing about lesswrong, beyond reading interesting posts here, from time to time, usually as a result of google searches. My proximate reason for posting here was that Gwern suggested I do so, and also pointed me specifically to this discussion. So I guess my question would be, “Why would anyone think that I would think lesswrong was a cult?”
My remarks about “selling cryonics as part of a cult” are long-standing ones, and go back to decisions that I and others consciously made about how we wanted to proceed back in the 1970s. Having been in a cult briefly from 1974-75, I have a good understanding of the social mechanics of breaking people down and rebuilding them in a way that is “more desirable” to whoever is doing the “human re-engineering.” There was not much question in my mind then or now that many people could be “converted” to cryonics by this expedient. The questions were about “should it be done?” Ironically, I got into that cult because the founders of Alcor thought that the “guru” running the operation would make cryonics a requirement for all of his adherents. -- Mike Darwin
Perhaps I got confused about what you were replying to exactly there.
My big issue with your post is that it seems to assume there are only two options that result in widespread adoption: sell it as a traditional product, or create an odious mind-control cult. What about the option of raising people’s sanity level so they can come to the conclusion on their own?
First, I should point out that I don’t believe the choices about how to increase success for cryonics are binary, as you lay them out above. While I don’t use the same language you do, my argument has been that it is not possible to get people to freely adopt cryonics in larger numbers, unless you change them, as opposed to trying to change cryonics, or how it is “marketed.”
You use the words “raising people’s sanity level” to describe the change you believe is necessary, before they are able to choose cryonics rationally. The dictionary definition of sanity is: “The ability to think and behave in a normal and rational manner; sound mental health.” I don’t know if that is the definition you are using, or not?
Depending upon how you define “rational,” “normal,” and “sound mental health,” we may be on the same page. I would say that most people currently operate with either contra-survival values, or effectively no values. Values are the core behavioral imperatives that individuals use in furtherance of their survival and their well-being. It is easy to mistake these as being all about the individual, but in fact, they necessarily involve the whole community of individuals, because it is not (currently) possible for individual humans to survive without interaction with others. Beyond these baby steps at explanation, there is a lot that must be said, but clearly, not here and not now. What I’ve said here isn’t meant to be rigorous and complete, but rather to be illustrative of the position I hold (and that you asked me about).
It is also the case that not everyone has the biological machinery to make decisions at a very high level of thought or reasoning. And amongst those who do, arguably, few do so much of the time, especially in terms of epistemological questions (and none of us do it all of the time). That’s in part what culture is for. If we considered every decision in exhaustive detail, we’d never get anything done. If the culture is bankrupt, then the situation is very bad, not just for survival of the individual, but for the civilization as a whole. So, you either fix that problem, or you don’t succeed with cryonics. Put another way, the failure of this culture to embrace cryonics and life extension is a symptom of the problem, rather than the primary problem itself. -- Mike Darwin
It’s partially a reference to this post.
What do you mean by this? You seem to imply that there are structural differences inherent in human brains that make some people capable of “a very high level of thought and reasoning” and some people incapable. That seems unlikely or even impossible: see The Psychological Unity of Humankind.
I do agree with you that some people do sometimes make decisions at a high level of thought and reasoning, and some people rarely or never do. Unless we’re talking about actual mental retardation, I think the differences would have to be mostly based on education and culture.
Only someone who hasn’t spent much time around people with two-digit IQs would believe in “the psychological unity of humankind.” The empirical evidence shows that, at least in the area of IQ or the General Intelligence Factor (g), marginal differences can have profound practical consequences:
Why g Matters: The Complexity of Everyday Life
http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf
Nick Bostrom in one of his talks even argues that raising everyone’s IQ by 10 points would revolutionize our society for the better, not by making the smartest people a little bit smarter, but by making hundreds of millions of the world’s dumbasses substantially smarter so that they would become more educable, develop lower time preferences and make better decisions in life.
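Under the conventional normal model of IQ scores (mean 100, SD 15), a uniform 10-point shift moves a surprisingly large mass of people across any fixed threshold; a quick sketch (the IQ-85 cutoff is just an illustrative choice, not anything Bostrom specifies):

```python
from statistics import NormalDist

baseline = NormalDist(mu=100, sigma=15)  # conventional IQ scaling
shifted = NormalDist(mu=110, sigma=15)   # the same population, +10 points each

# Fraction of the population falling below an illustrative IQ-85 cutoff
before = baseline.cdf(85)
after = shifted.cdf(85)
print(f"below IQ 85: {before:.1%} before shift, {after:.1%} after")
```

The fraction below 85 drops from roughly 16% to under 5% — which is the sense in which a modest uniform shift makes “hundreds of millions of the world’s dumbasses substantially smarter” rather than just nudging the top end.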
I looked at that sentence and thought “but people with 2-digit IQs make up 50% of the population! Surely I’ve spent plenty of time around them!” Then I read the article, and the description of people with IQs below 100 was surprising, to the point that I’m thinking maybe there’s been some sample bias in who I’m spending my time around. (Just because about 50% of the people in my high school had IQs below 100 doesn’t mean they were the ones taking physics and calculus with me, and although I’ve met people in nursing school who are abominable at things that seem obvious to me, like statistics, nursing probably requires fairly high intelligence, so my “unbiased sample” is probably still biased.)
The idea is unpleasant enough that I think I have some ideological bias against intelligence being that important. Probably because it seems unfair that something basically fixed in childhood and partly or mostly genetic (i.e. beyond the individual’s control and “not their fault”) should determine their life outcome. I don’t like the idea...but admitting that intelligence differences exist won’t make it any more awful.
It’s because what EY meant by the psychological unity of humankind was more along the lines of,
We might disagree about the last one, but the first four are pretty much fixed.
Do you refer to your time in the Galambosian cult?
http://rationalwiki.org/wiki/Galambosianism
BTW, according to Galambos’s beliefs about intellectual property, people owe me a royalty every time they use the word “singularitarian.”
I agree with almost all of what you say about no grand narrative and mostly just conformity, but I’m not willing to entirely dismiss this explanation as even a small part of the puzzle. It doesn’t seem much different than the theories that poor people with few life prospects have higher temporal discount rates and are more likely to engage in risky/criminal behavior because they have less to protect. People aren’t coherent enough to think “Well, stealing this watch has a small probability of landing me in prison, but my life now isn’t so satisfying, so I suppose it’s worth the risk, and I suppose it’s worth risking a lot later for a small gain now since I currently have so little”, but there’s some inner process that gives more or less that result.
If even the few people who get past the weirdness factor flinch away from the thought of actually being alive more, I expect that would make a significant difference.
I’m going to try a test question that might differentiate between “cryonics sounds weird” and “I don’t like life enough to want to live even more” on my blog. Obviously no one from here post on that since you already know where it’s going.
Alternate hypotheses: your followers are mostly technophile singularitarians, and technophile singularitarians are attracted to cryonics independently of rationalist training. Your followers believe there may be a positive singularity, which means the future has a reason to be much better than the present and avoid the unpleasantness Darwin describes in the article. Your followers are part of maybe the one community on earth, outside the cryonics community itself, where the highest-status figures are signed up for cryonics and people are often asked to justify why they have not done so. Your followers are part of a community where signing up for cryonics signals community affiliation. Your followers have actually heard the arguments in favor of cryonics and seen intelligent people take them seriously, which is more than 99.9% of people can say.
Judging by the experiment with the secretly identical question, I seem to have been wrong. Everyone says they would jump at the chance to be reincarnated, so lack of desire to live longer apparently doesn’t play as significant a role in cryonics refusal as I thought.
Your readers are still part of a contrarian cluster. (Hell, ciphergoth commented!) But I don’t dispute the result.
One of the reasons why I’d accept the angel’s offer but I haven’t signed up for cryonics is that in the former case I’d expect a much larger fraction of my friends to be alive when I’m resurrected.
So far, have you ever gone a thousand years without making new friends?
I answered yes to your hypothetical, but I am not currently signed up for cryonics and have no short- or medium-term plans to do so.
My reasons for the difference:
In your hypothetical, I’ve received a divine revelation that there’s no afterlife, and that reincarnation would be successful. In real life, I have a low estimate of the likelihood of cryonics leading to a successful revival and a low-but-nonzero estimate of the likelihood of an afterlife.
In your hypothetical, there’s no advance cost for the reincarnation option. For cryonics, the advance cost is substantial. My demand curve for life span is downward-sloping with respect to cost.
In your hypothetical, I’m on my deathbed. In real life, I’m 99.86% confident of living at least one more year and 50% confident of living at least another 50 years (based on Social Security life expectancy tables), before adjusting for my current health status and family history of longevity (both of which incline my life expectancy upwards relative to the tables), and before adjusting for expected technological improvements. This affects my decision concerning cryonics in three respects: a. Hyperbolic discounting. b. Declining marginal utility of lifespan. c. A substantial (in my estimation) chance that even without cryonics I’ll live long enough to benefit from the discovery of medical improvements that will make me immortal barring accidents, substantially reducing the expected benefit from cryonics.
In your hypothetical, I’m presented with a choice and it’s an equal effort to pick either one. To sign up for cryonics, I’d need to overcome substantial mental activation costs to research options and sign up for a plan. My instinct is to procrastinate.
Of course, none of this invalidates your hypothetical as a test of the hypothesis that people don’t sign up for cryonics because they don’t actually want to live longer.
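The discounting point (item a above) is easy to make concrete. A toy sketch, with invented numbers rather than anyone’s actual utilities, of how both standard exponential and hyperbolic discounting shrink the present value of a revival roughly 50 years away:

```python
# Toy discounting sketch; the value, rate, and k below are invented for
# illustration, not estimates of anyone's real preferences.
def exponential_discount(value, years, rate=0.03):
    """Standard exponential discounting at a constant annual rate."""
    return value / (1 + rate) ** years

def hyperbolic_discount(value, years, k=0.1):
    """Simple hyperbolic discounting: value / (1 + k * t)."""
    return value / (1 + k * years)

v = 1_000_000  # subjective value of revival, in arbitrary units
print(exponential_discount(v, 50))  # ~228,000: most of the value is gone
print(hyperbolic_discount(v, 50))   # ~167,000: likewise
```

Either way, a benefit 50 years out is worth a small fraction of its face value today, which compounds with the substantial up-front cost of signing up.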
I signed up as a result of reading Eliezer’s writings. I don’t think the first two points of your “alternate hypotheses” are really alternatives for me, since I only fall into either of those camps as a result of reading Eliezer.
I was about to comment there saying “I think I know what this is about, and if so he definitely means a younger healthy body rather than an 80-year-old one on the point of death”—but I thought I’d check here, and I’ll respect your preference for no cross-contamination. You might want to do that bit of disambiguation yourself.
Your LJ readers are probably not an entirely representative sample of people who aren’t signed up for cryonics, though perhaps they are of {people who aren’t signed up for cryonics but might be persuaded}.
Saw this after your post—guessed it was cryonics but didn’t spill the beans.
Same here.
Followers?
Your conclusion doesn’t follow from your premise. Moreover I don’t know what you mean by “advanced sanity techniques.” I agree that you’ve probably increased the number of cryonics signups substantially but I doubt that increased rationality has played a significant role.
The implication of the latter quote is that the sanity techniques are being applied, and cryonics is being signed up for largely because of its merits.
I think that the former quote captures more of what is going on. A community is being created in which cryonics isn’t as weird, removing previous barriers without implicating rationality directly.
I have a testable prediction that can partially parse out at least one factor. One disproportionately powerful influence on human beings in addition to (and mutually reinforcing) group think/behavior is accepting authority. (It is true that what others do is valid evidence for the validity of what they are doing, and is greater evidence the more the other(s) resemble(s) (an) optimal reasoning system(s) and is/are informed.)
I predict that if/as it becomes better known that Eliezer Yudkowsky signed up with the Cryonics Institute and not Alcor, the ratio of people signing up with Alcor and citing LW/HPMOR to the people signing up with the Cryonics Institute and citing LW/HPMOR will decrease.
“I think that the former quote captures more of what is going on. A community is being created in which cryonics isn’t as weird, removing previous barriers without implicating rationality directly.”
Very much so. People don’t actually believe in the future.
Unfortunately that has an element of truth in it. Cryonics now has a reputation as a paleo-future fad from the 1960s, along with visions of space colonization, the postindustrial leisure society and the like. Many of the articles about Robert Ettinger’s recent suspension present that as a subtext in describing his career. For example, the Washington Post obit says:
http://www.washingtonpost.com/local/obituaries/from-phyics-teacher-to-founder-of-the-cryonics-movement/2011/07/24/gIQAupuIXI_story.html
With the implication that in our disillusioned era, Ettinger sounds like a crank and a fool.
I’m not sure that the intent was quite that harsh. “a crank and a fool” wasn’t in the original obit. To view Ettinger’s optimism as more in keeping with the zeitgeist of the 1960s than of the 2010s does not seem wholly unreasonable. Just in stark economic terms, U.S. real median household income peaked back in 1999. The median person in the U.S. has lost quite a lot over the last decade: income, security, access to health care, perhaps social status (as Vladimir_M pointed out). It isn’t unreasonable of them to disbelieve in an improving future.
With all due respect, where’s the evidence that reading LW/HPMOR trains people in advanced sanity techniques?
It seems reasonably plausible that, for example, Harry’s argument with Dumbledore primes people toward “death is bad”. If they hang around long enough and read what LW has to say about cryonics, that priming nudges some fraction of those people toward subscribing, without them learning anything about e.g. Bayes’ law.
But I don’t know, I don’t know the numbers. What’s the readership of HPMOR versus Alcor’s 2011 signups?
I’ve been really impressed by the focused cross-pollination between transhumanism and rationality that I see at LW. I am not sure I would agree that increased individual rationality is the direct cause of increased cryonics signups because there are other explanations which seem more likely. As others have noted, this is a rare community where it is not weird, and is highly esteemed, to be signed up for cryonics.
And since humans are (at least in many situations) motivated by social factors more than abstract rational considerations, I expect the social factors to have more explanatory weight. That isn’t to say cryonics is not more rational than the alternative of no cryonics! More like this community is one that tries (i.e. individuals are rewarded for trying) to build its standards on rationality, and reject standards which aren’t, and cryonics is able to survive that process. If there were something grossly irrational or unethical about cryonics (as is commonly contended), it would not be able to survive very easily in the memesphere of lesswrong.
But this brings us back to the concept of “advanced” rationality. If you can a) keep your community continually pruned of bad ideas by shooting them down with the strongest logic available (and rewarding this behavior when it crops up), and b) let that community’s norms dominate your decisions when they are strongly rationally grounded, the outcome is that you will be a more rational person in terms of decisions made. This is not less valid from the perspective of “rationality = winning” than divorcing yourself from social impulses and expending loads of willpower to contradict the norm.
It’s more valid! It’s why we have meet-ups, it’s why SingInst runs rationality camps that are highly desired and applied for!
(Yes, I agree with you)
It sometimes seems to me that many Lesswrongers seriously underestimate the degree to which they need to first persuade the skeptical to adopt transhumanism/singularitarianism more generally before cryonics is actually going to appear rational to them.
Revival from cryonics that involved growing a new biological body using the original DNA would have the broadest appeal, but accepting this conception of cryonics requires convincing people either a) that we are going to solve our topsoil and other issues that would actually allow us to feed the exploding biological population that would result from mass use of cryonics, or b) that people should stop having children; neither of these is something people are likely to accept unless they’re already inclined to singularitarianism (for a) or transhumanism (for b).
Revival from cryonics with a cybernetic body is going to seem less appealing to most people unless they’ve already been convinced that a number of things that are currently inherent in being human are not actually essential to their identity. Revival as an emulation faces the same problem to a vastly greater degree.
TL;DR version – Not accepting transhumanism might be irrational. Not accepting cryonics given that one is not already a transhumanist – not irrational. Lesswrongers should plan their outreach accordingly.
We could just use organ printing to create a new body from the neck down. One of the scientists in this field mentions this as a possibility in an article he published in The Futurist a few years ago, though not in the context of a cryonics revival scenario:
http://sks.sirs.es.vrc.scoolaid.net/cgi-bin/hst-article-display?id=SNY5270-0-8423&artno=0000169222&type=ART&shfilter=U&key=Organs%20(Anatomy)&title=Beyond%20Cloning%3A%20Toward%20Human%20Printing&res=Y&ren=N&gov=Y&lnk=N&ic=N
I think this is a good point, but perhaps followers of Lesswrong are signing up for cryonics for basically the same reason ordinary people are not, i.e., it's what high-status members of their group do.
Wow. I don’t think I’d heard that one.
I was very surprised to see that too, to the point of questioning whether the result was real, but apparently it is. (The particular result is on page 10 — and possibly elsewhere, I haven’t read it through yet.)
Your link doesn’t work for me.
Let’s be fair: that study was measuring the fraction of people that say they’d take an imaginary $500 over an imaginary 15% chance at an imaginary $1 million.
I doubt that most respondents were deliberately messing with the survey results, but I do think that people may use different decision-making resources for amusing hypotheticals vs. for the real world. E.g. the percentage of people getting the Wason Selection Task correct can jump from under 10% to over 70% when you change the task context from more abstract to more concrete. I suspect that for lots of people imaginary money counts as too abstract.
I guess some folks could really use $500.
Assuming you weren’t joking, that doesn’t seem likely. The PDF Tesseract linked is about surveying college students, primarily from elite institutions like Harvard, MIT, Princeton, or CMU. They are people one would especially expect to make the expected value calculation and go with that.
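For concreteness, here is the expected-value calculation the survey choice implies (a minimal sketch; the figures are the ones quoted in the thread, and the comparison assumes risk neutrality):

```python
# Certain $500 vs. a 15% chance at $1,000,000.
p_win = 0.15
prize = 1_000_000
sure_thing = 500

expected_value = p_win * prize  # 0.15 * $1,000,000 = $150,000

# A risk-neutral agent prefers the gamble by a factor of 300.
assert expected_value > sure_thing
print(expected_value / sure_thing)
```

Even heavy risk-aversion has trouble justifying a 300-fold gap, which is what makes the survey result striking.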
In that case, let’s say I was joking ;)
Another painful statistic I ran into during some terrorism research: a study of US Army personnel choosing between large lump sums and pensions (in the $25,000-$50,000 range), pg 48 of http://www.rau.ro/intranet/Aer/2001/9101/91010033.pdf
Enlisted personnel who were planning on leaving had a nominal discount rate of 57.2%.
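To see how extreme a 57.2% nominal discount rate is, compare the present value of a pension at that rate against a more typical rate (a sketch with hypothetical figures chosen to sit in the $25,000-$50,000 range mentioned above; the actual payment schedules in the study differ):

```python
def pension_pv(payment, years, rate):
    """Present value of an annual pension discounted at a nominal rate."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical choice: a $30,000 lump sum vs. $8,000/year for 20 years.
lump_sum = 30_000
pv_typical = pension_pv(8_000, 20, 0.05)    # ~$99,700: the pension clearly wins
pv_observed = pension_pv(8_000, 20, 0.572)  # ~$13,980: now the lump sum wins

# Only at an extreme discount rate does taking the lump sum look rational.
assert pv_typical > lump_sum > pv_observed
```

In other words, the revealed preference for the lump sum implies valuing future dollars at a small fraction of their face value.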
How far up in “advanced sanity techniques” do I have to go before I become a Clear and can learn about Xenu and the body thetans?
We’ll tell you about them right now, if you want! ;)
A hostile AI falls way down the list of things I worry about. I would worry more about things the existence of which we can observe or infer now, like the February Eta Draconis comet.
I don’t worry about it that much either; I don’t expect anyone to build a recursively self-improving AI in my lifetime, and, well, the people who actually live in the future will probably be better at solving the problems of the future than I would be. “Friendly AI” falls under the category of problems that are “important, but not urgent”.
So you worry about things that we can infer exist now, but not those that we can infer could exist?
I am more used to seeing people dismiss AI by denying the validity of inference than by citing it, so you might want to elaborate on your perspective.
We know (1) comets exist, (2) they can impact planets, and (3) they leave debris trails which show up as meteor showers when these particles intersect with Earth’s atmosphere, like the February Eta Draconis meteor shower which has gotten astronomers’ attention.
By contrast, AIs of the sort discussed here still exist in the realm of science fiction.
How is the science fiction angle relevant? What if they hadn’t been in science fiction?
It may be that the best cheap heuristic to use when evaluating the ideas at a very abstract level is to see if any similar events have happened in the past, or if similar events are written about in science fiction. However, it seems to me that further analysis should be deeper analysis.
An analogy: if, at a glance, I see that someone has an iPhone, they are perhaps more likely than the average person in their demographic to have a MacBook (assume I have a study showing this to be the case). If I really want to know whether or not they have a MacBook, I will investigate their laptop rather than double-check to see if the phone is really an iPhone or double-check the study.
If people here think they have looked into the matter, and seen that the iPhone user’s laptop looks like a Gateway, and runs Windows like a Gateway, they will still be open to arguments at that finer grained level that the laptop is a disguised MacBook somehow. They will not be well disposed to probabilistic arguments from iPhone ownership, since those have been screened off by thinking at a higher level of detail.
“Has it happened before?” is a fine first question when considering the likelihood of things, unless it is also one’s last question.
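The screening-off point in the analogy above can be made numerical. A toy Bayes calculation (all probabilities here are hypothetical, chosen only to illustrate the shape of the argument), where M = "owns a MacBook", I = "owns an iPhone", and E = "laptop looks like a Gateway", with I and E assumed conditionally independent given M:

```python
p_m = 0.3
p_i = {True: 0.8, False: 0.3}   # P(I | M), P(I | not M)
p_e = {True: 0.05, False: 0.9}  # P(E | M), P(E | not M)

def posterior(observed_cues):
    """P(M | cues), for a list of likelihood tables whose cue was observed true."""
    num, den = p_m, 1 - p_m
    for table in observed_cues:
        num *= table[True]
        den *= table[False]
    return num / (num + den)

print(round(posterior([p_i]), 2))       # iPhone cue alone: ~0.53
print(round(posterior([p_e]), 2))       # direct inspection alone: ~0.02
print(round(posterior([p_i, p_e]), 2))  # both: ~0.06, inspection dominates
```

The indirect cue is strong on its own, but once the direct inspection is in evidence, it moves the posterior only slightly, which is the sense in which it has been "screened off".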
Well, the current policy on Xenu is for Eliezer to delete any comments that go into too much detail about him. Look up LessWrong on Rational Wiki if you really want to know.
Warning: according to EY, simply knowing anything about this can have negative effects on your future, up to and including being tortured by the FAI for all eternity. You have been warned.