You Only Live Twice
“It just so happens that your friend here is only mostly dead. There’s a big difference between mostly dead and all dead.”
-- The Princess Bride
My co-blogger Robin and I may disagree on how fast an AI can improve itself, but we agree on an issue that seems much simpler to us than that: At the point where the current legal and medical system gives up on a patient, they aren’t really dead.
Robin has already said much of what needs saying, but a few more points:
• Ben Best’s Cryonics FAQ, Alcor’s FAQ, Alcor FAQ for scientists, Scientists’ Open Letter on Cryonics
• I know more people who are planning to sign up for cryonics Real Soon Now than people who have actually signed up. I expect that more people have died while cryocrastinating than have actually been cryopreserved. If you’ve already decided this is a good idea, but you “haven’t gotten around to it”, sign up for cryonics NOW. I mean RIGHT NOW. Go to the website of Alcor or the Cryonics Institute and follow the instructions.
• Cryonics is usually funded through life insurance. The following conversation from an Overcoming Bias meetup is worth quoting:
Him: I’ve been thinking about signing up for cryonics when I’ve got enough money.
Me: Um… it doesn’t take all that much money.
Him: It doesn’t?
Me: Alcor is the high-priced high-quality organization, which is something like $500-$1000 in annual fees for the organization, I’m not sure how much. I’m young, so I’m signed up with the Cryonics Institute, which is $120/year for the membership. I pay $180/year for more insurance than I need—it’d be enough for Alcor too.
Him: That’s ridiculous.
Me: Yes.
Him: No, really, that’s ridiculous. If that’s true then my decision isn’t just determined, it’s overdetermined.
Me: Yes. And there’s around a thousand people worldwide [actually 1400] who are signed up for cryonics. Figure that at most a quarter of those did it for systematically rational reasons. That’s a high upper bound on the number of people on Earth who can reliably reach the right conclusion on massively overdetermined issues.
• Cryonics is not marketed well—or at all, really. There are no salespeople who get commissions. There is no one to hold your hand through signing up, so you’re going to have to get the papers signed and notarized yourself. The closest thing out there might be Rudi Hoffman, who sells life insurance with cryonics-friendly insurance providers (I went through him).
• If you want to securely erase a hard drive, it’s not as easy as writing it over with zeroes. Sure, an “erased” hard drive like this won’t boot up your computer if you just plug it in again. But if the drive falls into the hands of a specialist with a scanning tunneling microscope, they can tell the difference between “this was a 0, overwritten by a 0” and “this was a 1, overwritten by a 0”.
There are programs advertised to “securely erase” hard drives using many overwrites of 0s, 1s, and random data. But if you want to keep the secret on your hard drive secure against all possible future technologies that might ever be developed, then cover it with thermite and set it on fire. It’s the only way to be sure.
Pumping someone full of cryoprotectant and gradually lowering their temperature until they can be stored in liquid nitrogen is not a secure way to erase a person.
See also the information-theoretic criterion of death.
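The erasure analogy can be made concrete with a toy model. This is a sketch, not real drive physics: the 0.9 remanence factor and the linear model are invented purely to illustrate the distinction between a coarse logical read and a fine-grained analog one.

```python
# Toy model of the "overwritten bits leave traces" claim. NOT real drive
# physics: the 0.9 remanence factor and the linear model are invented
# purely to illustrate the logical-read vs. analog-read distinction.

def write_fresh(bit: int) -> float:
    """A freshly written bit fully magnetizes the region: +1.0 or -1.0."""
    return 1.0 if bit else -1.0

def overwrite(field: float, new_bit: int, strength: float = 0.9) -> float:
    """Overwriting drags the analog field toward the new value, but a
    fraction of the old magnetization survives (hypothetical model)."""
    target = 1.0 if new_bit else -1.0
    return strength * target + (1.0 - strength) * field

# "this was a 1, overwritten by a 0" vs. "this was a 0, overwritten by a 0"
was_one = overwrite(write_fresh(1), 0)   # -0.8 in this toy model
was_zero = overwrite(write_fresh(0), 0)  # -1.0 in this toy model

# A coarse logical read (just the sign) reports 0 for both ...
assert was_one < 0 and was_zero < 0
# ... but a fine-grained analog measurement still distinguishes them.
assert abs(was_one - was_zero) > 0.1
```

The analogy being drawn is that “legally dead” is the coarse logical read; the essay’s claim is that the fine-grained state of a cryopreserved brain may still be recoverable.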
• You don’t have to buy what’s usually called the “patternist” philosophy of identity to sign up for cryonics. After reading all the information off the brain, you could put the “same atoms” back into their old places.
• “Same atoms” is in scare quotes because our current physics prohibits particles from possessing individual identities. It’s a much stronger statement than “we can’t tell the particles apart with current measurements” and has to do with the notion of configuration spaces in quantum mechanics. This is a standard idea in QM, not an unusual woo-woo one—see this sequence on Overcoming Bias for a gentle introduction. Although patternism is not necessary to the cryonics thesis, we happen to live in a universe where “the same atoms” is physical nonsense.
There’s a number of intuitions we have in our brains for processing a world of distinct physical objects, built in from a very young age. These intuitions, which may say things like “If an object disappears, and then comes back, it isn’t the same object”, are tuned to our macroscopic world and generally don’t match up well with fundamental physics. Your identity is not like a little billiard ball that follows you around—there aren’t actually any billiard balls down there.
Separately and convergently, more abstract reasoning strongly suggests that “identity” should not be epiphenomenal; that is, you should not be able to change someone’s identity without changing any observable fact about them.
If you go through the aforementioned Overcoming Bias sequence, you should actually be able to see intuitively that successful cryonics preserves anything about you that is preserved by going to sleep at night and waking up the next morning.
• Cryonics, to me, makes two statements.
The first statement is about systematically valuing human life. It’s bad when a pretty young white girl goes missing somewhere in America. But when 800,000 Africans get murdered in Rwanda, that gets 1⁄134 the media coverage of the Michael Jackson trial. It’s sad, to be sure, but no cause for emotional alarm. When brown people die, that’s all part of the plan—as a smiling man once said.
Cryonicists are people who’ve decided that their deaths, and the deaths of their friends and family and the rest of the human species, are not part of the plan.
I’ve met one or two Randian-type “selfish” cryonicists, but they aren’t a majority. Most people who sign up for cryonics wish that everyone would sign up for cryonics.
The second statement is that you have at least a little hope in the future. Not faith, not blind hope, not irrational hope—just, any hope at all.
I was once at a table with Ralph Merkle, talking about how to market cryonics if anyone ever gets around to marketing it, and Ralph suggested a group of people in a restaurant, having a party; and the camera pulls back, and moves outside the window, and the restaurant is on the Moon. Tagline: “Wouldn’t you want to be there?”
If you look back at, say, the Middle Ages, things were worse then. I’d rather live here than there. I have hope that humanity will move forward further, and that’s something that I want to see.
And I hope that the idea that people are disposable, and that their deaths are part of the plan, is something that fades out of the Future.
Once upon a time, infant deaths were part of the plan, and now they’re not. Once upon a time, slavery was part of the plan, and now it’s not. Once upon a time, dying at thirty was part of the plan, and now it’s not. That’s a psychological shift, not just an increase in living standards. Our era doesn’t value human life with perfect consistency—but the value of human life is higher than it once was.
We have a concept of what a medieval peasant should have had, the dignity with which they should have been treated, that is higher than what they would have thought to ask for themselves.
If no one in the future cares enough to save people who can be saved… well. In cryonics there is an element of taking responsibility for the Future. You may be around to reap what your era has sown. It is not just my hope that the Future be a better place; it is my responsibility. If I thought that we were on track to a Future where no one cares about human life, and lives that could easily be saved are just thrown away—then I would try to change that. Not everything worth doing is easy.
Not signing up for cryonics—what does that say? That you’ve lost hope in the future. That you’ve lost your will to live. That you’ve stopped believing that human life, and your own life, is something of value.
This can be a painful world we live in, and the media is always telling us how much worse it will get. If you spend enough time not looking forward to the next day, it damages you, after a while. You lose your ability to hope. Try telling someone already grown old to sign up for cryonics, and they’ll tell you that they don’t want to be old forever—that they’re tired. If you try to explain to someone already grown old, that the nanotechnology to revive a cryonics patient is sufficiently advanced that reversing aging is almost trivial by comparison… then it’s not something they can imagine on an emotional level, no matter what they believe or don’t believe about future technology. They can’t imagine not being tired. I think that’s true of a lot of people in this world. If you’ve been hurt enough, you can no longer imagine healing.
But things really were a lot worse in the Middle Ages. And they really are a lot better now. Maybe humanity isn’t doomed. The Future could be something that’s worth seeing, worth living in. And it may have a concept of sentient dignity that values your life more than you dare to value yourself.
On behalf of the Future, then—please ask for a little more for yourself. More than death. It really… isn’t being selfish. I want you to live. I think that the Future will want you to live. That if you let yourself die, people who aren’t even born yet will be sad for the irreplaceable thing that was lost.
So please, live.
My brother didn’t. My grandparents won’t. But everything we can hold back from the Reaper, even a single life, is precious.
If other people want you to live, then it’s not just you doing something selfish and unforgivable, right?
So I’m saying it to you.
I want you to live.
“I pay $180/year for more insurance than I need—it’d be enough for Alcor too.” Sorry, mind rephrasing? I’ve read that statement several times, and I just don’t follow it.
Also, CI doesn’t do neuro, just whole body preservation, right? And Alcor’s membership fees are independent of whether one’s signed up as a neuro or whole body patient? (Near as I can tell from looking over their site, that’s sadly the case.) (Just trying to decode all the relevant things to see if I actually can sign up right now after all. I want to.)
I have $250K of life insurance of which only $50K is needed for CI, and only $120K (I think) would be needed for Alcor.
Oooh, okay, thanks.
Can you point me to any positive evidence that the information needed for resuscitation survives death and freezing, rather than being carried in volatile state?
Without that, it seems to me that your argument boils down to “you can’t prove it won’t work.” Which is true, but not much of an inducement to part with cash.
I understand this is from ages ago but is worth a response. See the Wiki page on Deep hypothermic circulatory arrest (a procedure used in some surgeries today):
The existence and success of this procedure seems like incredibly strong evidence that people have a purely chemical identity stored in their heads. When preservation techniques that are timely and non-lossy (which I consider modern cryonics to be) are applied, you should be able to be successfully re-animated.
There’s a bit of a difference, as I’m sure you’re aware, between being refrigerated for seven minutes and frozen for a decade or more. Proteins denature, lipid membranes break down, cells and tissues are destroyed either by expanding liquids or toxic antifreeze compounds.
Some of that damage might be reversible. Much could well be confined to fungible parts of the body; having to replace an ear or a spleen shouldn’t overly impact whether the reanimated person is you or not. But in the absence of knowledge about how and where “chemical identity” is stored in the brain and how vulnerable those systems are to damage (much less how one would go about putting the bits back together), it is preposterous to make a definite claim that cryonics is reversible.
Some damage cannot be reversed, some information cannot be recovered/decrypted within the time until heat death, and there is not sufficient evidence to believe that a frozen brain is any less ‘erased’ than a letter which has been dissolved in acid.
There is, in fact, ridiculously good evidence to think that more information is preserved by cryonics than by a letter dissolved in acid. The incredibly important questions are 1) how much information is preserved, and 2) whether it is the right information.
This.
I agree that a future world with currently-existing people still living in it is more valuable than one with an equal number of newly-created people living in it after the currently-existing people died, but to show that cryonics is a utilitarian duty you’d need to show not just that this is a factor but that it’s an important enough factor to outweigh whatever people are sacrificing for cryonics (normalcy capital!). Lots of people are dead already so whether any single person lives to see the future can constitute at most a tiny part of the future’s value.
By signing up for cryonics, do I increase the probability that I am a simulation of history by a post-singularity entity?
I have a severe inability to make big choices such as this, and I have cryocrastinated for quite some time. This year, I became a vegetarian after a lot of difficult reflection, and I am doing the same with cryonics.
I feel that there just isn’t that much to lose by not signing up, since non-existence does not scare me. Signing up, at that point, becomes a choice between the Precautionary Principle vs Proactionary Principle. Even a small chance that the world I wake up in will be horrible is enough to not want to sign up at all, even despite the potential gain.
Your pleas were very heartfelt. I am leading a healthy lifestyle, because I do wish to be around to experience the future and help make the world a better place. I find the odds of it actually working to be low, but if the expected utility is very high, then even a small investment is worth it. I am quite risk-averse though, even though this is illogical. I will re-read that article about Crisis of Faith, and hopefully come to a rational conclusion.
But if the drive falls into the hands of a specialist with a scanning tunneling microscope, they can tell the difference between “this was a 0, overwritten by a 0” and “this was a 1, overwritten by a 0”.
Not really true.
They can tell that there were various 1s and 0s—but telling what order they were in is impossible (“data written to the disk prior to the data whose recovery is sought will interfere with recovery just as much as data written after—the STM microscope can’t tell the order in which magnetic moments are created”).
Not to mention that reading bits with an STM takes so long as to be pointless (“it would take more than a year to scan a single platter with recent MFM technology, and tens of terabytes of image data would have to be processed”) - and that’s pretty “secure”, in combination, for any plausible meaning of the term.
However, more on topic, you said “Pumping someone full of cryoprotectant and gradually lowering their temperature until they can be stored in liquid nitrogen is not a secure way to erase a person”.
Not at the deepest theoretical level, perhaps (though perhaps not—I don’t see any reason to assume that the cryonics process might not in fact be destroying the patterns enough to make there be no information to recover by any means that turns out to actually be possible in the future; remember we know only that it preserves “remarkable fidelity” in the fine structure… we have no idea if that’s sufficient fidelity).
However, it’s secure enough a way to “delete” them if for whatever reason they never get thawed out other than to throw their bodies away.
What about the possibility (probability?) of a sufficient economic downturn or failure of the company before the technology exists to “restore” the preserved “dead”, even if we ignore the possibility that current cryopreservation might simply not be preserving well enough?
Alcor is honest about previous thawing events (at other projects), and is also honest enough to promise only that their investments are the most sound they can make. A future great depression sounds a lot like a great time for a thaw, to me—if the bonds tank, nobody’s buying liquid nitrogen by the truckful.
The conclusion that the calculus must come out in favor of cryogenic preservation (rather than, say, investing one’s money in either one’s living family or some productive trust, if one cares about “the future”) seems unsupportable.
I agree that one can honestly and rationally make a choice in its favor, but this post reads more like an attempt at religious conversion to cryo-mania than anything else.
Call me back when a creature has been cryopreserved and then fully restored, and we can use the language of certainty, and talk in terms of “believing in the future”.
Well, it seems like many wives don’t agree with you two. That article really surprises me though; did the Significant Others of any to-be frozen people on here express significant hostility towards cryonics?
I have a suspicion that at least part of the Hostile Wife phenomenon comes from an asymmetry in the way people deal with hostile Significant Others. It’s possible that while men with hostile wives will sign up anyway, or make public the reason why they haven’t, women with hostile husbands will just forget about cryonics. Does anyone know how to gather evidence on this one way or another?
I have signed up with Alcor. When I suggest to other people that they should sign up the common response has been that they wouldn’t want to be brought back to life after they died.
I don’t understand this response. I’m almost certain that if most of these people found out they had cancer and would die unless they got a treatment and (1) with the treatment they would have only a 20% chance of survival, (2) the treatment would be very painful, (3) the treatment would be very expensive, and (4) if the treatment worked they would be unhealthy for the rest of their lives; then almost all of these cryonics rejectors would take the treatment.
One of the primary costs of cryonics is the “you seem insane tax” one has to pay if people find out you have signed up. Posts like this will hopefully reduce the cryonics insanity tax.
It’s painful, expensive, leaves you in ill health the rest of your (shortened) life, and you’ve only got a 20% chance?
Why would someone take that deal?
This is more than slightly odd. I am considering cryonics but I would never take that cancer treatment. It seems like a horrible deal.
I find the idea of cryonics having a 20% chance of working to be orders of magnitude too optimistic.
I actually had a nightmare recently where I was diagnosed with an aggressive cancer and would have preferred not to go through treatment, but felt pressured by other, more aggressively anti-death members of the rationality community. Was afraid people would think I didn’t care about them if I didn’t try to stay alive longer to be with them, etc. (I’m an ICU nurse; I have a pretty good S1 handle on how horrific a lot of life saving treatments are, and how much quality of life it’s possible to lose.)
I’ve thought about cryonics, but haven’t made a decision either way; right now, my feeling is that I don’t have anything against the principle, but that it doesn’t seem likely enough to work for the cost-benefit analysis to come out positive.
Can you describe the reasons that make you think it is not likely enough to work? Totally understandable if you can’t articulate such reasons, but I’m just curious about what the benchmarks are that you might find useful in informing your probability estimate.
That is to say, it’s unlikely that actual reversible cryopreservation would be possible; if it were, the technique probably wouldn’t be called cryonics anymore. So, other more intermediate steps that you’d find informative might be good to know about.
I’m confused. What is the relationship between Alcor and the Cryonics Institute? Is it either-or? What is the purpose of yearly fees to them if you can just take out insurance which will cover all the costs in the event of your death?
Eliezer, although you and Robin agree on the general principle, Robin has signed up with Alcor, while you have signed up with CI. (Despite the fact that you say you could afford Alcor also.) How much of a disagreement is this, and what does it reflect?
More generally, how should one rationally approach this decision?
I’m curious about a couple of things.
If this is a rational choice, why does Robin jeopardize his future by driving around in a convertible? And if you cannot both be frozen and donate your organs, how do you justify it morally?
Call me back when a creature has been cryopreserved and then fully restored, and we can use the language of certainty, and talk in terms of “believing in the future”.
You can do better than that. For example, what if you die, and after X years people are routinely reanimated and live healthy lives at whatever age they wish? You would feel like Mr Silly then; at least, you would if you were alive.
If you wait for being able to talk about something “in the language of certainty” then you also advocate ignoring existential risks, as when they happen, it is all over. Is this very rational?
If you feel like using your brain, there are ways to get close to “certainty” (defined as the probability of occurrence being above some threshold between 0 and 1) that some event will occur, without observing it occur. Science is not fast, after all.
Steve,
A life insurance policy for 50k-120k could be used to save dozens to hundreds of lives funding medical services in Africa (http://www.givewell.net/PSI), or to reduce existential risk.
The use of the financial argument against cryonics is absurd.
Even if the probability of being revived is sub-1%, it is worth every penny since the consequence is immortality (or at least another chance at life). If you don’t sign up, your probability of revival is 0% (barring a “The Light of Other Days” scenario) and the consequence is death—for eternity.
By running a simple risk analysis, the choice is obvious.
The only scenario where a financial argument makes sense is if you’re shortening your life by spending more than you can afford, or if spending money on cryonics prevents you from buying some future tech that would save your life.
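The risk analysis gestured at above can be written out as a back-of-the-envelope expected-value calculation. This is a sketch with made-up placeholder numbers: `p_revival`, `years_gained`, and the cost figures are assumptions for illustration, not estimates of the real odds.

```python
# Back-of-the-envelope expected-value sketch of the argument above.
# Every number is a hypothetical placeholder, not a claim about real odds.
p_revival = 0.005      # "sub-1%" chance that cryonics works
years_gained = 1000.0  # stand-in for "another chance at life"
annual_cost = 300.0    # rough membership-plus-insurance figure from the post
years_paying = 50.0

expected_years = p_revival * years_gained
total_cost = annual_cost * years_paying
print(f"{expected_years:.1f} expected extra years for ${total_cost:,.0f}, "
      f"i.e. ${total_cost / expected_years:,.0f} per expected year")
```

Note that if `years_gained` is pushed toward infinity, the calculation stops constraining anything at all, which is exactly why critics compare this argument to Pascal’s wager.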
What if I am facing death and have an estate in the low six figures, and I can afford one cryonic journey to the future, or my grandchildren’s education plus, say, charitable donations enough to save 100 young children who might otherwise live well into a lovely post-Singularity world that would include life extension, uploading, and so on? Would that be covered under “can’t afford it”? If my personal survival is just not that high a priority to me (compared to what seem to me much better uses of my limited funds) does that mean I’m ipso facto irrational in your book, so my argument ‘doesn’t make sense’?
I do think cryonics is a very interesting technology for saving the data stored in biological human bodies that might otherwise be lost to history, but that investing in a micro-bank or The Heifer Project might have greater marginal utility in terms of getting more human minds and their contents “over the hump” into the post-singularity world many of us hope for. I just don’t see why the fact that it’s /me/ matters.
What if the choice is “use my legacy cash to cryopreserve a few humans chosen at random” versus “donate the same money to help preserve a whole village worth of young people in danger who can reasonably be expected to live past the Singularity if they can get past the gauntlet of childhood diseases” (the Bill Gates approach) versus “preserve a lovely sampling of as many endangered species as seems feasible”? I would argue that any of these scenarios would make sense.
Also, I think that people relying on cryo would do well to lifelog as much as possible. Continuous video footage from inside the home and some vigorous diary-type writing or recording might be a huge help in reconstructing a personality, in addition to the inevitably fuzzy measurements of the positions of microtubules in frozen neurons and the like. It would at least give future builders of human emulations a baseline to check how good their emulations were. Is this a well-known strategy? I cannot recall seeing it discussed, but it seems obvious.
You may be interested, at the moment, in donating to the Brain Preservation Foundation.
I have personally found arguments that cryonics actually works (with significant probability) unconvincing, so that’s what I do.
Carl, why say that about cryonics funding in particular rather than money spent on going to the movies? Also, anything to do with Africa has to be extremely carefully targeted or it ends up being worse than useless—actively harmful—this should always be mentioned in the same sentence, since Africa has been actively harmed by most aid money spent there.
Sufficient popularity of cryonics, if the world lasts that long, would benefit a very large number of people. African aid couldn’t compete, only existential risk mitigation could.
I’m willing to accept such a reply from people who (a) don’t go to the movies and (b) spend a large fraction of their disposable income on existential risk mitigation, but not otherwise.
burger flipper, making one decision that increases your average statistical lifespan (signing up for cryonics) does not compel you to trade off every other joy of living in favor of further increases. And if the hospital or government or whoever can’t be bothered to wait for my organs until I am done with them, that’s their problem, not mine.
The number of people living today because their ancestors invested their money in themselves, their status, and their children: all of us.
The number of people living today because they or someone else invested their money in cryonics or some other scheme to live forever: zero.
Not saying that things won’t change in the future, but there is a tremendously strong bias to spend your resources on ambulatory people and new people, because that has been what has worked previously.
Women might have stronger instincts in this respect as they have been more strongly selected for the ability to care for their children (unlike men).
If you want to change this state of affairs, swiftly at least, you have to tap into our common psyche as successful replicators and have it pass the “useful for fitness test”. This would be as easy as making it fashionable or a symbol of high status, get Obama to sign up publicly and I think you would see a lot more interest.
High status has been something sought after because it gets you better mates and more of them (perhaps illicitly).
Maybe this is why I find it so hard to be emotionally troubled by the thought of dying, as long as my children survive. (As long as anyone survives, really, but the thought of children is more emotionally compelling.)
Even though the decision is overdetermined, I’ve been cryocrastinating. I’ll schedule it with more urgency.
I’d like to sincerely thank Eliezer and Robin for their encouragement to sign up for cryonics. Although I haven’t finalized my life insurance arrangements, I’m in that process. It took me well under a year from hearing a serious argument for cryonics for me to apply, so I find it pretty disheartening when I hear stories about people taking far longer to decide. I’m only 18 and don’t have a lot of good sources of income, but cryonics is cheap and one of the best decisions I’ve ever made.
I’m signed up, and I consider it one of my better decisions.
I use ACS, for what it’s worth, which hasn’t been mentioned here that I’ve seen.
-Robin
Personally I don’t consider it to say anything much—since that’s some 99.9% of humanity. What could so many folk possibly have in common—besides their humanity?
For me, signing up for cryonics indicates a bizarre world view—very different from my own—and perhaps susceptibility to a particular type of con job.
Bambi, I’ll grant you that eating your vegetables and smoking aren’t mutually exclusive, but I do wonder about the rationality of a smoker who makes certain to take their vitamins daily.
And as to the organs, I was thinking more of the potential recipients’ suffering and not that of the hospitals.
Even if the probability of being revived is sub-1%, it is worth every penny, since the consequence is immortality.
By that logic, one should pay to have prayers said for one’s soul.
One could make a Drake’s-Equation-style estimate of that “sub-1%” probability, but the dominant term is this: what are the odds that evolution, with no selection pressure whatsoever, has designed the brain so that none of its contents are stored in a volatile way? Why write everything to disk if the computer never gets turned off?
Without hard evidence that the brain does that, I don’t see any reason to rate the probability of revival significantly higher than zero. That’s without even getting into whether it’s really practical to extract what information there is.
Maybe there is such evidence and I just haven’t seen it. I repeat: can anyone point me to some?
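[The Drake’s-Equation-style estimate mentioned above can be sketched as a product of conjunctive probabilities. Every factor value below is a made-up illustration, not an estimate anyone in this thread endorsed:]

```python
# Drake's-Equation-style sketch: P(revival) as a product of conjunctive factors.
# Every probability below is a hypothetical illustration, not a real estimate.
factors = {
    "preservation retains the relevant information": 0.3,
    "the organization keeps you frozen long enough": 0.4,
    "revival technology is eventually developed": 0.2,
    "someone chooses to revive you": 0.5,
}

p_revival = 1.0
for _description, p in factors.items():
    p_revival *= p

print(f"P(revival) = {p_revival:.3f}")  # prints: P(revival) = 0.012
```

[The point of the exercise is structural: with several conjunctive factors, the product is dominated by whichever factor is smallest, which is why the “volatile storage” question above matters so much.]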
Well, there’s the fact that people have lots of seizures, which as far as we can tell are very chaotic patterns of electrical activity that scramble all information contained in ongoing oscillatory patterns. (Note the failure of spike sorting algorithms upon recruitment of neurons into seizure activity. http://m.brain.oxfordjournals.org/content/early/2015/07/17/brain.awv208.abstract) Not only that, but TMS (https://en.m.wikipedia.org/wiki/Transcranial_magnetic_stimulation — effectively introducing large random currents in large chunks of brain tissue) doesn’t seem to produce any long-term effects as long as you don’t start actually causing tissue damage through hydrolysis.
On the molecular side, we know that our core personality is resilient to temporary flooding of the brain with a large array of different transmitter analogs, antagonists, and other chemicals. (All of the drugs that people do) Many of these chemicals are synthetic ones that we didn’t co-evolve with.
I think it’s very reasonable to suspect that most of the important information that composes the individual is stored in genetic regulatory networks, and in the connectome. Chemical gradients aren’t very information dense, and while we might a priori expect there to be a lot of information in ephemeral electrical activity, I think seizures and TMS are both good demonstrations that this information can at least be restarted given the structure of the network.
Final thing to consider: there’s much more individual variation at the level of anatomy than there is at the level of electrophysiological properties. There are a relatively small number of morphological categories of neurons (100s), which are fairly stereotyped across humans. But brain anatomy varies enormously from subject to subject. (Take into account that, as a cognitive neuroscientist, I’m probably biased in this regard.)
There are still some missing pieces: working memory, for instance, CAN’T be stored in the connectome, because plasticity mechanisms and genetic mechanisms aren’t fast enough.
At the very least though, I think there’s a lot of hope. After all, the connectome and genetic information can be well preserved even with plasticization and slicing. My money’s on those being the critical pieces of information.
The value of immortality does not seem infinite to me. Merely very large. The odds that magic or religion will save you seem vastly tiny. Sufficiently tiny that they are bad uses of time and energy even if the benefits are potentially very large.
If you’re looking for rationalizations for not giving in to Pascal’s Wager here, a better one might be “If I wanted to maximize my chance at immortality, paying $100 for prayers is less effective than investing $100 into cryonics.”
You can only “invest $100” in cryonics by buying an insurance policy with a $100 premium that covers a very short period, where the chance of immortality is the probability that cryonics works multiplied by the probability that you will die during the exact period covered by the premium before you have to pay a second premium. Because the chance that you will die during the period is non-zero, the return on the investment is also non-zero. However, the overhead for this investment is huge (and bear in mind that overhead includes such things as “everyone thinks you’re crazy for making a single payment that only returns anything if you die within the week.”)
Furthermore, what does it even mean to say “this instance of Pascal’s Mugging maximizes my return, over several instances of Pascal’s mugging”? If it’s an instance of Pascal’s mugging, the return is useless information and maximizing it is meaningless.
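[The arithmetic behind treating a single short-period premium as an “investment” can be sketched as follows. The $100 premium is from the comment above; both probabilities are hypothetical placeholders, not figures from the thread:]

```python
# Expected payoff of a single short-period premium, per the argument above.
# Both probabilities are hypothetical placeholders.
premium = 100.0          # dollars for one short coverage period
p_die_in_period = 1e-4   # assumed chance of dying during that exact period
p_cryonics_works = 0.01  # assumed chance preservation and revival both succeed

# Chance that this particular premium ever "returns" a revival:
p_payoff = p_die_in_period * p_cryonics_works
print(f"P(payoff for this premium) = {p_payoff:.0e}")
```

[Under these illustrative numbers, the payoff probability per premium is tiny, which is the commenter’s point about the overhead of framing a single premium as an investment.]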
Ah, desert-dryness of speech: capable of making even immortality sound boring and unappealing!
I don’t think it takes a degree in nano-tech or cutting edge medicine to be more confident in the power of future technology than in the power of praying for souls. Even if it is granted that there aren’t great reasons for supposing cryonic preservation is viable, it is a huge and unwarranted leap to say that it is as intellectually vacuous as the idea of prayers affecting souls.
Spare me the dramatics!
I continue to not understand the economics of reviving people in the future. Your argument here seems to be that reviving frozen heads, no matter the cost, is a moral obligation. That does not make sense to me.
Thought experiment: tomorrow, John Q. Scientist reveals that he can, for the cost of $1 million, revive any person who has been cryogenically frozen. Say 1000 people are frozen cryogenically in an acceptable state right now. Do we revive them? Why? What if they will only get (maybe) another year? 5 years? 10 years? Who pays for it? What if it’s $100 million?
The only people I imagine willing to pay for the operation are loved ones. Very rich loved ones. And in a large portion of the scenarios I imagine, there’s at least a few generations between yourself and the technology to defrost people. Who will pay when there’s no remaining loved ones? Is it a moral responsibility to spend the money? Why?
1) Why do you think a revival will remain prohibitively expensive forever?
2) If you’ve got no living relatives, then one reason for revival could be commercial. A company could simply revive you for a certain cost and then you have to pay them back in the long run.
The latter couldn’t even be argued to be a forced contract without consent. Considering that you were taking the trouble of freezing yourself, it can safely be assumed that you’d be more than happy and willing to pay for the cost of your revival in the same way you’d pay off any ordinary debt. Hell, I should own that company.
I might pose a similar thought experiment: if a scientist today discovered he could raise the dead, restoring anyone who had ever lived, what would we do with that power? Do we have a moral responsibility to “save” all humans ever? Even if resurrection were free, the earth couldn’t (currently) support a population of every human (and perhaps some pets?) who’s ever lived. We’d have to decide who gets to live and who doesn’t. Restoring past-people will almost certainly entail displacing some people who might otherwise have been born. Why do we privilege those that already got to live a “full” (typical human) life over the millions of potential humans that could populate the earth in our stead?
Furthermore, I don’t see much of a distinction between deciding who gets revived and who doesn’t, on the one hand, and killing the people we don’t want around, on the other. Faced with a dilemma of “who gets to live”, unless we aim for a sort of “equality of time alive” out of a sense of fairness (in which case, most modern humans are running a deficit), it seems we would kill the assholes to make room for the cool people from history. Is that inhumane?
Or consider: maybe we’d stop giving birth entirely, so that all the existent people can take turns being the ones alive. Does a world where every person is old, where no one is falling in love for the first time, where children are absent, so that we can have more life, seem like a good one?
I’m asking these questions sincerely. Maybe that is the world we want.
I’d expect the answer to be similar to an analogous situation involving birth. If everyone had more children than they could afford to raise, society would collapse. We like to think that since the children are not responsible for their situation, we as a society would choose to support them, but this only is possible because the number of people who have children and demand that society support them is limited. At some point the drain on resources would make it impossible to support them as a society, and we would have to let them starve, and/or not permit immigrants from countries with high birth rates.
The same would go for resurrection. If you resurrect someone, you are responsible for supporting them for a maximum of 18 years and a minimum that depends on how long they are dead (so you’re not on the hook for 18 years if you resurrect someone who died last week). If you resurrect more people than you can afford to support, this is treated like having more children than you can afford to support; the resurrected will have to live in poverty or starve. There will be a safety net to help some of them but it will be imperfect and it may not be possible to help them all. And of course you don’t allow immigration from countries who like resurrecting lots of people and sending them across the border to take advantage of our social services.
If it is significantly easier to resurrect than to have children, we may need to have penalties that we wouldn’t tolerate in the case of children, such as arresting people if they resurrect more than X others and do not support them, something we currently do only for child support cases.
Eliezer, well written! :)
Grant, yes.
Burger I think you overestimate the effect of agreeing to be an organ donor.
with no selection pressure whatsoever, has designed the brain so that none of its contents are stored in a volatile way?
Well, people exposed to very low temperatures have ended up in states where they were considered clinically dead, and then revived at least up to an hour later, with the cold preserving their brain even at a point where there was no blood circulation. ( http://news.bbc.co.uk/2/hi/health/620609.stm for one example.) AFAIK, their brain worked just fine afterwards, even though “volatile” functions had been interrupted (but I’m under the impression that there may have been a minor amnesia of the moments just before falling unconscious). Also, lower mammals have been frozen and brought back with no ill effects.
Why write everything to disk if the computer never gets turned off?
Don’t take the computer metaphor too literally. There’s no separate disk and RAM in the brain, after all.
Tim Walters said: By that logic, one should pay to have prayers said for one’s soul.
Even if the probability of cryonics revival is miniscule, I would still bet that it’s higher than (a) the existence of a deity, (b) who could be effectively prayed to, (c) who would care about my prayers and answer them, and (d) the existence of a soul separate from material existence.
Bill Mill said: Thought experiment: tomorrow, John Q. Scientist reveals that he can, for the cost of $1 million, revive any person who has been cryogenically frozen. Say 1000 people are frozen cryogenically in an acceptable state right now. Do we revive them? Why? What if they will only get (maybe) another year? 5 years? 10 years? Who pays for it? What if it’s $100 million?
But it wouldn’t happen tomorrow. It would happen far enough in the future that the present (which will be the past, by then) will be interesting for historical reasons. If we could revive frozen people from the 1800s, why wouldn’t we? From my view of human psychology, many of us would be thrilled to bring back people from 100+ years ago. The main barrier would be the cost, assuming the technology was there. And the more people we unfreeze, the more economies of scale come into play. The price of reviving people will only go down as time passes, due to technology improving.
Of course, we could have a scenario where museums pay to revive us, and then keep us as an exhibit to recoup the cost. That would make a great sci-fi story.
“Carl, why say that about cryonics funding in particular rather than money spent on going to the movies? Also, anything to do with Africa has to be extremely carefully targeted or it ends up being worse than useless—actively harmful—this should always be mentioned in the same sentence, since Africa has been actively harmed by most aid money spent there.”
Agreed, that’s why I linked to GiveWell, an organization that evaluates charities for their demonstrated effectiveness, but it’s worth being explicit about it here for those who don’t check out the linked site.
“Sufficient popularity of cryonics, if the world lasts that long, would benefit a very large number of people. African aid couldn’t compete, only existential risk mitigation could.”
I would say the following:
If you expect a Singularity to occur by 2050, and have a pot of money to spend on ensuring that current people make it, the best 3rd world health initiatives will be more effective per individual you pay to help directly than paying for cryonics. Even if you expect a later Singularity, you can invest the money in a fund to be spent on the cheapest triage opportunities, which involve malaria and the like.
Once the cheap infectious-disease triage opportunities are exhausted, cryonics can be scaled to a much larger total population.
If cryonics were to become acceptable and desired, people would pay for their own, so there may be more chance for individual adoption of the practice by OB readers to meaningfully boost cryonics growth than to trigger a growth in effective philanthropy.
Existential-risk mitigation is clearly much better than either.
“I’m willing to accept such a reply from people who (a) don’t go to the movies and (b) spend a large fraction of their disposable income on existential risk mitigation, but not otherwise.”
What if they agree with Derek Parfit that altruism towards our future selves and altruism to others are on a par with each other? Movies are a present temptation, but cryonics does not force itself on them, so they’re not motivated to favor their frozen selves over others who might make up the future population?
Bill: Alternative scenario: Tomorrow FedExKinkos announces a service through which, for $3.25, you can revive a random victim of the flu epidemic of 1918.
[I accidentally posted this on the previous thread and am shamelessly reposting here in case someone on the fence would have missed it.]
I signed up for cryonics with Alcor last summer after learning of it in the spring and doing extensive research. I am a college student in my early twenties, and the combined fee for my $250,000 level term life insurance policy and cryonics membership is EASILY affordable: $40 monthly.
I don’t plan on dying any time soon, but I have peace of mind knowing that I got a good deal on insurance while healthy and that I am not procrastinating on a potentially life-and-death decision. I consider cryonics arrangements to be an excellent investment even if there is only a 0.1% chance of success.
I urge anyone dragging their feet because of financial concerns to at least research it enough to estimate the cost if you were to sign up today. You may find that working part-time for minimum wage would not exclude you!
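[For what it’s worth, the affordability claim above can be turned into rough lifetime arithmetic. The $40/month figure and the 0.1% success chance come from the comment; the 60-year payment horizon is an assumption added for illustration:]

```python
# Rough lifetime-cost arithmetic for the arrangement described above.
# $40/month and the 0.1% success figure come from the comment;
# the 60-year payment horizon is an assumed illustration.
monthly_cost = 40.0
years = 60
total_paid = monthly_cost * 12 * years  # total premiums + membership dues
p_success = 0.001                       # the commenter's pessimistic bound

print(f"total paid over {years} years: ${total_paid:,.0f}")
print(f"implied cost per unit of revival probability: ${total_paid / p_success:,.0f}")
```

[The implied “cost per unit of revival probability” is one way to compare cryonics against other longevity spending, though the inputs here are illustrative rather than actuarial.]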
Bill Mill: I continue to not understand the economics of reviving people in the future. Your argument here seems to be that reviving frozen heads, no matter the cost, is a moral obligation. That does not make sense to me.
He isn’t saying that it will happen “at any cost”. Obviously, there will be a time when reviving people will be too expensive. But you’re assuming that it will stay too expensive forever, even if people were, say, revived gradually during a period of two thousand years. That seems bizarre, especially considering how much money societies spend on charity, welfare and historical research even today—let alone how much they can spend in a post-Singularity future, when poverty might very well be entirely eradicated.
I agree with Carl that investing in existential risk mitigation is likely to be much more cost-effective than investing in cryonics. Eliezer, I don’t see movies and I donate most of my income to risk mitigation. Do you agree that donating $10,000 to SIAI is preferable to investing $10,000 in cryonics? If so, why not recommend the former rather than the latter? (And why don’t you donate your cryonics money to SIAI?) If cryonics subscribers come to feel they have a larger stake in the future, and only after subscribing decide to make larger donations to risk mitigation, then I could see your blog appeal being justified. However, I expect this is rarely the case. It seems better to encourage donations to SIAI, FHI, CRN, et al.
Well, people exposed to very low temperatures have ended up in states where they were considered clinically dead,
13.7°C isn’t “very low” for the relevant purposes, and she wasn’t dead before she got cold like cryonics purchasers would be.
even though “volatile” functions had been interrupted
I’m not sure we can conclude this at 13.7°C.
Interesting case, though.
Also, lower mammals have been frozen and brought back with no ill effects.
I’ve only seen this with cooling and super-cooling, not with freezing or vitrification.
Don’t take the computer metaphor too literally. There’s no separate disk and RAM in the brain, after all.
Of course. I was riffing on Eliezer’s metaphor.
Even if the probability of cryonics revival is miniscule, I would still bet that it’s higher than (a) the existence of a deity, (b) who could be effectively prayed to, (c) who would care about my prayers and answer them, and (d) the existence of a soul separate from material existence.
The point isn’t which tiny probability is tinier, it’s that unless you place literally infinite value on immortality (and if you do, you’d be living very differently from anyone I’ve ever met), you have to conclude that some avenues aren’t worth pursuing at $80-100K a pop.
Even if you place literally infinite value on being immortal, I imagine you’d rather spend the time wasted praying on something more likely to make you immortal, eg minimizing your chance of heart disease.
I strongly second everyone advocating SIAI over cryonics, especially Carl’s last paragraph.
I also suspect that informational reconstruction will make cryonics unnecessary, but not strongly enough that I wouldn’t be signed up even without the above concern.
I’d be very interested in hearing Robin’s explanation of why he signed up with Alcor rather than CI, and Eliezer’s explanation for why he chose CI rather than Alcor.
I would really like a full poll of this blog listing how many people are signed up for cryonics. Personally, I’m not, but I would consider it if existential risk was significantly lower OR my income was >$70K and would definitely do it if both were the case AND SIAI had $15M of pledged endowment.
I’d like to be a little more clear on this, I’ve heard a few different things.
Are there arrangements I can make which will ensure that a week after my death, my head will be full of cryopreserving fluid and my heart will be beating in someone else’s chest?
If we could revive frozen people from the 1800s, why wouldn’t we?
If you could revive a frozen Genghis Khan, would you? What kind of life would he be able to live, if he were revived today?
Someone from the 1800s would suffer severe culture shock if he or she were revived today. Just think of what they’d have to deal with, from their perspective:
1) A nigger President
2) Sodomites and faggots embracing publicly and actually getting married to each other
3) Parents and teachers forbidden from properly disciplining their children when they aren’t respectful. Which they never are.
4) Ordinary young women dressing—and acting—like whores! Obscenity and shamelessness everywhere!
5) Heathen superstition and atheism replacing good, honest faith in God and the Bible
Chances are, it would look like most of what they found good and righteous in the world is gone. Would you inflict that on someone?
I’m almost certain that if most of these people found out they had cancer and would die unless they got a treatment and (1) with the treatment they would have only a 20% chance of survival, (2) the treatment would be very painful, (3) the treatment would be very expensive, and (4) if the treatment worked they would be unhealthy for the rest of their lives; then almost all of these cryonics rejectors would take the treatment.
I’d turn down the cancer treatment, if my relatives would let me. I certainly wouldn’t pay for it myself. Spend the money on, say, saving people from malaria in Africa—why is my own life that special?
As I said on an earlier cryonics thread, I don’t want the future to contain “me”. I want it to contain something better than “me”. Why resurrect a southern white slave owner from the 1800s and try to re-educate him when you could just have a baby instead?
HughRistik:
Of course, we could have a scenario where museums pay to revive us, and then keep us as an exhibit....
Die a free man, wake up a slave. Sounds like a winning plan.
Back to the main article:
Eliezer, maybe I’m a lousy Bayesian, but I don’t see how the “if you don’t choose to buy into a cryonics package, you must not value human life” argument holds any water. That’s salesman talk. Can’t one demonstrate one’s valuing of human life simply by using one’s time carefully? A finite life well-lived is not a life wasted.
Equating the current facts of death with medieval superstition (“deaths are part of the plan”) is absurd. You wouldn’t speak of someone who accepted Newtonian mechanics as a simple rube who believes that “collisions are part of the plan.” So why are you using your blog to hawk the biological equivalent of perpetual motion machines? Perpetual motion machines for the exclusive use of rich people in the West, I might add ($40 is even a big burden for some families here, let alone the Rwandans you’re pouring your heart out over).
Anyway, some of us have to stay behind and run the fridges. Even harder than reviving the dead will be building a culture that doesn’t kick the plug out and thaw your damn fool head out too soon.
Eliezer,
So, cryonics is a way for life insurance companies to avoid paying out immediately if your body is frozen. Instead they would pay for keeping your dead body in frozen form, and then quit paying after ~50 years when all your close relatives are dead.
Exactly. What are the chances that typical information that is [not securely] deleted today will be even tried to be restored? The chances are close to zero. The chances that average frozen body would be tried to be restored are close to zero too.
Not true. People are perfectly replaceable (for example, by other people).
Would you like thousands of Middle Ages’ peasants around you? Would you like to deal with their diseases? Would you spend your time and efforts resurrecting them instead of working on cloning the most advanced versions of human beings and/or AGIs?
Would you like to get thousands of modern Rwandan refugees around you right now?
Not really. I believe that the future would be better than the present. I also believe that people (or AGIs) in the future would be rational and wouldn’t maintain informational junk in high quantity. (“Informational junk” = lots of frozen bodies).
“to live” or “to be frozen to death”?
Of course, we could have a scenario where museums pay to revive us, and then keep us as an exhibit....
Chances are, it would look like most of what they found good and righteous in the world is gone. Would you inflict that on someone?
“The ‘wild man’ caught the imagination and attention of thousands of onlookers and curiosity seekers. He was then moved to the Museum of Anthropology at the University of California, Berkeley where he lived the remainder of his life in evident contentment....”
http://en.wikipedia.org/wiki/Ishi
I want you to live.
Fascinating. When most people say that in support of cryonics they are expressing naive future optimism. When Eliezer says it, he says it as the guy actually attempting the one impossible thing that could make cryonic preservation have greater than negligible utility!
Thanks, Noah. Sign me up. Shall I bring a pair of Kangaroos with me?
Exactly. What are the chances that typical information that is [not securely] deleted today will be even tried to be restored? The chances are close to zero. The chances that average frozen body would be tried to be restored are close to zero too.
I give you my personal guarantee that post-singularity, I will do all in my power to revive everyone.
“to live” or “to be frozen to death”?
People in comas, even if completely unresponsive, can still be healed with a small amount of technological assistance and a huge amount of biological self repair (mechanisms that were constructed by evolution discarding countless bodies). What is the difference between that and healing people with a great deal of technology and very little biological assistance, a.k.a. repairing de-animated people in cryonic suspension? None.
@Dennis
“The chances that average frozen body would be tried to be restored are close to zero too.”
Hmm. There is still great interest in Oetzi, yes?
EY, I’m not following your comment about CI versus Alcor. What do you see as the benefits of choosing Alcor, and what does your age have to do with choosing to forego them?
Chances are, it would look like most of what they found good and righteous in the world is gone. Would you inflict that on someone?
How about you let him quickly experience the last 200 years for himself. As quickly or as slowly as necessary, maybe even actually living through each subjective day, or maybe doing the whole thing in five years. Allow his mind to reconfigure itself to our newer (improved) understanding of morality by the same process by which ours did.
“Burger I think you overestimate the effect of agreeing to be an organ donor.”
That’s disappointing. I assumed with all the calls to allow payments to increase organ donations that the ones I’m giving gratis would get used and provide benefit.
And since there is the possibility of eternal life with cryonics, why isn’t there a Pascal’s-wager-type situation? Not saying you should don a bubble wrap suit, but I’d think you would avoid convertibles, motorcycles, and other potential brain-liquefiers.
The process that improved our morality involved the hard-core bigots dying off. I suspect that it’s not a coincidence that the civil rights movement didn’t gain any traction until after all the Civil War veterans were dead.
Morality advances one funeral at a time.
On behalf of the Future, then—please ask for a little more for yourself. More than death. It really… isn’t being selfish. I want you to live. I think that the Future will want you to live. That if you let yourself die, people who aren’t even born yet will be sad for the irreplaceable thing that was lost.
So please, live.
standing ovation
Cryonics Institute, if you don’t pay for standby from a separate agency, is the cheap form of cryonics—they’re driven by the consideration of keeping the cost as low as possible to get as many people as possible on board. Alcor seems to me to be higher quality, and has a higher annual cost of membership. Which provider you go with should be determined by your age and probability of death, and by your financial situation. I’m younger than Robin and I expect poorer. So while I can’t speak for Robin, it makes sense that he would be with Alcor and I would be with CI.
“You Only Live Twice” is a beautiful, moving post, Eliezer.
Two sentences that stand out:
“If you’ve been hurt enough, you can no longer imagine healing.”
and
“And it [the capital “F” Future] may have a concept of sentient dignity that values your life more than you dare to value yourself.”
Eliezer’s not stupid. He’s innovating a competitive mythology to promote increasing the base of cryonics users. Carl, it’s hard to catch you making asinine comments, but I think you slipped in this thread. So if a new blogger pops up quoting scripture to encourage Christians to donate their brains to brain banks, are you going to debunk their arguments that scripture supports such donations? It’s a subtraction from our commons for status points we’re not even going to award you, in my opinion.
We will likely be unimaginably stupid and poor compared to the people of the future. I’m trying to understand why they’d want to revive us, but in doing so I’m trying to understand why our world has as much compassion in it as it does.
Somewhat off-topic, but have there been any OB posts on this issue? Economic inequality continues to increase on Earth, yet we seem to be treating each other better than we have in the past. Rich nations could more easily enslave poorer ones than ever before in history, but beyond some wealth redistribution (from the rich, no less), that isn’t being done. However future inhabitants of Earth become fabulously wealthy, it may be incorrect to assume they will abuse their power over poorer, stupider people, or pull the plug on our frozen heads.
I’m going to stick out my neck. Eliezer wants everyone to live. Most people don’t.
People care about their own and their loved ones’ immediate survival. They discount heavily for long-term survival. And they don’t give a flying fuck about the life of strangers. They say “Death is bad,” but the social norm is not “Death is bad”; it’s “Saying ‘death is bad’ is good.”
If this is not true, then I don’t know how to explain why they dismiss cryonics out of hand with arguments about how death is not that bad that are clearly not their true rejection. The silliness heuristic explains believing it would fail, or that it’s a scam—not rejecting the principle. Status quo and naturalistic bias explain part of the rejection, but surely not the whole thing.
And it would explain why I was bewildered, thinking “Why would you want a sucker like me to live?” even though I know Eliezer truly values life.
If this is true, then I still don’t know how to explain it, do I? If bias isn’t enough to explain not being horrified at the lack of universal cryonics, so that you must resort to “they secretly don’t care”, then you still have to explain not being horrified by the deaths of their loved ones. Or rather, being visibly horrified, but not taking this option to prevent it. Why would bias be enough to explain this but not the latter?
And you have to explain how they got so good at lying, too.
I think the earlier post doesn’t interpret what people say charitably enough—what the LW jargon would describe as failure to steelman.
Someone who dismisses cryonics by saying that death isn’t bad probably doesn’t literally mean that death isn’t bad at all. What he is likely to mean is that death is not comparatively bad to cryonics. This can happen either because death isn’t that bad or because cryonics isn’t that good—in other words, it’s just another way to express believing that it would fail, believing that it’s a scam, etc. after all.
Well, I probably don’t evangelise as much as I should, with cryonics being such a low-hanging fruit and all, but I’ve still had conversations where people argued against living forever (that immortality isn’t as good as a mortal life on Earth) as a counterargument to cryonics, after I’d already explained that yes, it exists and has a decent chance of working. That was the case in at least the most recent conversation, still fresh in my memory (today, halfway through writing this comment).
Does that answer your question? I’m not sure if I parsed your comment correctly.
My feelings are mixed on this. The community seems pretty naive, but hopefully naive.
One scenario that seems likely to me is that the frozen folk are mostly considered too boring to revive. Even luminaries like Merkle—what could he offer? By the time technology is available to scan brains for historical information, such information will have already been scanned from the minds of still living, much smarter, people who are standing on the shoulders of those who stood on the shoulders of Merkle. The value of the historical information might result in a research grant for a data dump, which data is then forensically analyzed in a lab using modern techniques (but not revival).
Or in another scenario, the revival takes place, but only in a lab, and only during work hours, and only in between the other research projects.
Or in another scenario, the revival is not a single event, but rather the mind of the individual is replicated and placed into a million schools as a read-only appliance that sits in a corner of an AV room on a cart. Once a year, the cart is wheeled into a fifth grade classroom and children get to probe it with a few questions from the standard rubric, then it gets wheeled back into its place. Or if you prefer, instead of a physical cart, it could be on a web service. Your brain, accessed via a REST architecture, but why do they keep asking only the same 10 questions?
In another scenario, an Islamic government has arrived, and it decides you need to be punished for the sins of western civilization.
In another scenario, the entities who wake you up are not interested in learning anything from you other than your darkest, most embarrassing personal secrets. These become your new public identity, and your former accomplishments are forgotten. As a result, when you are revived, your only opportunity for participation in life is as the butt of many jokes. You’re still alive, and you are thankful for that, but your second life is a big step down.
In another scenario, the reanimation / revival process is more painful than anything that can be imagined.
It seems to me the range of potential bad scenarios is wide, and this is rarely mentioned by the freezer-folk. The better scenarios, ranging from something like Woody Allen’s “Bananas” as a lower bound and getting better from there, seem unlikely.
I’m going to stick out my neck. Eliezer wants everyone to live. Most people don’t.
People care about their and their loved ones’ immediate survival. They discount heavily for long-term survival. And they don’t give a flying fuck about the life of strangers.
Exactly.
People die.
Maybe that sucks, but we live with it, and we don’t really care. If we really cared about saving the lives of strangers, we’d be donating to Population Services International instead of buying high definition television sets.
facepalm Not this again.
Yup, because all humans are perfectly unbiased, rational, and liars. (I was just gonna put in a satirical counterexample, but honestly I can’t be bothered, we’re on LessWrong for godsakes.)
Hopefully,
I didn’t intend to attack the self-interested case, and I’ll say clearly that people who would spend thousands of dollars on medical care to extend their lives from 70 to 75 for self-interested reasons, would almost all do better to sign up.
An equilibrium where we paid $50k less for end-of-life medical care that might provide six months of life in pain, and put the money toward cryonics instead, would be greatly superior to the current arrangement. I would be happy to buy a health/life insurance policy that cut back on end-of-life care in favor of cryonics, and this would also reduce the financial incentive for heirs to oppose cryonics.
In Terry Pratchett’s Discworld novels, there’s an ongoing subplot about the golems, which are immortal beings controlled by the words in their head. Through some chain of events, one golem ends up having the words removed from his head and becomes an autonomous individual. He then sets up a Golem Foundation that works to locate or purchase golems and liberate them.
Thus, I see singularity mailing lists and OB as, in part, efforts to replace the racial fellow-feeling of the golems with an artificial sense of community. When Eliezer says he wants you to live, it’s important.
Dennis (and others) argue: the chance that anyone would try to restore the average frozen body is close to zero.
I keep seeing variations of this argument, and it strikes me over and over as a ridiculous argument because it utilizes a number of faulty premises in creating its conclusion: mainly, that an economic or utilitarian model is or will be the driving force behind such a choice; and more deeply flawed, that humans’ choices are or will be some set of monolithic guidelines to which all and sundry do or will subscribe.
That is, the claim arises, “I wouldn’t bring back a bunch of people from the Middle Ages, because I don’t see the value in it,” and to make the argument the speaker assumes, “therefore no one would do so or see value in it.” This ignores both the logical error and the fact that there are people today, with such funding, who would like to bring a mammoth back to life, or a Bronze Age man frozen in a glacier (even if, today, only through some process of cloning), entirely despite the absolute lack of economic or utilitarian need to do so, and despite the naysayer’s own views of the value of such an attempt.
The frank truth is that the world, its governments, its scientists, its researchers, and so on will not act as one, and any argument founded in any way upon the belief that they will is a poor, thoughtless, narrow one. The world of the future, like the world of today, will have plenty of individuals who will be fascinated by or even driven to bring back the (frozen) dead for one reason or another.
Thus the claim that no one will ever want to bring back the (frozen) dead is about as much nonsense as the claim that “It is inconceivable that anyone would ever want to give birth to a cloned human baby, much less put any amount of research into such a possibility” or “It is inconceivable that anyone would want to eat or grow cloned meat or genetically altered grains.” A claim clearly disproven by the fact that some groups clearly would like to do this and have been researching the possibility (or even claiming success in such an enterprise) right now today in the face of years of previous skepticism.
So the truth is that SOME people won’t, but SOME people will. And some of those who wish to, will have, find, or create the means to do so.
I’m a member of Alcor. When I was looking into whether to sign up for Alcor or CI, I was comforted by Alcor’s very open communication of financial status, internal research status, legal conflicts, and easy access via phone, etc. They struck me as being a highly transparent organization.
Someone from the 1800s would suffer severe culture shock if he or she were revived today. Just think of what they’d have to deal with, from their perspective:
But cryonics isn’t about bringing back random people from the past whether they like it or not. Cryonics is about bringing back people who have explicitly consented to being brought back in the future, and who are prepared to awaken in a radically altered world.
@Sigivald: You’re right. A sufficiently severe economic downturn will kill Alcor and CI dead, along with all those currently in cryostasis. Economic/political/infrastructure instability is the biggest “existential risk” for cryonicists, but nobody can be arsed to prepare contingency plans for it because, I guess, it doesn’t have the sexy science-fictiony cachet of asteroid hits or grey goo.
Reverse absurdity bias anybody?
Disclaimer: I am signed up.
Intrepid cryonauts are making a few assumptions, which may or may not be reasonable.
- That in the future it will be possible to revive or otherwise reconstruct themselves via some as yet unspecified [magic happens here] technology.
- That they will be restored to that status of some type of functioning being, either biological or perhaps simulated, or a combination of the two (cyborgian).
- That intelligent entities living in the future will want to restore the remains of creatures who lived in the distant past. These entities may not share the same views as the cryonauts, and may wish to use the remains for some other purpose than the cryonaut originally intended.
- That the restored cryonaut will be able to live some kind of meaningful existence. This depends upon what you regard as “a life worth living”. Future entities may only be interested in the cryonaut’s remains as a museum curiosity, rather as ancient Egyptian mummies are regarded today, or simply as raw material for some other purpose. Incidentally, I think there are some parallels between the beliefs of the ancient Egyptians and those of modern cryonics proponents.
Overall, cryonauts are making a lot of faith based assumptions about the beliefs and motivations of entities living in the possibly quite distant future, typically assuming them to be similar to contemporaneous belief systems.
If you want to securely erase a hard drive, it’s not as easy as writing it over with zeroes. Sure, an “erased” hard drive like this won’t boot up your computer if you just plug it in again. But if the drive falls into the hands of a specialist with a scanning tunneling microscope, they can tell the difference between “this was a 0, overwritten by a 0” and “this was a 1, overwritten by a 0”.
As far as I know this has never been confirmed. See http://www.nber.org/sys-admin/overwritten-data-gutmann.html for more details.
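For illustration, the zero-overwrite described above can be sketched in a few lines. This is a toy, not a real secure-erase tool: it assumes an ordinary file path rather than a raw device, and it does nothing about SSD wear-leveling or filesystem metadata.

```python
import os

def overwrite_zeros(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place with zero bytes.

    Illustrative only: the analogy to the hard-drive case holds just for
    the overwrite step, not for anything a microscope-wielding specialist
    or a modern drive controller might do.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)  # one pass of zeroes over the old data
            f.flush()
            os.fsync(f.fileno())     # push the pass to the storage device
```

Real tools such as GNU `shred` layer multiple overwrite passes on top of this basic loop, though, as the linked rebuttal notes, whether extra passes buy anything on modern drives is disputed.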
After some investigation on the availability and costs associated with cryonics, I’ve been wondering at the apparent lack of discussion and work towards thawing.
The way I see it, there are a lot of questions related to thawing, not all of which depend on waiting for technological improvements, and which, depending on how they are answered, can make the whole process not really worthwhile. Sure, maybe I can put all the force of the law on my side to make sure I get frozen and preserved, but what can I do to make sure that un-freezing is performed on terms that I find favorable?
edited: late night brain is broked
Personally, I don’t think I’m going to be “un-frozen” (since I will probably only go for Neuro-Preservation, instead of Whole-Body Preservation), but rather at some point, the ability to retrieve my pattern will exist, and at that point… I will have to decide what to do with my frozen head. That part still troubles me, but only for sentimental reasons or attachments I have to my body (or parts of it).
Now, I am certain that there are quite a few Whole-Body-Preservation clients of Alcor who are expecting to wake up in the same state in which they were frozen. For those people, yeah, we are going to need to get busy. I think, though, that they might not have understood what they were signing up for if they expected the same body, considering the damage that freezing does in the first place (much less now than when the process began).
I used “un-frozen” loosely. I’m not that concerned with the actual technology, but with the other circumstances.
For example, I make the (I think fairly reasonable) assumption that whatever the process is going to be, it will likely be more expensive than, for example, growing a new human being from scratch. So, whoever will do the restoration will be looking to profit in some non-trivial way from it.
One thing that I am fairly certain my frozen head won’t be able to do, is influence decisions around who is allowed to restore it, when, and at what tradeoffs.
You assume that your pattern will somehow be pulled out to somewhere—would you be ok if it’s an irreversible process (say, your head has to be destroyed)? What if whatever you’re expected to do in payment for being restored is something you’d rather not do? Etc...
To me, the expense of awakening the frozen seems to be evidence that the only futures who would revive them are ones acting from moral duty or historical curiosity, not ones that are looking to profit from them. If you’re just looking for labor or profit, it seems there would be much cheaper alternatives at that level of technology.
I accept that there’s obviously a long tail of possible futures, but it looks clear to me that the probability of being revived given a really positive Singularity is much greater than that of being revived otherwise (especially given my estimate that lack of a positive Singularity implies high odds of total extinction of humanity and human values). My priors could be way off and cryonics would still be a good bet on balance.
I don’t trust my ability to predict what kind of profit there is to be made in reviving corpsicles. Whether it’s a simple thank you, or 30 years dedicated to writing history, or freeing up of electricity for some other uses, or experimenting with un-freezing techniques before trying them on more important people—I don’t think it really matters.
There are some obvious, and some non-obvious, ways to hedge this bet, though, and I’m surprised at how little discussion and effort has gone into them. The cost of freezing, at the minimum, might contain funds to pay for the unfreezing, too....
I don’t see how that follows. Alcor and CI are nonprofits, and you don’t know how cheap revival will be / how rich the world will be.
I’m a casual observer who came across this advocation of cryonics—I have no objections to the idea and it interests me on a theoretical plane.
The general impression I receive is that the promotion of cryonics rests on quite a simple and effective argument:
“Cryonics offers a non-zero possibility that you may be able to continue your existence beyond your first death, the choice of which during your first life has minimal cost or even possible benefits to yourself.”
Interestingly enough, and I quite happily mention this in good faith despite the inevitable flame risks of mentioning un-kosher topics in public (though that sort of blanket disregarding would be regrettable), this bears intriguing parallels with one suggested argument of a more metaphysical nature: substitute “belief in a higher power” for “cryonics”.
I think the other respective elements of the sentence tally. Given the lack of a strict disproof of such deities, the only part where I can see a possible challenge is the second half; but given that such belief has immediate wellbeing benefits to some people during their lifetimes, I don’t see such challenges standing.
Anyway, the point I am making is that given its superficial similarity to investing in an afterlife, even though I am quite friendly to the idea of cryonics (it costs little and, if real, has gigantic benefits to oneself), I can see why it is a hard sell: not because people think you are a religion, but because it involves the same concepts of investment in an uncertain outcome.
Most religions I know of don’t have an “obliterate your immortal soul” option. You live eternally, pleasantly or no, whatever you do. So belief in a higher power isn’t analogous to cryonics in that way, because if cryonics works and there is no afterlife, it’s existing v. not existing, rather than heaven v. hell or cow reincarnation v. dung beetle reincarnation. Granted, fussing with the stakes like that may be a point in religion’s favor, but at that point you’ve just gotten a warmed-over Pascal’s Wager.
True, in the positive/negative dichotomy it is a Pascal’s Wager.
Probably what makes the sell harder for cryonics is that it promises not an infinitely good future but merely one of uncertain quality, though one that it is possible to hypothesise about based on well-discussed inferences from the very fact you were woken up.
As things stand right now, I have to admit it’s hard to see where you would get a big jump in uptake. It seems conceded that the science is a very long way off, so the probability of it working will not appear to rise for a very long time, and for the same reason the impression of a future world where it does work will probably remain roughly constant. As these two factors seem to be the biggest inputs to the decision, they seem too inert for cryonics’ liking.
Any ideas of a “game changer” that would persuade people that it were as natural a thing to plan as making a will, buying life insurance or having a donor card?
A corollary to this is the fact that some people who are clearly considered alive by the medical system might be good candidates for cryonics: Alzheimer’s and mad-cow type diseases destroy the most important brain patterns, and at some point a rational person would take a chance on cryonics.
I read somewhere that on average something like 80% of an individual’s medical expenses are spent in the last year of life. Clearly much of this is futile. Imagine if this money were used to actually put scientific resources into cryonics. Really, really sad.
I have a standard answer for cryonics advocates: ask me in 10 years.
In 10 years, I’ll be 32, and if all goes well I’ll have my life together, I’ll be able to point to a few successes, and I’ll be able to say that my life isn’t a waste. If I like being alive at 32, I’ll probably like being alive hundreds of years from now. On the other hand, if I’m 32 and everything has gone wrong, and I’m down and out, and I wake up every morning wishing I hadn’t, then I’m probably not going to want to live one more year, let alone hundreds.
In the meantime, I really don’t know. I’m in limbo. Sometimes I want to be around to see what happens next, sometimes I really don’t. Sometimes I’m crazy excited about planning for my future and how great it’s going to be, and sometimes I feel certain that I’ll never make it, and all I want is to have never been born, and anyone who thinks well of me must either be lying or must be a loser himself.
So… call me in ten years.
Perhaps happy you and sad you should be considered separate entities. Sad you wants to die, happy you wants to live. So your goal should be to kill sad you without killing happy you. Antidepressants maybe?
From my perspective, I think you’ve set too high a bar for yourself. I’m 51, with no very notable successes. I find just the ability to enjoy a sunset and a good meal sufficient reason to want to go on living, and that was sufficient to motivate me to join Alcor. (Now the odds of being successfully revived are quite another matter, and subject to much disagreement.)
Well?
It’s still a few months, though I am curious about the answer.
Modulo the specific numbers 10 and 32, this is exactly how I feel about it. I don’t think I’ve ever seen anyone express this point of view before!
That said, if signing up for cryonics were easy (as in a 5-minute form online, say), there’s a good chance I would have signed up already during a “good” moment.
So it’s been 10 years. How are you feeling about cryonics now?
Surely one can expect the long-term future to be different enough that one’s quality of life 10 years from now isn’t a particularly trustworthy estimate?
It isn’t the future I’m worried about, it’s me. If I don’t like me then I’m not looking forward to any future; if I like me, then I want to stick around and see how the future turns out (except perhaps the most horrible possible futures.)
The future will change you as well.
What if your take on life remains just as variable as it is now?
then it probably isn’t worth the money.
So, in that case, what would be an appropriate use of that money?
It’s been ten years. How are you enjoying life?
For what it’s worth, I value you even though you’re a stranger and even if your life is still going poorly. I often hear people saying how much better their life got after 30, after 40, after 50. Imagine how much larger the effect could be after cryosuspension!
In 10 years, I’ll be 32 as well. My main reason for trying not to procrastinate is that I know I’d be kicking myself (metaphorically) if I died when I was 31 due to some stupid accident.
I’m in the process now of trying to figure out how to spend my first few decades in a way that will be most conducive to making the future an even better place to live.
For me, I really can’t see the downside to signing up. Life insurance is something most people sign up for anyway, and the additional ~$120 a year for cryonics is pocket change. I mean, come on, people; that’s like the cost of Netflix!
Life rocks and I want to go on living for as long as I want. If I get bored in 2 million years, I’ll reserve the option to check myself out. Or, more likely, I’ll just change what I’m doing.
As far as I’m concerned, if it only costs twice as much as an Xbox Live subscription and might (with varying degrees of hopefulness; I’m at the high end) procure my immortality, then worst case scenario it’s money well wasted.
So, OK, I’ll bite: can anyone point me at a reasonable legal/economic analysis of why I should trust an existing corporation (and its various descendants) to continue preserving my frozen brain for long enough to be revived?
Despite the context, I don’t mean this to exclusively apply to cryonics… I have the same question about why I should expect a cemetery not to dump my body somewhere and re-sell my plot to someone else once N years pass without anyone visiting my grave, as I do about why I should expect a cryonics corporation not to dump my skull and re-sell the storage space.
The board managing Alcor’s trust fund is deliberately made up of people who have relatives or significant others in cryopreservation. It’s structured so that the people in charge have incentives exactly against doing this.
That’s pretty cool. Thanks.
I would imagine that the people involved with cemeteries care more, but I’d also think it’d be pretty hard to dig up even one grave without the one relative who does visit a neighboring grave noticing. But maybe it is pretty common, especially over centuries or longer.
It is in fact very common, at least in some parts of the world. Once the body has decayed to the point that the cemetery’s management doesn’t feel awkward digging them up and that any living relatives have stopped paying attention there’s really nothing stopping them from re-using the plot.
This is an awesome post. May I take parts of it, add some background info, and share it with a discussion group I’m in? I want to introduce them to cryonics and anti-deathist ideas, and this piece conveys the spirit of it well.
If I wanted to share a public blog post with a discussion group, I wouldn’t feel the need to ask permission first!
You might want to email Eliezer, you can find his contact info on yudkowsky.net, though in general I think it’s a safe bet that Eliezer is pro-anti-death-propaganda.
retracted
You make no sense whatsoever.
Yeah, life can be boring and dull sometimes… but your life in 100 years may be as different from yours today, as that of the average 18th Century peasant is from your blessed existence nowadays.
Life can be fun, and any future that has the technology to revive you would presumably offer you plenty of fun; they would also be able to rid you of any of your personal and psychological shortcomings, if you wish. In the meantime, go and smarten up with some positive psychology. It doesn’t help everyone with everything, but it’s as good as it gets for now.
This comment may be like fingernails down a blackboard to many people here, but I’m going to say it anyway:
I think that, for almost all values of “you”, you do more good by having children and bringing them up well than by expending resources to preserve and finally resurrect yourself. There’s no reason why a generation of resurrected corpses should be in any way ‘superior’ to the generation of regular or enhanced human beings who would otherwise be eating their food and breathing their air. Likewise, if you’ve had profound ideas during your life, and you write them down, then others can pick them up later, with minds just as sharp and fresh as yours ever was.
As you imagine the human/posthuman population in the forthcoming centuries, why should it make the slightest difference whether any of those individuals have memories of being you?
It “feels” like cryonics gives you hope, but I think this “feeling” is rationally indefensible.
I will concede one small argument in favour of cryonics, which is that future generations will find those ancient corpses very interesting (especially if they can reanimate them, but perhaps even if they can’t). However, I don’t believe this is good enough reason for the ‘average Joe’ to freeze themselves. A billion frozen “average Joes” cost a thousand times as much to maintain as just a million, but are much less than a thousand times as interesting (especially if that million contains all of the “non-average Joes”.)
Because I don’t want to die.
The sorts of people to whom cryonics (and large-scale longevity research in general) seem compelling are, I think, those who consider individual lives valuable in and of themselves, rather than valuable as a means to an end.
Talking about whether that’s rationally defensible turns very quickly into a discussion about what the rational values to have are, which is kind of a dead end.
Also, minor note: freezing a billion people should cost significantly less than a billion times as much as freezing one person.
If you care about having children, why not do both?
That’s not nearly true.
I suspect that the act of having children causes most people to care less about their own wellbeing, and more about the wellbeing of their children.
Thus, I intuitively find it a good idea for anyone reading this to sign up for cryonics before they have children (conditional on their already being interested in signing up), in case their desire to be cryopreserved dwindles after they have children of their own.
I think the resources spent on cryonically preserving yourself would be better spent on other things. Yes, one of those things is “having children and raising them properly”, but it seems likely that one would soon see diminishing returns if one ‘overinvested’ in one’s children. Happily, there are plenty of other worthy causes to take up the slack e.g. donating money to charity, building a garden shed, putting extra insulation in your loft, etc.
Actually, I think cryonically preserving yourself has negative value unless there’s some way in which you will be ‘interesting’ or ‘useful’ to future generations. For some people this will be true, but not very many.
I think that if people could somehow ‘stand back’ from their own biased perspective on the world, they would realize that there isn’t anything preferable about “one of the people in the far future having distant memories of being me” as opposed to “no-one in the far future having such memories.”
Aye. I wasn’t thinking when I wrote that bit, so as you can see, I’ve edited my original comment.
Not negative value to me. I refuse to pretend I am not self interested.
I don’t see why this is a good idea. Why would anyone want to abandon their own perspective?
Abandoning one’s perspective isn’t the same thing as abandoning one’s values (as I interpret the words). For example, in the default human perspective, it’s natural to be indifferent to other people’s experiences: we don’t care if others are suffering, as long as we’re doing okay. But from a more global perspective, this is kind of bizarre: the experience is still there even if it’s not you who’s experiencing it, for the same reason the world doesn’t actually go dark when you close your eyes.
This sounds like a value, though. I interpreted your comment as saying “if people stood back from their perspectives, they would see their values aren’t preferable”.
I agree.
Note that I am not AlephNeil; I assumed your question wasn’t necessarily directed at him specifically.
Foiled by the anti-kibitzer!
You might want to improve your perspective-providing heuristics (you’ll do so from your own perspective, inevitably, although that’s not a reason to approve of this necessity, and could be a reason to do something else instead of attempting to improve the heuristics with the imprecise instrument of those heuristics).
I appreciate this is an old comment, but you seem to be neglecting the fact that death is bad.
Wouldn’t that exact same argument apply to having kids and replacing “is descended from” instead of “having distant memories of”?
Or even replacing “the far distant future” with “now” and “distant memories” being “currently remembered”?
That might be true for some people, but most people have a ‘part to play’: If you ‘delete’ such and such a person then you typically cause ‘damage’ to their social circle and perhaps even the wider world—grief, loss of valuable skills, unique stories, insights etc. which may have considerable local value, and thereby help to keep the human race going, even if they’re subsequently forgotten.
However, if you look far enough into the future, any given person alive today (with a few exceptions, like a great artist or scientist) becomes ‘obsolete’ - all of the things in the here and now that make them ‘valuable’ gradually disappear. At best they become an interesting relic of a bygone age.
Therefore, if I let go of the idea that there’s something intrinsically valuable about the psychological continuity between my dying self and my post-cryonic, reanimated self, then cryonically preserving myself doesn’t seem to have any purpose.
(ETA: It’s not that “death is good”, but that what makes death bad has already happened by the time your frozen corpse is reanimated.)
Isn’t arguing about having a ‘part to play’ inconsistent with talk of intrinsic value? Or are you arguing some form of expiration-date-labeled intrinsic value?
It seems that any loss of intrinsic value is a decay, not an immediate severance. One might make the argument that your continued self was exactly as valuable until the day your last friend or family member dies, for surely your loss caused damage to them and the wider world. So would you be willing to cryopreserve yourself under the stipulation that you are thawed and buried with your last living relative who personally knew you, but brought back before then if possible? Of course, arguing about likelihood is a different matter.
I think that, for all values of “you”, your first option terminates “you” and your second option continues “you”. If you are concerned at all about “you”, the second option is clearly superior.
How can those who opt for cryonics be considered selfish? Wanting to live, or more accurately to keep living, is a basic desire in most animals. Let’s say that Alice attempted to kill Bob, to delay death, but was instead killed by Bob. Who holds the moral high ground? Is it Alice, who wanted to live a day, and another day, and another day more? Or is it Bob, whose life was bound to be happy and productive? What justifies killing another completely sentient being in order to keep living?
There are, though, a few blocks...
For one, I’m not financially independent, and my parents so happen to be Catholic-ish, so they think my dreams of immortality are foolishness of young age, and that cryonics wouldn’t work because of “souls”, whatever they may mean by that.
Also, I happen to live in the southeastern corner of Brazil. I’m… not positively sure that Alcor can reach me, let alone the CI.
I cannot, also, just quit college and teleport to the US and hope for the best. And I will, obviously, sign up as soon as I have the ability to do so and move to the US, and hope that I’m not hit by a car in the meantime.
Still, it’s not exactly a dream I can achieve right now. Sadly.
Okay, I’m convinced (actually, I was convinced when I first looked into cryonics but I irrationally put off signing up due to the minor inconvenience). However, I don’t really know how to best go about signing up, and a google search didn’t really clear it up. I am a 17-year-old male living in Canada, could anyone tell me what the best way would be for me to go about signing up for cryonics? It seems that the cryonics sign-up forms are targeted towards people over 20.
I’m still having trouble trying to assign value to cryogenics. Mostly because I’m having trouble assessing the odds of it working.
Let’s say, for the sake of argument, that signing up for Alcor costs $500/year for the next 40 years. That comes to $20,000 (more than that if I invest some of the money earlier instead, but let’s skip that for the sake of argument). It does seem that there’s a non-zero chance that cryonics could save my life. On the other hand, I’m having trouble weighing that against the chance that that same $20,000, 40 years from now, might otherwise let me purchase, let’s say, the first generation of longevity treatments, the “first bridge” as it’s sometimes called, which will let me live long enough to make it to the second generation of longevity treatments, etc.
I’m not even sure I have enough information to sensibly compare those two probabilities at this point.
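For what it’s worth, the opportunity-cost side of this comparison is easy to make concrete. A minimal sketch, assuming a hypothetical 5% real annual return (the rate and horizon are illustrative placeholders, not a recommendation):

```python
# Hypothetical sketch: compare paying $500/year in cryonics fees for 40 years
# against investing the same $500/year at an assumed 5% real return.
rate = 0.05
annual = 500
years = 40

# The nominal total from the comment above: 40 payments of $500.
nominal_total = annual * years  # 20000

# Future value of an annuity: each year's $500 compounds until year 40.
future_value = sum(annual * (1 + rate) ** (years - 1 - t) for t in range(years))

print(nominal_total)        # 20000
print(round(future_value))  # roughly $60,000 under these assumptions
```

So the figure being weighed against future longevity treatments is closer to three times the nominal fee total, which cuts in favor of the commenter’s caveat rather than against it.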
Jeff’s spreadsheet might help you evaluate the odds. (Although see the discussion of this approach.)
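The spreadsheet approach amounts to multiplying independent probability estimates for each step that has to go right. A minimal sketch with made-up numbers (every figure below is a hypothetical placeholder, not anyone’s actual estimate):

```python
# Hedged sketch: the probability that cryonics pays off is (under an
# independence assumption) the product of per-step probabilities.
# All numbers below are hypothetical placeholders for illustration only.
steps = {
    "preserved in time after legal death": 0.5,
    "organization survives until revival is possible": 0.4,
    "preservation captures the relevant information": 0.3,
    "revival technology is developed and applied": 0.2,
}

p_success = 1.0
for step, p in steps.items():
    p_success *= p

print(f"Combined probability: {p_success:.3f}")  # product of the four estimates
```

The discussion linked above is largely about whether these steps really are independent, and about how quickly a long product of middling probabilities collapses toward zero.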
Um… there are lots of other reasons why someone might not sign up for cryonics. For example, my reason. And when I explained my reason to you (Eliezer) in person, you didn’t seem to have any strong rebuttal to it, though you did give a personal reason (that didn’t reply to me) for why you are signed up for cryonics.
Also, as an aside, of the 10-ish people I’ve spoken to about cryonics for more than 5 minutes, 2 of them turned out to have the same reason as I did for not signing up. (They were skeptics/atheists and accepted that cryonics might be feasible in the future, just not confidently-enough desirable.)
Well, that was damn good. All the stuff at the beginning kinda hides the fact that what comes next is really important, though. A bit of a slow start—when I link people to this, I’m worried that they might be turned away by it.
OK, actual comment now …
I appreciate this kind of takes away from the point, but time travel is a possible future technology that might one day be developed :P
1) I think it’s important to keep in mind that consciousness might not be determined at the neuronal level. It might be determined at the atomic or subatomic level. It’s encouraging that there are cases of people losing consciousness for various reasons (http://www.alcor.org/sciencefaq.htm) who regain consciousness, but we don’t know how cryogenic freezing or death changes the equation.
2) We don’t know what happens when we die, and how to value that. For example, personally, I rarely remember my dreams. If I didn’t know better, I’d think that I just go unconscious every night, and wake up conscious. But, it turns out that every night I experience dreams. Which means that… in a way… I have multiple conscious experiences every night—I just don’t remember them. How do we know that dying isn’t like this? On a very basic and fundamental level, we don’t understand what happens to make us conscious. For this reason, dying seems to me like a big question mark in the equation of calculating expected utility values.
My opinion is that death being a question mark gets an expected utility value of zero (could be good, bad, nothingness… idk), and that cryonics get a slight/moderate positive expected utility value. But I still feel very uneasy about all of this. I feel like I’m making a decision about something with tremendous importance (eternity), and have frighteningly little information about it (death, efficacy of cryonics). Other people seem to be relatively comfortable just choosing cryonics and not thinking twice, and I don’t know why this is.
On #1: smart people have looked for atomic or subatomic events that could make our consciousness non-classical. They’ve found none. As far as I’m concerned, this is a dead hypothesis. But of course others may disagree.
On #2: not to put too fine a point on it, but all the evidence says death acts just like damaging part of the brain—it stops working—only applied to the whole brain. The thought of some consciousness persisting in the absence of all known function is rather horrifying, now that I come to think of it that way! Happily, we have no reason at all to think it’s true.
The extrapolation “present is better than Middle Ages” → “future will be better than present” is rather shaky. Compare a paleolithic hunter, a neolithic farmer, a medieval serf, and an early-industrial factory worker, and you will get the impression that progress makes life worse. It may be that we just live in a short golden age between the Malthusian past and a Hansonian future.
I wouldn’t let one person’s powerful writing alone decide such a thing, but given the compassion you show for everyone’s lives, I suppose the least I can do to repay that would be to learn more about it. I don’t happen to care much about what happens after the lights first go out, and extending the care for my life beyond my own death isn’t really a part of my reasoning. I’ve never been religious or thought of an afterlife, so I don’t even have that practice. The dead don’t care.
I think cryonics is very promising but the process of bringing people back from frozen state will need a lot of research and practice.
I would like to volunteer to go in as a research subject if someone else will pay and if any data mined from my remains is released as open source historical data under some reasonable license, for example the Perl Artistic License, with myself listed as the author of the raw recovered data. (I wrote it into my memories, no?)
People could then use the mined data, such as it is, for research on personality reconstruction or any other ethical purpose. I would be quite surprised to find my mind reconstructed with continuity of identity, and perhaps quite pleased, but that’s not at all necessary; I believe the Universe will keep the reference copy, if any, of my key information in distributed form, so I’m happy to make myself available for practice material for future entities (more likely than not Friendly AI type people) who wish to practice on volunteers who are indifferent to any mistakes in the attempted reconstruction process.
I do think it would behoove the cryonics community to find volunteers such as myself willing to undergo this sort of experimentation. If I had the money to invest in freezing myself with an eye to later reconstruction, I would certainly think it a good investment to help pay the cryonics cost for a volunteer willing to be the practice dummy for aspiring future Revivalists.
Are any of the cryonics enthusiasts here aware of a call for volunteers from any cryonics institute or group? A cursory search did not lead me to anywhere to sign up for such a program.
This is a serious request and offer, I would be quite happy to be frozen and datamined, primarily for the benefit of future historians and scientists but also would be very pleased if I could in some way help the people who are hoping to be revived with intact minds someday.
I would request that any personality constructed or reconstructed from my data be offered control of a mercy switch that could turn off whatever process is emulating its consciousness.
Thanks for bringing this up! I never knew this stuff existed. I’m expecting that a better version of this comes along at some point, where they can store computerized copies of our brains in an AI-like form and then put us into a new body, bionic or organic, some long time in the future. More or less immortality. Not an expert, but I can see it developing at some point.
I think the general idea here is that you stay in cryosuspension until the better version comes along, because most of the people on this forum don’t expect the better version to arrive during their lifetimes. (I do, but I’m a teenager, which means I both rationally expect to live to a later year than the average adult does, and irrationally underestimate mortality risks as they apply to me.)
Insurance is a way to avoid catastrophic losses. It is not a way to reduce costs. On the average, an insurance company’s customer will pay more in premiums than the amount paid out by the policy. If $X is too much money, $X is too much money even if paid by insurance.
If you’re paying for more insurance than you need, and it’s enough more to pay for $X worth of cryonics, it is also enough more to pay for $X of something else. Money is not free just because it comes out of waste; there is still the opportunity cost of not being able to use it for something else once you stop wasting it.
Hard drives don’t decay, not in the time period covered by the analogy. All that is erased is what you specifically erase. A proper analogy to what happens to the brain after death would be some process that affects all parts of the hard drive whether someone specifically chose them or not. Thermite is actually a pretty good one—death is a lot more like erasing a drive using thermite than erasing it by overwriting it with 0s and 1s.
I also see no reason why future technologies will be able to recover a drive overwritten with 0s and 1s. Erasure and recovery are asymmetrical; you can’t improve the erasure method and always be able to make up for that by improving the recovery method. If it’s really erased, it’s really erased.
Notice something all these statements do? They imply that probabilities are irrelevant. You just need to have hope in the future—any finite quantity of hope will do, it just has to be a little. You just need to value human life; the probability of getting that value doesn’t matter. For all that proponents of cryonics claim they are not actually advocating Pascal’s mugging, suggesting that people should buy cryonics on the grounds that it has some chance of letting you live—and that the size of that chance doesn’t matter—is a recipe for Pascal’s mugging.
Just dying isn’t much like erasing a drive with thermite. Damage from ischemia takes time. It’s not like your brain instantly turns into pudding the minute the nearest doctor says “time of death”. Now, dying and then rotting in the ground somewhere for 50 years is a lot more like erasing a drive using thermite than overwriting it. That’s the point of cryonics.
Edit RE insurance:
Of course this is all true. However in the case of life insurance it is also a way to offload the expense to your future self, who presumably has more income than you. If I had to pay the whole thing upfront it would be certainly impossible for me to get cryonics at my current age.
Actually, now that I think about it, it is potentially not true that you would pay more in premiums than the payout, since insurance companies can make a profit on people who let their insurance lapse before dying (which is apparently quite frequent in life insurance). Picking two random life insurance companies’ websites, it looks like a healthy person of my age could pay as little as 75% of the payout in premiums, assuming a life expectancy of 70 years.
Insurance is priced so that if you buy insurance for some period of time, the likelihood of dying (and thus the payout) during that time is balanced by the premiums. This applies just as much to people who let their insurance lapse as to people who intentionally buy insurance for only a limited period.
Note that someone who lets his insurance lapse will not only not get a payout, he will also not be paying premiums after he lapses. Since the post-lapse premiums are balanced against the post-lapse payout, and the no-lapse premiums are balanced against the no-lapse payout, you can subtract the two scenarios and conclude that premiums-with-lapsing are balanced against no-payout-after-lapsing.
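That subtraction argument can be made concrete with a toy actuarial model (all numbers hypothetical; real pricing adds expense loading, interest, and underwriting):

```python
# Toy model (all numbers hypothetical): the annual premium is set so that
# each year's expected payout exactly balances that year's premium.
p_die_per_year = 0.01   # chance of dying in a given year
payout = 100_000        # policy payout
fair_premium = p_die_per_year * payout  # actuarially fair annual premium

def expected_net(years_held):
    # Expected payout minus expected premiums over the years actually held.
    # Both terms carry the same survival factor (1 - p)^t, so each year's
    # premium cancels that year's expected payout.
    expected_payout = sum(p_die_per_year * (1 - p_die_per_year) ** t * payout
                          for t in range(years_held))
    expected_premiums = sum((1 - p_die_per_year) ** t * fair_premium
                            for t in range(years_held))
    return expected_payout - expected_premiums

print(expected_net(30))  # ~0: fair pricing balances out over the full term
print(expected_net(10))  # ~0: lapsing after 10 years also nets ~0 in expectation
```

The point of the sketch is that the cancellation happens year by year, so truncating the policy at any point (lapsing) leaves the remaining balance unchanged, exactly as the comment above argues.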
Wait, you’re using whole life? Buy a 30-year term policy (get ~5-10x the coverage) and invest the difference in premiums. You’ll be way better off. Or get a guaranteed no-lapse universal life policy. Whole life is never the answer.
They also invest the money for profit, which allows them to take in less than they expect to pay.
Yes, but you might fall into that category as well—future is uncertain. The expected value of the payout is less than the sum of premiums (after proper time discounting).
Eliezer, thank you for writing a beautiful post. I do hope that the people of the future value my life more than the people of the present, and the fact that there are at least two people in the present who do (Eliezer and my mom ;-) ) is heartening.
I am quite convinced about cryonics in general, but I am not convinced at all that paying up right now for CI or Alcor is a smart investment. What’s the downside of just setting aside enough money for cryopreservation and choosing the best option when death looms?
Consider:
I am 27. If I die suddenly (without regaining consciousness even for a day) in the next decade it’s likely that I would die in a fashion (shot in the head, car crash) that won’t leave much of my brain to be preserved.
The chances that I’ll be in the US when I die are very far from certain (I’m a foreign citizen living in NYC currently).
If I decide a decade from now that I don’t want to cryopreserve, the fees would have been money wasted. I can’t force me-in-10-years into a decision.
Judging by the progress of modern medicine (advances in cancer treatment) and my family history (pretty good from a cardiovascular standpoint) it is very likely that my ticket out will be Alzheimer’s or another neurodegenerative disease. In that case, cryopreservation will only make sense if I commit suicide at the very onset of the disease and am frozen right away which may not be possible. If I get Alzheimer’s I may as well donate all my money to SIAI or Africa.
If the future is going to move in the direction we are hoping to, it’s not unlikely that there would be more companies offering cryopreservation with better deals (e.g. lower fees, global coverage, eternal investment trust management).
Basically, what is the upside of signing up for one specific company and paying the fees vs. knowing that I have made the decision to spend the money on cryopreservation instead of life-prolonging treatment and trusting my future cancer-diagnosed self to be brave enough to keep it?
Consider two possibilities:
1) Alzheimer’s breaks long-distance communication more than it does the actual information, such as memories. Cf. moments of lucidity. It’s not clear how true this is, though.
2) It may in fact be possible to undergo controlled legal death at the onset of the disease in some number of years. See the Oregon laws, which are likely to start to be passed elsewhere. See also http://www.evidencebasedcryonics.org/2012/05/09/revisiting-donaldson/
I want you to live too :)
If you buy life insurance, then that isn’t wasted if you change your mind; you can potentially cash out.
I was thinking specifically of the fees and not the life insurance. The Alcor fees are high enough that they’ll be worth paying only if I was fairly certain I’ll be in a freezable situation (which is likelier if I’m dying 50 years from now) and Alcor will still be the best option (which is unlikely given 50 years).
As for life insurance, I do have it right now because I don’t have the $50,000-$100,000 saved up that could be used to pay for cryopreservation. If I have the money saved up, I could afford to stop paying the premiums because life insurance has a net negative expectancy. At that point I’d rather keep exercising and eating veggies and keep the $100K in a safe mutual fund, waiting for the decision of how to cryopreserve to become more pressing.
Here’s an idea: instead of Alcor, why wouldn’t I name Eliezer the beneficiary of my life insurance policy, with instructions to pay for my cryopreservation at the best affordable company available and take the remainder of the money for SIAI (as remuneration for his trouble)?
Do you think Eliezer writes so much (and so well) to increase the chances that future generations will be interested in reviving him? If you had the power, think who you would rather thaw first: a prolific 19th century philosopher that you read or an anonymous 19th century lawyer.
I guess the next thing after signing up for cryonics I should do is write a book :)
It’s not at all obvious to me that the marginal utility of $120/year (at a time when I’m extremely healthy, as part of a demographic that’s exceptionally long-lived) is greater than that of e.g. 20 malaria nets (which is an absolute lower bound for any decision; there are ways that I think can leverage my donations significantly further). Can somebody clarify this intuition for me?
By the same reasoning, the marginal utility of any amount used to improve your health is greater than the marginal utility of using it on malaria nets (except insofar as improving your health lets you survive to produce more money for malaria nets). In fact, the same could be said about any expenditure on yourself whatsoever, whether health-related or not.
I continue to believe that EA is absurd, and it’s absurd for reasons like this. No individual alieves in EA; everyone says at some point that they could do more good by buying malaria nets but they’re going to spend some money on themselves anyway. Cryonics is not special in this regard compared to all the other ways of spending money on yourself, which you do do.
Though it amuses me to see one LW weird idea collide head on with another LW weird idea.
“By the same reasoning, the marginal utility of any amount used to improve your health is greater than the marginal utility of using it on malaria nets (except insofar as improving your health lets you survive to produce more money for malaria nets). In fact, the same could be said about any expenditure on yourself whatsoever, whether health-related or not.”
Your point being?
“Cryonics is not special in this regard compared to all the other ways of spending money on yourself, which you do do.”
I spend money on myself (less than you think, probably) because of a) inertia bias and b) signaling/minimizing weirdness points. For a), perhaps you and your peers are different, but I have substantially more self-control against starting new bad habits than quitting old ones. Thus, it makes sense to apply greater scrutiny to new actions I might take than to give up things that I emotionally find difficult to lose. If you are more rational than me on this point, I congratulate you.
For b), outside of this tiny microcosm and some affiliated places, cryonics is highly unlikely to bring me greater status/minimize my perceived weirdness. Indeed, my prior is very strongly that it has the opposite effect.
Thus, while having a selective demand for rigor for newer ideas may initially seem offputting to rationalists, I think it makes a lot of sense in practice to do so.
“Though it amuses me to see one LW weird idea collide head on with another LW weird idea.” I doubt Singer is a LW’er, and there are plenty of ethical people of an optimizing variety in the world before Singer or LW. EA as a term is closely tied with the LW-sphere, yes, but it’s really just a collection of obvious-in-retrospect ideas put together. I doubt more than a third of the current population of EA identifies as LW/rationalist (I certainly don’t), and I also strongly suspect that EA will outgrow or outlive LW, but I admit to some perhaps unjustifiable optimism on that front.
I am more rational than you on this point in that I conclude that, because EA makes such absurd demands, I should refuse to accept the premises that go into EA. If I don’t think it’s bad to spend money on myself, then rationality doesn’t demand that I stop spending money on myself in order to be good.
If you consider spending money on yourself to be a problem in the way you’ve described, you’ve ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself). This means you have bitten too many bullets.
LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn’t believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA.
You may not identify as LW-rationalist, but you’re acting like a LW-rationalist.
“then rationality doesn’t demand that I stop spending money on myself in order to be good.” Well, yes, because whether you’re “being” good is somewhat irrelevant. Objective conditions of the world don’t change based on what you’re “being” ontologically, reality is affected by what you do.
My terminal goals involve the alleviation of suffering, with the minimization of bad habits being an instrumental goal. It so happens that spending money on cryogenics is unlikely to be the best way to solve this goal (or so it appears. No strong arguments have been made in its favor as of today, which is what I initially asked for).
“you’ve ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself).” Normality is not a terminal value of mine, and I doubt it is for you. Having an impossible goal would be absurd IF success/failure were binary. But it really isn’t. There is so much suffering in the world that being halfway, or even a tenth of the way, successful still means a lot of reduction of suffering in the world.
“LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn’t believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA.” Your argument is of the form: A, B, C results in X, but A, B, and not-C results in Y, so “in practice” X and Y are associated. But this is bizarre when a lot of different things can result in Y, at best tangentially related to A and B, and completely independent of the truth of C. Plain ol’ egalitarianism comes to mind, as do Rawls and libertarian theology.
I will ignore the ad hominem.
You still have not addressed the point that adopting new behaviors is qualitatively different psychologically than getting rid of old ones. And from an ethical, non-egotistical perspective, this difference is quite significant.
That’s a semantics objection. Pretend that I said a more appropriate phrase instead of “being good”, such as “maximizing utility” or “doing what you should do”.
Normality serves as a sanity check against taking ideas seriously. Sanity checks aren’t terminal values.
You just said that you doubt that “more than a third” of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn’t find that one third of vegetarians are LW-rationalists, or 1⁄3 of atheists, for instance, even though those are popular positions here.
The very fact that you’re asking how to reconcile cryonics with EA shows that cryonics is not in the category of psychologically-easy-to-give-up things. Otherwise you’d just avoid cryonics immediately.
“You just said that you doubt that “more than a third” of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn’t find that one third of vegetarians are LW-rationalists, or 1⁄3 of atheists, for instance, even though those are popular positions here.” I feel like you’re making a pretty elementary subset error there…
“Ther very fact that you’re asking how to reconcile cryonics with EA shows that cryonics is not in the category of psychologically easy to give up things. Otherwise you’d just avoid cryonics immediately.” No, I currently see no inside view need to go for cryonics, emotionally or otherwise. There were enough people I respect who went for cryonics that my outside view was that they knew something I did not. This does not appear to be the case, and I see no reason to consider this further, at least until I grow substantially older or sicker. Nor do I see a need to continue this conversation.
Happy New Year.
I meant what I said. If 1⁄3 of X are Y, but X doesn’t have anywhere near a 1⁄3 prevalence in the general population or in other subgroups that are disproportionately Y for separate reasons, then it’s fair to say that X has a huge influence on Y.
The proper way to end a conversation is to just end it, not to say “this is why I am right, now that I am done saying that, I’ll end it”.
The implication of this is that you should just look at cryonics as one possible way to benefit yourself, but realize that there is no reason to criticize someone who doesn’t do it, just as you don’t criticize someone who doesn’t feel like buying himself ice cream.
Your double negatives are confusing me. :) Can you clarify?
It seems like this reason applies more to relatively wealthy people in developed countries. But in poorer countries where many people lack clean water, food, and electricity, cryonics is much more likely to be unaffordable and therefore finances are much more likely to be the dominant factor in one’s decision making.
If cryonics is just about affordable in the US, and most people in the world are poorer than the average US person (1), then cryonics could be unaffordable for much of the earth’s population.
I’m assuming that the target audience of the post was the kind of people who read LessWrong or other people in developed countries. In that case, this argument does not apply.
(1) https://howrichami.givingwhatwecan.org/how-rich-am-i
The lifelogging-as-life-extension version of this post would be titled “You Only Live 1.5 Times” ^^