I don’t know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky’s post on Normal Cryonics… if in reverse.
Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It’s made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.
Once you notice a cultural bias, it’s not too hard to change your reasoned opinion… but the bias usually piggy-backs on a deep-seated reptilian reaction. I find changing that reaction to be harder work.
All this to say that in the case of this tale, and of Eliezer’s lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).
Knowing that we will suffer, and knowing that we will die, are unbearable thoughts. We invest an enormous amount of energy toward dealing with the certainty of death and of suffering, as individuals, families, social groups, nations. Worlds in which we would not have to die, or not have to suffer, are worlds for which we have no useful skills or tools. Especially compared to the considerable arsenal of sophisticated technologies, art forms, and psychoses we’ve painstakingly evolved to cope with death.
That’s where I am right now. Eliezer’s comments have triggered a strongly rational dissonance, but I feel comfortable hanging around all the serious people, who are too busy doing the serious work of making the most of life to waste any time on silly things like immortality. Mostly, I’m terrified at the unfathomable enormity of everything that I’ll have to do to adapt to a belief in cryonics. I’ll have to change my approach to everything… and I don’t have any cultural references to guide the way.
Rationally, I know that most of what I’ve learned is useless if I have more time to live. Emotionally, I’m afraid to let go, because what else do I have?
Is this a matter of genetic programming percolating too deep into the fabric of all our systems, be they genetic, nervous, emotional, instinctual, cultural, intellectual? Are we so hard-wired for death that we physically can’t fathom or adapt to the potential for immortality?
I’m particularly interested in hearing about the experience of the LW community on this: How far can rational examination of life-extension possibilities go in changing your outlook, but also feelings or even instincts? Is there a new level of self-consciousness behind this brick wall I’m hitting, or is it pretty much brick all the way?
That was eloquent, but… I honestly don’t understand why you couldn’t just sign up for cryonics and then get on with your (first) life. I mean, I get that I’m the wrong person to ask, I’ve known about cryonics since age eleven and I’ve never really planned on dying. But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. Add the uncertain prospect of immortality and… not a whole lot changes so far as I can tell.
There’s all the people who believe in Heaven. Some of them are probably even genuinely sincere about it. They think they’ve got a certainty of immortality. And they still walk on two feet and go to work every day.
“But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. ”
Hm. I don’t see this at all. I see people planning college, kids, a career they can stand for 40 years, retirement, nursing care, writing wills, buying insurance, picking out cemeteries, all in order, all in a march toward the inevitable. People often talk about whether or not it’s “too late” to change careers or buy a house. People often talk about “passing on” skills or keepsakes or whatever to their children. Nearly everything we do seems like an adaptation to death to me.
People who believe in heaven believe that whatever they’re supposed to do in heaven is all cut out for them. There will be an orientation, God will give you your duties or pleasures or what have you, and he’ll see to it that they don’t get boring, because after all, this is a reward. And unlike in Avalot’s scenario, the skills you gained in the first life are useful in the second, because God has been guiding you and all that jazz. There’s still a progression of birth to fulfilment. (I say this as an ex-afterlife-believer.)
On the other hand, many vampire and other stories are predicated on the fact that mundane immortality is terrifying. Who can stand a job for more than 40 years? Who has more than a couple dozen jobs they could imagine standing for 40 years each in succession? Wouldn’t they all start to seem pointless? What would you do with your time without jobs? Wouldn’t you meet the same sorts of stupid people over and over again until it drove you insane? Wouldn’t you get sick of the taste of every food? Even the Internet has made me more jaded than I’d like.
That’s my fear of cryonics. That, and that imperfect science would cause me to have a brain rot that would leave my reanimated self crazy and suffering. But that one is a failure to visualize it working well, not an objection to it working well.
Most of the examples you stated have more to do with people fearing a “not so good life”—old age, reduced mental and physical capabilities etc., not necessarily death.
Not sure what you’re responding to. I never said anything about fearing death or a not-so-good life, only immortality. And my examples (jadedness, boredom) have nothing to do with declining health.
Aside from all of the questions as to the scientific viability of resurrection through cryonics, I question the logistics of it. What assurance do you have that a cryonics facility will be operational long enough to see your remains get proper treatment? Furthermore, if the facility and the entity controlling it do in fact survive, what recourse is there if it fails to provide the contracted services? If the facility has no legal liability, might it not rationally choose to dispose of cryonically preserved bodies/individuals rather than reviving them?
I know that there is probably a page somewhere explaining this; if so, please feel free to link it in lieu of responding in depth.
You’re hanging off a cliff, on the verge of falling to your death. A stranger shows his face over the edge and offers you his hand. Is he strong enough to lift you? Will you fall before you reach his hand? Is he some sort of sadist that is going to push you once you’re safe, just to see your look of surprise as you fall?
The probabilities are different with cryonics, but the spirit of the calculation is the same. A non-zero chance of life, or a sure chance of death.
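The spirit of that calculation can be made concrete with a toy expected-value comparison. All of the numbers below are illustrative assumptions for the sake of the sketch, not estimates anyone in this thread has defended:

```python
# Toy expected-utility comparison for "a non-zero chance of life, or a
# sure chance of death". Every number here is an illustrative assumption.

def expected_utility(p_revival, utility_revival, cost, utility_per_dollar=1.0):
    """EV of signing up: chance of revival times its utility, minus the cost."""
    return p_revival * utility_revival - cost * utility_per_dollar

# Assumed inputs: 5% revival chance, revival "worth" 1e6 utility units,
# lifetime cost of ~$20,000 valued at 1 utility unit per dollar.
ev_signup = expected_utility(p_revival=0.05, utility_revival=1_000_000, cost=20_000)
ev_decline = 0.0  # declining: no cost, but no chance of revival either

print(ev_signup, ev_decline)  # 30000.0 0.0 under these assumptions
```

The point of the sketch is only that a small but non-negligible probability of a very large payoff can dominate a moderate cost; whether the real probabilities are in that regime is exactly what the thread disputes.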
This sounds similar to Pascal’s wager, and it has the same problems really. If you don’t see them, I guess my response would be…
I have developed a very promising resurrection technology that works with greater reliability and less memory loss than cryonics. Paypal me $1,000 at shiftedshapes@gmail.com and note your name and social security number in the comments field and I will include you in the first wave of revivals.
Only a fallacy if your assignment of probabilities here:
“And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.”
is accurate. I really don’t have the expertise to debate this with you. I hope that you are right!
I think the logistical issues discussed above will be the wrench in the works, unfortunately.
Logistical issues are my main concern over cryonics as well. I don’t really doubt that in principle the technology could one day exist to revive a frozen person, my doubts are much more about the likelihood of cryonic storage getting me there despite mundane risks like corporate bankruptcy, political upheaval, natural disasters, fires, floods, fraud, etc., etc.
Um… first of all, you’ve got a signed contract. Second, they screw over one customer and all their other customers leave. Same as for any other business. Focusing on this in particular sounds like a rationalization of a wiggy reaction.
The more reasonable question is the first one: do you think it’s likely that your chosen cryonics provider will remain financially solvent until resuscitation becomes possible?
I think it’s a legitimate concern, given the track record of businesses in general (although if quantum immortality reasoning applies anywhere, it has to apply to cryonic resuscitation, so it suffices to have some plausible future where the provider stays in business— which seems virtually certain to be the case).
It’s not the business going bust you have to worry about, it’s the patient care trust. My impression is that trusts do mostly last a long time, but I don’t know how best to get statistics on that.
Yes, there are a lot of issues. Probably the way to go is to look for a law review article on the subject. Someone with free LexisNexis (or Westlaw) access could help here.
Cryonics is about as far as you can get from a plain vanilla contractual issue. If you are going to invest a lot of money in it, I hope that you investigate these pitfalls before putting down your cash, Eliezer.
I have been looking into this at some length, and basically it appears that no-one has ever put work into understanding the details and come to a strongly negative conclusion. I would be absolutely astonished (around +20db) if there was a law review article dealing with specifically cryonics-related issues that didn’t come to a positive conclusion, not because I’m that confident that it’s good but because I’m very confident that no critic has ever put that much work in.
So, if you have a negative conclusion to present, please don’t dash off a comment here without really looking into it—I can already find plenty of material like that, and it’s not very helpful. Please, look into the details, and make a blog post or such somewhere.
I know you’re not Eliezer; I was addressing him because I assumed he was the only one here who had paid, or was considering paying, for cryonics.
This site is my means of researching cryonics, as I generally assume that motivated, intelligent individuals such as yourselves will be equipped with any available facts to defend your positions. A sort of efficient information market hypothesis.
I also assume that I will not receive contracted services in situations where I lack leverage. This leverage could be litigation with a positive expected return or, even better, the threat of nonpayment. In the instance of cryonics all payments would have been made up front, so the latter does not apply. The chances of litigation success seem dim at first blush in light of the issues mentioned in my posts above and below by mattnewport and others. I assumed that if there is evidence that cryonics contracts might be legally enforceable (from a perspective of legal realism), you guys would have it here, as you are smart and incentivized to research this issue (due to your financial and intellectual investment in it). The fact that you have no such evidence signals to me that it likely does not exist. This does not inspire me to move away from my initial skepticism with respect to cryonics or to invest time in researching it.
So no I won’t be looking into the details based on what I have seen so far.
Frankly, you don’t strike me as genuinely open to persuasion, but for the sake of any future readers I’ll note the following:
1) I expect cryonics patients to actually be revived by artificial superintelligences subsequent to an intelligence explosion. My primary concern for making sure that cryonicists get revived is Friendly AI.
2) If this were not the case, I’d be concerned about the people running the cryonics companies. The cryonicists that I have met are not in it for the money. Cryonics is not an easy job or a wealthy profession! The cryonicists I have met are in it because they don’t want people to die. They are concerned with choosing successors with the same attitude, first because they don’t want people to die, and second because they expect their own revivals to be in their hands someday.
So you are willing to rely on the friendliness and competence of the cryonicists that you have met (at least to serve as stewards in the interim between your death and the emergence of an FAI).
Well, that is a personal judgment call for you to make.
You have got me all wrong. Really I was raising the question here so that you would be able to give me a stronger argument and put my doubts to rest precisely because I am interested in cryonics and do want to live forever. I posted in the hopes that I would be persuaded. Unfortunately, your personal faith in the individuals that you have met is not transferable.
It’d be best if names were attached to these hypothetical Mega Upvotes. You don’t normally want people to see your voting patterns, but if you’re upsetting the comment karma balance that much then it’d be best to have a name attached. Two kinds of currency would be clunky. There are other considerations that I’m too lazy to list out but generally they somewhat favor having names attached.
Are you out of shape and/or overweight? If so, I will probably outlive you; why don’t you let me know what you would like on your tombstone.
How about the rest of you pro-cryonics individuals: how many of you have latched onto this slim chance at immortality as a means of ignoring the consequences of your computer-bound, Cheeto-eating lifestyle?
The attitude tends to be more like: “Having your brain cryogenically preserved is the second worst thing that can happen to you.”
I run marathons, practice martial arts and work out at the gym 4 times a week. I dedicate a significant amount of my budget to healthy eating and optimal nutritional supplementation.
If you read through Alcor’s website, you’ll see that they are careful not to provide any promises and want their clients to be well-informed about the lack of any guarantees—this points to good intentions.
How convinced do you need to be to pay $25 a month? (I’m using the $300/year quote.)
If you die soon, you won’t have paid so much. If you don’t die soon, you can consider that you’re locking into a cheaper price for an option that might get more expensive once the science/culture is more established.
In 15 years, they might discover something that makes cryonics unlikely—and you might regret your $4,500 investment. Or they might revive a cryonically frozen puppy, in which case you would have been pleased that you were ‘cryonically covered’ the whole time, and possibly pleased you funded their research. A better cryonics company might come along, you might become more informed, and you can switch.
If you like the idea of it—and you seem to—why wouldn’t you participate in this early stage even when things are uncertain?
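The dollar figures in this comment can be checked with a quick sketch; the $300/year quote and the 15-year horizon are the commenter’s own numbers, not independent estimates:

```python
# Cumulative cost of cryonics membership using the figures quoted above:
# $300/year, evaluated over the 15-year horizon mentioned in the comment.

ANNUAL_FEE = 300   # dollars per year (quoted figure)
YEARS = 15         # horizon used in the comment

monthly_fee = ANNUAL_FEE / 12    # -> 25.0 dollars per month
total_cost = ANNUAL_FEE * YEARS  # -> 4500 dollars over 15 years

print(monthly_fee, total_cost)   # 25.0 4500
```

This confirms the “$25 a month” and “$4,500 investment” figures used in the comment are consistent with each other.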
I need to be convinced that cryonics is better than nothing, and quite frankly I’m not.
For now I will stick to maintaining my good health through proven methods, maximizing my chances to live to see future advances in medicine. That seems to be the highest probability method of living practically forever, right? (and no I’m not trying to create a false-dilemma here, I know I could do both).
If cryonics were free and somebody else did all the work, I’m assuming you wouldn’t object to being signed up. So how cheap (in terms of both effort and money) would cryonics have to be in order to make it worthwhile for you?
At the level of confidence I have in it now I would not contribute any money, maybe a $10 annual donation because I think it is a good cause.
If I were very rich I might contribute a large amount of money to cryonics research, although I think I would rather spend it on AGI or nanotech basic science.
I have a rather straightforward argument—well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am—anyway, I have an argument that there is a strong possibility, let’s call it 30% for kicks, that conditional on yer typical FAI FOOM outwards at lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn’t developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly “counterfactual” quantum sibling branches), it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn’t have to be much of any local information around come FOOM.
Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn’t necessitate aid from quantum siblings.
You’ve done a lot of promotion of cryonics. There are good memetic engineering reasons. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with ‘lots’ of detail? If not, is your lack of confidence taken into account in your seemingly-confident promotion of cryonics for its own sake rather than just as a memetic strategy to get folk into the whole ‘taking transhumanism/singularitarianism seriously’ clique?
I have a rather straightforward argument [...] anyway, I have an argument that there is a strong possibility [...] This argument does not depend on [...] Again, this argument is disjunctive with [...]
How foolish of you to ask. You’re supposed to revise your probability simply based on Will’s claim that he has an argument. That is how rational agreement works.
Bwa ha ha. I’ve already dropped way too many hints here and elsewhere, and I think it’s way too awesome for me to reveal given that I didn’t come up with it and there is a sharper, more interesting, more general, more speculative idea that it would be best to introduce at the same time, because the generalized argument leads to an idea that is even more awesome by like an order of magnitude (but is probably like an order of magnitude less probable (though that’s just from the addition of logical uncertainty, not a true conjunct)). (I’m kind of in an affective death spiral around it because it’s a great example of the kinds of crazy awesome things you can get from a single completely simple and obvious inferential step.)
Cryonics orgs that mistreat their patients lose their client base and can’t get new ones. They go bust. Orgs that have established a good record, like Alcor and the Cryonics Institute, have no reason to change strategy. Alcor has entirely separated the money for care of patients in an irrevocable trust, thus guarding against the majority of principal-agent problems, like embezzlement.
Note that Alcor is a charity and the CI is a non-profit. I have never assessed such orgs by how successfully I might sue them. I routinely look at how open they are with their finances and actions.
So explain to me how the breach gets litigated, e.g. who is the party that brings the suit and has the necessary standing, what is the contractual language, where is the legal precedent establishing the standard for damages, etc.
As for loss of business, I think it is likely that all of the customers might be dead before revival becomes feasible. In this case there is no business to be lost.
Dismissing my objection as a rationalization sounds like a means of maintaining your denial.
How about this analogy: if I sign up for travel insurance today then I needn’t necessarily spend the next week coming to terms with all the ghastly things that could happen during my trip. Perhaps the ideal rationalist would stare unblinkingly at the plethora of awful possibilities but if I’m going to be irrational and block my ears and eyes and not think about them then making the rational choice to get insurance is still a very positive step.
Alex, I see your point, and I can certainly look at cryonics this way… And I’m well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it’s now feeling like one of these no-fun grown-up things I’m going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this “bah, no big deal, just get it done” attitude which is a helpful model at this point. I tend to be the agonizing type.
But I think I’m also making a point about communicating the singularity to society, as opposed to individuals. This knee-jerk reaction to topics like cryonics and AI, and to promises such as the virtual end of suffering… might it be a sort of self-preservation instinct of society (not individuals)? So, defining “society” as the system of beliefs and tools and skills we’ve evolved to deal with fore-knowledge of death, I guess I’m asking if society is alive, inasmuch as it has inherited some basic self-preservation mechanisms, by virtue of the sunk-cost fallacy suffered by the individuals that comprise it?
So you may have a perfectly no-brainer argument that can convince any individual, and still move nobody. The same way you can’t make me slap my forehead by convincing each individual cell in my hand to do it. They’ll need the brain to coordinate, and you can’t make that happen by talking to each individual neuron either. Society is the body that needs to move, culture its mind?
Generally, reasoning by analogy is not very well regarded here. But, nonetheless let me try to communicate.
Society doesn’t have a body other than people. Where societal norms have the greatest sway is when individuals follow customs and traditions without thinking about them, or get reactions that they cannot explain rationally.
Unfortunately, there is no way other than talking to and convincing individuals who are willing to look beyond those reactions and beyond those customs. Maybe they will slowly develop into a majority. Maybe all that they need is a critical mass beyond which they can branch into their own socio-political system. (As Peter Thiel pointed out in one of his controversial talks.)
“Rationally, I know that most of what I’ve learned is useless if I have more time to live. Emotionally, I’m afraid to let go, because what else do I have?”
I love this. But I think it’s rational as well as emotional to not be willing to let go of “everything you have”.
People who have experienced the loss of someone, or other tragedy, sometimes lose the ability to care about any and everything they are doing. It can all seem futile, depressing, unable to be shared with anyone important. How much more that would be true if none of what you’ve ever done will ever matter anymore.
There are no assurances.
http://lesswrong.com/lw/z0/the_pascals_wager_fallacy_fallacy/
For small enough probabilities the spirit of the calculation does change. That’s true. You then have to factor in the utility of the money spent.
ETA: that factor exists even with non-small probabilities, it just tends to be swamped by the other terms.
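The point about factoring in the utility of the money spent can be made concrete with a toy expected-value sketch. Every number below is an illustrative assumption, not an estimate anyone in the thread has endorsed:

```python
# Toy expected-utility comparison: sign up for cryonics vs. don't.
# All inputs are illustrative assumptions, not actual estimates.

def ev_of_signing_up(p_revival, u_revival, u_no_revival, cost_in_utility):
    """Chance-weighted value of the outcomes, minus the utility of the money spent."""
    return p_revival * u_revival + (1 - p_revival) * u_no_revival - cost_in_utility

U_REVIVAL = 1000.0    # assumed utility of revival into a much longer life
U_NO_REVIVAL = 0.0    # baseline: the money is spent, no revival
COST = 5.0            # assumed utility of the money spent on membership

# At a modest probability the cost term is swamped by the upside...
print(ev_of_signing_up(0.05, U_REVIVAL, U_NO_REVIVAL, COST))
# ...but at a small enough probability the cost term dominates and the sign flips.
print(ev_of_signing_up(0.001, U_REVIVAL, U_NO_REVIVAL, COST))
```

This is the sense in which the spirit of the calculation changes at very small probabilities: the cost term stops being negligible relative to the chance-weighted upside.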
How does it work?
Very well so far.
Oh, and it uses technology.
We have discussed Pascal’s Wager in depth here. Read the archives.
Um… first of all, you’ve got a signed contract. Second, if they screw over one customer, all their other customers leave. Same as for any other business. Focusing on this in particular sounds like a rationalization of a wiggy reaction.
The more reasonable question is the first one: do you think it’s likely that your chosen cryonics provider will remain financially solvent until resuscitation becomes possible?
I think it’s a legitimate concern, given the track record of businesses in general (although if quantum immortality reasoning applies anywhere, it has to apply to cryonic resuscitation, so it suffices to have some plausible future where the provider stays in business, which seems virtually certain to be the case).
It’s not the business going bust you have to worry about, it’s the patient care trust. My impression is that trusts do mostly last a long time, but I don’t know how best to get statistics on that.
Yes, there are a lot of issues. Probably the way to go is to look for a law review article on the subject. Someone with free LexisNexis (or Westlaw) access could help here.
Cryonics is about as far as you can get from a plain-vanilla contractual issue. If you are going to invest a lot of money in it, I hope that you investigate these pitfalls before putting down your cash, Eliezer.
I’m not Eliezer.
I have been looking into this at some length, and basically it appears that no one has ever put work into understanding the details and come to a strongly negative conclusion. I would be absolutely astonished (around +20db) if there were a law review article dealing with specifically cryonics-related issues that didn’t come to a positive conclusion, not because I’m that confident that it’s good but because I’m very confident that no critic has ever put that much work in.
So, if you have a negative conclusion to present, please don’t dash off a comment here without really looking into it—I can already find plenty of material like that, and it’s not very helpful. Please, look into the details, and make a blog post or such somewhere.
I know you’re not Eliezer, I was addressing him because I assumed that he was the only one who had or was considering paying for cryonics here.
This site is my means of researching cryonics, as I generally assume that motivated, intelligent individuals such as yourselves will be equipped with any available facts to defend your positions. A sort of efficient-information-market hypothesis.
I also assume that I will not receive contracted services in situations where I lack leverage. This leverage could be litigation with a positive expected return, or better still the threat of nonpayment. In the case of cryonics all payments would have been made up front, so the latter does not apply. The chances of litigation success seem dim at first blush, in light of the issues mentioned in my posts above and by mattnewport and others below. I assumed that if there were evidence that cryonics contracts might be legally enforceable (from a perspective of legal realism), you guys would have it here, since you are smart and incentivized to research this issue (due to your financial and intellectual investment in it). The fact that you have no such evidence signals to me that it likely does not exist. This does not inspire me to move away from my initial skepticism toward cryonics, or to invest time in researching it.
So no I won’t be looking into the details based on what I have seen so far.
Frankly, you don’t strike me as genuinely open to persuasion, but for the sake of any future readers I’ll note the following:
1) I expect cryonics patients to actually be revived by artificial superintelligences subsequent to an intelligence explosion. My primary concern for making sure that cryonicists get revived is Friendly AI.
2) If this were not the case, I’d be concerned about the people running the cryonics companies. The cryonicists that I have met are not in it for the money. Cryonics is not an easy job or a wealthy profession! The cryonicists I have met are in it because they don’t want people to die. They are concerned with choosing successors with the same attitude, first because they don’t want people to die, and second because they expect their own revivals to be in their hands someday.
So you are willing to rely on the friendliness and competence of the cryonicists that you have met (at least to serve as stewards in the interim between your death and the emergence of an FAI).
Well that is a personal judgment call for you to make.
You have got me all wrong. Really I was raising the question here so that you would be able to give me a stronger argument and put my doubts to rest precisely because I am interested in cryonics and do want to live forever. I posted in the hopes that I would be persuaded. Unfortunately, your personal faith in the individuals that you have met is not transferable.
Rest In Peace
1988 − 2016
He died signalling his cynical worldliness and sophistication to his peers.
It’s at times like this that I wish Less Wrong gave out a limited number of Mega Upvotes so I could upvote this 10 points instead of just 1.
It’d be best if names were attached to these hypothetical Mega Upvotes. You don’t normally want people to see your voting patterns, but if you’re upsetting the comment karma balance that much then it’d be best to have a name attached. Two kinds of currency would be clunky. There are other considerations that I’m too lazy to list out but generally they somewhat favor having names attached.
Are you out of shape and/or overweight? If so, I will probably outlive you; why don’t you let me know what you would like on your tombstone.
How about the rest of you pro-cryonics individuals: how many of you have latched onto this slim chance at immortality as a means of ignoring the consequences of your computer-bound, Cheeto-eating lifestyle?
The attitude tends to be more like: “Having your brain cryogenically preserved is the second worst thing that can happen to you.”
I run marathons, practice martial arts and work out at the gym 4 times a week. I dedicate a significant amount of my budget to healthy eating and optimal nutritional supplementation.
Good for you, except for the marathons of course; those are terrible for you.
I guess they are the type of thing I would like to do before I die, though.
If you read through Alcor’s website, you’ll see that they are careful not to provide any promises and want their clients to be well-informed about the lack of any guarantees—this points to good intentions.
How convinced do you need to be to pay $25 a month? (I’m using the $300/year quote.)
If you die soon, you won’t have paid so much. If you don’t die soon, you can consider that you’re locking into a cheaper price for an option that might get more expensive once the science/culture is more established.
In 15 years, they might discover something that makes cryonics unlikely—and you might regret your $4,500 investment. Or they might revive a cryonically frozen puppy, in which case you would have been pleased that you were ‘cryonically covered’ the whole time, and possibly pleased you funded their research. A better cryonics company might come along, you might become more informed, and you can switch.
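The arithmetic here can be checked in a couple of lines (only the $300/year quote and the 15-year horizon come from the comment; the rest is derived):

```python
# Cost of cryonics coverage under the figures quoted above:
# $300/year, considered over a 15-year horizon.
annual_cost = 300
years = 15

monthly_cost = annual_cost / 12     # the "$25 a month" figure
total_outlay = annual_cost * years  # the "$4,500 investment"

print(monthly_cost)   # 25.0
print(total_outlay)   # 4500
```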
If you like the idea of it—and you seem to—why wouldn’t you participate in this early stage even when things are uncertain?
I need to be convinced that cryonics is better than nothing, and quite frankly I’m not.
For now I will stick to maintaining my good health through proven methods, maximizing my chances to live to see future advances in medicine. That seems to be the highest probability method of living practically forever, right? (and no I’m not trying to create a false-dilemma here, I know I could do both).
If cryonics were free and somebody else did all the work, I’m assuming you wouldn’t object to being signed up. So how cheap (in terms of both effort and money) would cryonics have to be in order to make it worthwhile for you?
Yeah, for free would be fine.
At the level of confidence I have in it now I would not contribute any money, maybe a $10 annual donation because I think it is a good cause.
If I were very rich I might contribute a large amount of money to cryonics research, although I think I would rather spend it on AGI or nanotech basic science.
I have a rather straightforward argument—well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am—anyway, I have an argument that there is a strong possibility, let’s call it 30% for kicks, that conditional on yer typical FAI FOOM outwards at lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn’t developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly “counterfactual” quantum sibling branches), it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn’t have to be much of any local information around come FOOM.
Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn’t necessitate aid from quantum siblings.
You’ve done a lot of promotion of cryonics. There are good memetic engineering reasons. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with ‘lots’ of detail? If not, is your lack of confidence taken into account in your seemingly-confident promotion of cryonics for its own sake rather than just as a memetic strategy to get folk into the whole ‘taking transhumanism/singularitarianism seriously’ clique?
And that argument is … ?
How foolish of you to ask. You’re supposed to revise your probability simply based on Will’s claim that he has an argument. That is how rational agreement works.
Actually, rational agreement for humans involves betting. I’d like to find a way to bet on this one. AI-box style.
Bwa ha ha. I’ve already dropped way too many hints here and elsewhere, and I think it’s way too awesome for me to reveal given that I didn’t come up with it, and there is a sharper, more interesting, more general, more speculative idea that it would be best to introduce at the same time, because the generalized argument leads to an idea that is even more awesome by like an order of magnitude (but is probably like an order of magnitude less probable (though that’s just from the addition of logical uncertainty, not a true conjunct)). (I’m kind of in an affective death spiral around it because it’s a great example of the kinds of crazy awesome things you can get from a single completely simple and obvious inferential step.)
Cryonics orgs that mistreat their patients lose their client base and can’t get new ones. They go bust. Orgs that have established a good record, like Alcor and the Cryonics Institute, have no reason to change strategy. Alcor has entirely separated the money for care of patients in an irrevocable trust, thus guarding against the majority of principal-agent problems, like embezzlement.
Note that Alcor is a charity and the CI is a non-profit. I have never assessed such orgs by how successfully I might sue them. I routinely look at how open they are with their finances and actions.
So explain to me how the breach gets litigated: e.g., who is the party that brings the suit and has the necessary standing, what is the contractual language, where is the legal precedent establishing the standard for damages, etc.
As for loss of business, I think it is likely that all of the customers might be dead before revival becomes feasible. In this case there is no business to be lost.
Dismissing my objection as a rationalization sounds like a means of maintaining your denial.
How about this analogy: if I sign up for travel insurance today then I needn’t necessarily spend the next week coming to terms with all the ghastly things that could happen during my trip. Perhaps the ideal rationalist would stare unblinkingly at the plethora of awful possibilities but if I’m going to be irrational and block my ears and eyes and not think about them then making the rational choice to get insurance is still a very positive step.
Alex, I see your point, and I can certainly look at cryonics this way… And I’m well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it’s now feeling like one of these no-fun grown-up things I’m going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this “bah, no big deal, just get it done” attitude which is a helpful model at this point. I tend to be the agonizing type.
But I think I’m also making a point about communicating the singularity to society, as opposed to individuals. This knee-jerk reaction to topics like cryonics and AI, and to promises such as the virtual end of suffering… might it be a sort of self-preservation instinct of society (not individuals)? So, defining “society” as the system of beliefs and tools and skills we’ve evolved to deal with fore-knowledge of death, I guess I’m asking if society is alive, inasmuch as it has inherited some basic self-preservation mechanisms, by virtue of the sunk-cost fallacy suffered by the individuals that comprise it?
So you may have a perfectly no-brainer argument that can convince any individual, and still move nobody. The same way you can’t make me slap my forehead by convincing each individual cell in my hand to do it. They’ll need the brain to coordinate, and you can’t make that happen by talking to each individual neuron either. Society is the body that needs to move, culture its mind?
Generally, reasoning by analogy is not very well regarded here. But nonetheless, let me try to communicate.
Society doesn’t have a body other than people. Societal norms have the greatest sway when individuals follow customs and traditions without thinking about them, or have reactions that they cannot explain rationally.
Unfortunately, there is no way forward other than talking to and convincing individuals who are willing to look beyond those reactions and those customs. Maybe they will slowly develop into a majority. Maybe all they need is a critical mass, beyond which they can branch into their own socio-political system. (As Peter Thiel pointed out in one of his controversial talks.)
See the links on http://wiki.lesswrong.com/wiki/Sunk_cost_fallacy
“Rationally, I know that most of what I’ve learned is useless if I have more time to live. Emotionally, I’m afraid to let go, because what else do I have?”
I love this. But I think it’s rational as well as emotional to not be willing to let go of “everything you have”.
People who have experienced the loss of someone, or some other tragedy, sometimes lose the ability to care about anything and everything they are doing. It can all seem futile, depressing, impossible to share with anyone important. How much more true that would be if none of what you’ve ever done will ever matter anymore.