Do you foresee that there will be a day in your future when you will prefer to die on that day over living to see the next one?
Yes, I think that this is quite possible. However, the reasons are, as you say below ‘the weaknesses and diseases of old age’, so they’re not really relevant.
I can also easily imagine that I will never want to die. I can easily imagine that, as health care improves ahead of my aging, many of the people who are alive now will live forever, and I will also. That would be fine.
But cryonics is different. Here, you are asking me to take a break of time during which technology advances far beyond what it is today, not to live into the future one day at a time. That does not interest me.
I’m not even interested in being revived from a coma after several years, using only contemporary technology. Certainly I don’t consider it worth the expense. In fact, the main reason that I don’t sign up for DNR now is that I know some people who would suffer if I did not at least outlive them (plus the bother of signing up, although at least it costs nothing).
But I think that your question may be a good one to ask other people who have come to terms with death and thereby find cryonics unappealing. Ask when, after a short or long period of apparent death, they would not want to be revived. For me, that time comes when the people that I care about are no longer around and the things that interest me are no longer current. But I can imagine that some other people would realise that the answer is never and decide to sign up.
You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so. This makes me somewhat suspicious of your reasoning on this issue.
For what it’s worth, I’m currently unconvinced by the arguments for signing up for cryonics but your reasoning here looks dubious to me even though I share your conclusion.
You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so.
I don’t see these as different reasons, but two components of a point of view that hasn’t been fully articulated. However, I share it. To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
This seems somewhat analogous to me to a situation where war or natural disaster destroys your home and kills most of your friends and family but you escape and have the opportunity to start afresh in a new and unfamiliar culture. Now that obviously sounds like a pretty unpleasant situation, and vastly less preferable than if the war or disaster had never occurred, but I would still prefer to survive rather than die. I can imagine that some people might feel differently when presented with that choice, however.
byrnema agreeing with TobyBartels in wanting always to live another day, but being indifferent to his or her own cryonic suspension and revival:
To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
I, too, want always to live another day (unless my health gets so bad I no longer pay attention to anything or anyone except myself and my pain) but am indifferent to my being cryonically revived.
The way I place these two aspects of my desire into a coherent view is to note that I am useful to the world now (and tomorrow, and next year). In fact, on any given day, the way for me to maximize my usefulness to the world will almost certainly be for me to try to keep on living and to stay as healthy as possible because the source of almost all “wealth” or “usefulness” is human creativity (that is, human intelligence combined with a sincere desire to be useful to the world) and a human’s creativity ends when his or her life ends.
Now if I were to be cryonically suspended and then revived, it is almost certain that an intelligence explosion—more precisely an explosion of engineered intelligences which will be much more useful (for whatever purpose to which they are put) than any human intelligence ever was—has taken place, because that is the only thing anyone can think of that would make possible my revival. But I would not be able to help engineered intelligences improve the world: with my relatively puny intelligence, I would just get in their way. Oh, sure, the machines could radically improve my intelligence, making me a “transhuman”, but from where I stand now, this strategy of continuing my usefulness into the post-human era by becoming a transhuman has very low expected utility relative to my simply leaving the post-human era up to the machines (and transhumans other than me); a much better application of my resources is for me to try to increase the probability of a good intelligence explosion.
In other words, I see the fact that no one has yet figured out how to transform ordinary matter like you would find in a chair or a silicon wafer into an engineered intelligence superior to human intelligence to be “good news” in the sense that it gives me an opportunity to be useful. (And I am relatively indifferent to the fact that I am suffering a lot more than I would be if someone had already figured out how to make engineered intelligences.) For the same reason, if offered the chance to be teleported back in time 2000 years, I would take it (even if I could not take any of the wealth, tech or knowledge of the past 2000 years with me) because that would increase the expected usefulness of my intelligence and of my simple human life.
So that is why I am indifferent to my own cryonic suspension and revival: even though I will probably be just as intelligent and just as interested in improving the world after my revival as I am now, it will not matter the way it matters now because my relative ability to improve the world will be less.
In other words, the way I form my wants into a coherent “explanation” or “system” is to say that I am interested in living as long as practical before the intelligence explosion, but approximately indifferent to continuing my life after it. And from that indifference flows my indifference to my being cryonically suspended and revived.
Another thing: I have found that I am able to take coherent actions over extended periods of time to keep on living, but I have not been able to make any non-negligible effort towards my being successfully suspended and revived. In other words, there is a sense in which the values that I have are an empirical matter not under my control. If mattnewport or someone else is able to make a reply to this comment which points out an inconsistency in the “explanation” or “system” I have given above, well, I might be chagrined (because most people do not like to admit in public to holding inconsistent goals or values) but I will probably continue to be unable to motivate myself to take effortful actions over a long period of time to maximize the probability of my successful suspension and revival.
In other words, if as a result of a debate here on Less Wrong (and out of a desire to appear consistent and stable in my goals and values, out of a desire to be accepted by the cool and wonderful people of SIAI or out of some other desire) I were to announce that I have “changed my mind” and I now believe that my being suspended and revived is the right thing for me to choose, I do not see why anyone would care all that much. My desire for my actions and choices to be consistent with my professed values will in that case probably cause me to sign up for cryonics if the cost of signing up is low, but to be honest with you, it is extremely unlikely that anything would cause me to work hard over a long period of time to ensure my suspension and revival because I just do not care and I doubt severely that I could make myself care even if I wanted to make myself care. (There are many goals and values I have unsuccessfully tried to make myself care about.)
One more thing: a lot of people (mostly those who are “motivated” and consequently able to take effortful actions over a long period of time about cryonics) will probably take what I just said as significant evidence that I cannot be trusted. I am not big on networking, so this is based on observations (from public internet messages and from face-to-face conversations) of only a handful of people, so it could be wrong.
And in turn, for many years I assumed (without any good argument to back it up) that signing up for cryonics (unless it is done for instrumental reasons, as Eliezer did) was significant evidence that the person should not be trusted. Yes: I took the fact that someone shared my indifference to keeping on living for its own sake, rather than for the sake of keeping on being useful to the world, as significant evidence that I could trust that person. I have recently abandoned that opinion (in favor of vague agnosticism) because most (all) of the extremely strong rationalists I know who had taken the trouble to inform themselves about cryonics and the intelligence explosion had the opposite opinion (and because my having been confronted with that fact caused me to notice that I had no real basis for my opinion). But note that people with the opposite opinion probably will not convert to agnosticism like me, because among extremely strong rationalists with the necessary information to form an opinion, they are in the majority (even though of course they are in a small minority of the general population). So, maybe I am being too cautious, but I would tend to advise Toby and byrnema, if they are planning on applying for a visiting fellowship at SIAI, FHI or such and want to be very cautious, to refrain from continuing to post about their values as they apply to cryonics.
Regarding being useful, this is something I strongly identify with. I am not a highly self-interest valuing person, though I see self-interest as itself being a useful value (in the right context). I find that I am more motivated to sign up for cryonics when I look at it as an example to set for others than when I look at it as a way to save my own skin. I am essentially more motivated to support than (directly) to adopt cryonics.
Presumably, a well written CEV optimizer would see our desire to be useful and put us in situations where we can be useful. However, I think it’s worth noting that there is quite a bit of wriggle room between the invention of fooming AGI and the development of reversal mechanisms for cryonic suspension. Reversal for cryonic suspension could end up being something that is specifically and painstakingly developed over the course of several decades by humans with normal levels of intelligence.
So far as trustworthiness is concerned, the iterated prisoner’s dilemma suggests that an expectation of ongoing transactions leads to trustworthiness. So, signing up for cryonics implies being slightly more trustworthy than not.
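The claim here is the familiar ‘shadow of the future’ result from game theory: once a transaction is expected to repeat, one-shot defection stops paying. A minimal sketch of that result, with hypothetical payoff numbers and a tit-for-tat partner assumed (none of this comes from the thread itself):

```python
# Iterated Prisoner's Dilemma with continuation probability w ("shadow of the
# future").  Payoffs T > R > P > S are the usual hypothetical values, not
# anything measured about cryonicists or anyone else.
T, R, P, S = 5, 3, 1, 0  # temptation, mutual cooperation, mutual defection, sucker

def expected_payoff(always_defect, w, rounds=10_000):
    """Expected discounted payoff against a tit-for-tat partner (truncated sum)."""
    total, discount = 0.0, 1.0
    partner_cooperates = True
    for _ in range(rounds):
        if always_defect:
            total += discount * (T if partner_cooperates else P)
            partner_cooperates = False  # tit-for-tat retaliates from now on
        else:
            total += discount * R       # mutual cooperation every round
        discount *= w                   # chance the relationship continues
    return total

for w in (0.1, 0.5, 0.9):
    print(f"w={w}: defect={expected_payoff(True, w):.1f}, "
          f"cooperate={expected_payoff(False, w):.1f}")
# At w=0.1 defection wins; at w=0.9 cooperation pays far more, i.e. expecting
# future interaction is what makes cooperative (trustworthy) play rational.
```

Whether that abstract result says anything about cryonicists in particular is exactly what the reply below disputes.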
The prisoner’s dilemma is only one part of the human landscape, and no one has argued that it will prove the most decisive part. Well, Robert Wright comes close by having chosen to write an entire book on the iterated prisoner’s dilemma, but that book’s analysis of the causes of increasing human wealth completely neglects the wealth-increasing potential of an explosion of engineered intelligence.
I can make the counterargument that the more resources a person needs to fulfill his desires, the more likely the person is to impose harm on a fellow human being to acquire those resources, and that, for example, I have little to fear from the proverbial Buddhist mystic who is perfectly happy to sit under a tree day after day while his body decays right in front of him. My simple counterargument implies that, everything else being equal, I have more to fear from the one who desires cryonic suspension and revival than from the one who does not.
But the more important point is that human trustworthiness is a very tricky subject. For example, your simple argument fails to explain why old people and others who know they are near the end of their lives are less likely to steal, defraud or cheat than young healthy people are.
I do not claim to know which of the groups under discussion is more trustworthy. (My only reason for advancing my counterargument was to reduce confidence in the insightfulness of your argument.) I am just passing along a tentative belief, obtained from discussions with and reading of other singularitarians and cryonicists, and from observing the evolution of my own beliefs about human trustworthiness: people tend to think they know which group is more trustworthy, and they think it is the group they themselves belong to.
You appear to have completely abandoned your original reason
I was answering the question that Martin asked. I stand by my old reason for not signing up.
Actually, it’s not so much that I have a reason for not signing up, as that I have no reason for signing up. So in my original post, I addressed what seemed to be the obvious reason for signing up: that one would hold long life to be of value in itself, which I don’t. Then Martin suggested another reason (that on any given day, I would want to live another day), so I addressed that one.
If you were signing up for a health-insurance program which included coverage for cryonics by default, along with other available treatments for severe injuries, would you opt out of that part of the coverage, and ask to be embalmed or cremated rather than frozen? What if it cost extra to do so?
Probably not. I wouldn’t seek out such a plan, and the way things are now, such a plan would cost far more than other plans, so I wouldn’t buy it. But things may be different in the future.
What I mean is, if the plan which otherwise provided all the benefits you wanted for the least cost also included cryonics (as some sort of silly package deal, due to market forces otherwise beyond your understanding), how much would it be worth to you to have the opportunity to randomly get hit by a bus someday and not wake up at all?
Not much, and possibly a negative amount (meaning that I’d prefer the cryonics coverage); I’ll have to think about it when the time comes.
Really, a lot depends on whether my relatives and friends have also signed up for cryonics. If the situation you describe ever exists, it will probably only be when cryonics has become normal, in which case it’s much more likely that I will want it for myself, thanks to having friends waiting for me in the future.
Heck, getting involved in Less Wrong meet-ups might be enough! I find that hard to predict (and unlikely to be tested soon, given where I live and how full my social life is now).
Originally you said:
I no longer desired to live forever. I didn’t even desire to live longer than about a century.
And then, when Martin asked if you foresee a day in the future when you would prefer to die rather than live another day, you said:
I can also easily imagine that I will never want to die. I can easily imagine that, as health care improves ahead of my aging, many of the people who are alive now will live forever, and I will also. That would be fine.
Which suggests that you either do now desire to live forever or are at least comfortable with the idea of doing so. It looked to me like you changed your mind on the question of whether you would actually want to live forever after all but maybe this was a misinterpretation of your position.
You almost seem to be viewing the question of whether you value a long life as fundamentally different to the question of whether you would want to continue living on any given future day. This seems bizarre to me.
I no longer desired to live forever. I didn’t even desire to live longer than about a century.
That’s just part of my history. I carefully put it in the past tense.
Then I wrote a paragraph saying that I no longer had any particular opinion as to how long I should live, that I would just see it day by day. Actually, the paragraph covered more than that, including how I transitioned from a feeling that a century was about right to the idea that it was silly to judge such things. But then, on proofreading my original post, I cut that paragraph. So now my original post reads
[…] I didn’t even desire to live longer than about a century.
And since I no longer desire to live so long, […]
The transition from past tense to present tense is not very clear there, for which I apologise.
But currently I have no particular desire about my length of life. I could make a prediction, based on what is likely to happen in the future and what I am likely to want, as to whether I will always want to live a bit longer, and if I predict that I will, then I could say now that I want to live forever. But signing up for cryonics now would not help me achieve any of the wishes that I anticipate having in the future, because that’s not how I’ll want to live longer. (And if this prediction is wrong, then I can sign up later.)
You almost seem to be viewing the question of whether you value a long life as fundamentally different to the question of whether you would want to continue living on any given future day. This seems bizarre to me.
In that case, taboo wanting to live forever. For some people, that seems to be a value for its own sake; I think that it was for me once. But now I’m rational like you, and I only want to live forever if I’ll forever want to live. So the only question is whether I want, assuming that I get hit by a bus today, to wake up a hundred years later. And I don’t particularly.
But once upon a time, I really wanted to live forever, because I liked the idea of living forever. In holding this idea, I wasn’t thinking about whether some day I would like to die; it was, if not a terminal value in its own right, something close to that. Furthermore, death was scary and unknown, and I was taught about Heaven and Hell; even after I realised that this was a fairy tale, I harboured an idea that death was bad in and of itself. There are probably good evolutionary reasons why somebody would feel this way.
Once I was cured of all that, however, anything that might have made cryonics inviting was gone. That was the point of my original post.
The only question is whether I want, assuming that I get hit by a bus today, to wake up a hundred years later. And I don’t.
This is a really pithy and compelling way of putting this. I definitely have, at a gut level, a desire to wake up tomorrow. But I don’t even have at that same gut level a desire to come out of a coma 20 years from now. Cryonics presses my survival instinct even more gently.
(Edit: I see that Bartels made the coma analogy a few comments up. Excuse the redundancy, or take it for emphasis.)
Thank you for answering.
I find it strange how society at large frowns upon cryo while also not making a serious effort to prolong the healthy lifespan (wallbangerific), but on the other hand frowns upon suicide.
I also usually avoid the topic due to its iffiness, and I am not signed up myself yet, so it’s basically armchairing anyway.
I think Matt has a point. And of course if you go searching for your real reasons, all kinds of bad things might happen to you.
But what jumped out at me was that a long lifespan is fine, while a long lifespan with a coma/pause in the middle is not. I don’t get that.
Of course cryo people would love to take their loved ones with them, and are horrified when they ignore the chance.
while also not making a serious effort to prolong the healthy lifespan (wallbangerific)
I agree with that! I’m interested in the work by Aubrey de Grey. It’s not useful to me now, but I predict that someday it will be.
But what jumped out at me was that a long lifespan is fine, while a long lifespan with a coma/pause in the middle is not. I don’t get that.
Well, I don’t suppose that there are many people who feel that way. If you can get across the idea that cryonics is a way of turning one’s death into a very long coma, then that may help make it more attractive.
But I get up in the morning because there are things that I left unfinished the day before. By the time that I am revived from cryonics, they will all be finished.
Of course cryo people would love to take their loved ones with them, and are horrified when they ignore the chance.
If my loved ones signed up for cryonics, that would be reason enough for me.
Well, I don’t suppose that there are many people who feel that way. If you can get across the idea that cryonics is a way of turning one’s death into a very long coma, then that may help make it more attractive.
Yes. Exploring how people would feel about a very long coma could be a good way of exploring how they feel about cryonics-minus-the-creep-factor. In other words, if they didn’t have the psychological obstacles centered around cryonics, how would they really feel about it?
It is a horrendous case of a sub-optimal equilibrium in a coordination game. You know, one of the examples of game theory that isn’t the @#%@ Prisoner’s Dilemma.
I’ve been spending so much of my social time among people who treat swearing with complete nonchalance that I had forgotten how much power even a censored swearword can have in a setting where it is never used.
Yes, I’m wrong about this being a prisoner’s dilemma. One side defecting (dying) against the other cooperating (cryopreserving) won’t make the first side better off and the second one worse off.
So it’s just insufficient communication/coordination.
We could also consider a game that was perhaps a step closer to the original. Leave cooperation and defection (and their PD-laden connotations) behind and just consider a pair who would sincerely rather die than live on without the other. This is plausible even without trawling Shakespeare for fictional examples. Here on LessWrong I have seen the prospect of living on via cryopreservation without a friend joining them described as ‘extrovert hell’. Then, as you say, a ‘cremation’ equilibrium would be the result of insufficient communication/coordination. A high time and/or emotional cost to transitioning preferences would contribute to that kind of undesirable outcome.
Incidentally, if we were to consider pair-cryo selection as an actual PD, the most obvious scenario seems to be one in which both parties are overwhelmingly spiteful. Cryopreservation is the defect option: life is preferred, but it is far more important to ensure that the other dies.
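For readers who want the distinction spelled out, here is a minimal sketch with made-up payoff numbers (nothing below comes from the thread): in the devoted-pair framing the game is a coordination game with two equilibria, one of them the sub-optimal ‘cremation’ equilibrium, whereas in the spiteful framing signing up is a dominant strategy and the game really is a Prisoner’s Dilemma.

```python
# Two made-up 2x2 games for the pair-cryo situation.  Actions: "cryo" = sign
# up, "die" = don't.  payoff[(a, b)] is the payoff to the player choosing a
# when the other player chooses b; both players use the same table.
from itertools import product

def pure_nash(payoff):
    """Return the pure-strategy Nash equilibria of a symmetric 2-player game."""
    actions = sorted({a for a, _ in payoff})
    equilibria = []
    for a, b in product(actions, repeat=2):
        a_best = all(payoff[(a, b)] >= payoff[(x, b)] for x in actions)
        b_best = all(payoff[(b, a)] >= payoff[(y, a)] for y in actions)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

# Devoted pair: together > dying > waking up alone ("extrovert hell").
devoted = {("cryo", "cryo"): 3, ("die", "die"): 1,
           ("cryo", "die"): 0, ("die", "cryo"): 1}
# Spiteful pair: living is nice, but the other's death matters far more.
spiteful = {("cryo", "die"): 5, ("die", "die"): 3,
            ("cryo", "cryo"): 1, ("die", "cryo"): 0}

print(pure_nash(devoted))   # [('cryo', 'cryo'), ('die', 'die')] -- a coordination
                            # game; the all-die outcome is the avoidable bad equilibrium
print(pure_nash(spiteful))  # [('cryo', 'cryo')] -- signing up (defecting) dominates
```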
If my loved ones signed up for cryonics, that would be reason enough for me.
What a horrendous case of prisoner’s dilemma...
Not really. If any of my loved ones were at all interested in cryonics, then we could discuss it and choose to sign up together. In the Prisoner’s Dilemma, you don’t know what your counterpart is doing.
I also usually avoid the topic due to its iffiness, and I am not signed up myself yet, so it’s basically armchairing anyway.
I am starting to wonder if there needs to be more of a recognized social niche for cryo supporters who aren’t signed up themselves (or whose arrangements simply are not public).
My niche is young people with little money living in Europe.
To sign up I need to a) make money—which will happen soon, b) figure out the necessary arrangements for Germany regarding transport, legal matters and whatnot, and c) get the paperwork.
After writing the earlier comment I got another reality shock about how stupid it is that society at large doesn’t jump on the longevity issue. Way worse than smoking....
I’m not even interested in being revived from a coma after several years, using only contemporary technology.
Me neither. I would like to write a clause that I am awoken only if a living relative feels like they need me. This may seem like a cheat, because it’s very unlikely that a child or a grandchild won’t want to revive me, but the truth is, I would be content to leave it in their hands. There is no value to my life beyond my immediate network of connections. If I am awoken in 200 years to a world that doesn’t know me, I might as well be someone else, and I don’t mind being someone else. There’s no difference between my experience of ‘I’ and the one that will develop in some number of years in a newly born baby.
There is no value to my life beyond my immediate network of connections.
That is the saddest statement I have read this whole week.
Indeed. It always amazes me how successful the meme of self-sacrifice has become at persuading otherwise intelligent people into embracing even the most extreme forms of self-abnegation.
For my part, I’ll stick with enlightened self-interest as the foundation of my values and self-worth. It isn’t perfect, but at least it isn’t going to lead me into elaborate forms of suicide.
It always amazes me how successful the meme of self-sacrifice has become at persuading otherwise intelligent people into embracing even the most extreme forms of self-abnegation.
It sometimes amazes me (but only when I forget about evolutionary psychology, which easily explains it) how successful the meme of self-interest has become at persuading otherwise intelligent people that their life has more value than another’s. (Say another intelligent person’s, to head off one common rationalisation.)
Edit: This paragraph seems to have been confusing. It is somewhat facetious. To be sincere, it should say ‘[…] persuading otherwise intelligent people that it is unintelligent not to value one’s own life more than another’s.’.
at least it isn’t going to lead me into elaborate forms of suicide
I see no elaborate forms of suicide proposed here. But of course I would sacrifice my life for another’s, in some situations. (Or at least I think that I would; my evolutionary heritage may have more to say about that when the time comes.) Already I have had occasion to sacrifice my safety for another’s, but so far I’m still alive.
Actually, I’m not really an altruist. But I don’t pretend that my selfishness has a rational justification.
It sometimes amazes me (but only when I forget about evolutionary psychology, which easily explains it) how successful the meme of self-interest has become at persuading otherwise intelligent people that their life has more value than another’s. (Say another intelligent person’s, to head off one common rationalisation.)
It sometimes amazes me how often commenters on LessWrong (who really should know better if they’ve read the sequences) commit the mind projection fallacy, e.g. by assuming that “value” is a single-place function (“value(thing)”) instead of a two-place one (“value(thing, to-whom)”).
I meant for the otherwise intelligent person in question, of course. Sorry for the confusion.
By the way, I interpreted ewbrownv’s comment in precisely the same vein.
I don’t think you understand me. You said:
persuading otherwise intelligent people that their life has more value than another’s
implying that it is wrong to define one person’s life as having more value than another’s. I was pointing out that this is the mind projection fallacy, because things do not have value. They only have value to someone. Thus it is perfectly sane to speak of one’s life as having more value [implied: to one’s self] than another’s.
Yes, of course it is!
And it is equally sane to speak of one’s life as only having value in its relation to others.
My comment was a reply to the comment to which it was a reply; it does not make sense out of context.
Edit: I have edited the comment in question to be more clear.
What if a living relative just misses you and would like to have you around?
For me, that time comes when the people that I care about are no longer around and the things that interest me are no longer current.
As long as the internet is around, you will be able to find people with your interests. It doesn’t matter how outdated they are.
Besides, why not sign up for cryonics on the off chance that you will like the future? You can always change your mind. Unless they outlaw suicide, and can effectively stop it, in the future. Which doesn’t seem that unlikely considering we’re assuming they’re willing and able to revive your body just because they can.
Besides, why not sign up for cryonics on the off chance that you will like the future?
Because it costs thousands of dollars (a price which reflects its cost in resources). For me, that’s a large amount of money. I don’t spend it on off chances.
I’m not even interested in being revived from a coma after several years, using only contemporary technology. Certainly I don’t consider it worth the expense [writes TobyBartels].
byrnema agrees in a sibling to this comment, and I agree, too.
ADDED. Sewing-Machine agrees too though he refers to a 20-year coma rather than a coma of several years.