Female Test Subject—Convince Me To Get Cryo
I heard that women are difficult to convince when it comes to signing up for cryo, and when it comes to mentioning cryonics to a dying person, the consensus seems to be that it’s not going to happen. I encountered a post, Years saved: Cryonics vs VillageReach, which addressed my main objection (that the money spent on cryo may be better spent on saving starving children, especially considering that for that amount you could save multiple children with high probability, whereas paying for cryo saves only one life with low probability). Now I’m open to being persuaded.
My first instinct was to go read a lot about cryo, but it dawned on me that there are a lot of people here who will want to convince family members, some of them female, to sign up—and these people may appreciate the opportunity to practice on somebody. It has been argued that “Brilliant and creative minds have explored the argument territory quite thoroughly,” but if we already know all of the objections and have working rebuttals for each, why is it still thought of as extra difficult to get through to women? If there were a solution to this, it would not be seen as difficult. There must be something that pro-cryo people need for persuading women that they either haven’t figured out or aren’t good enough at yet.
So, I decided to offer myself for experiments in attempting to convince a woman to sign up for cryo and took a poll in an open thread to see whether there was interest. I don’t claim to be perfectly representative of the female population, but I assume that I will have at least some objections in common with them and that persuading me would still be good practice for anyone planning to convince family members in the future. Having a study on persuading women would be more scientific, but how do you come up with hypotheses to test in such a study if you have no actual experience persuading women?
So, here is your opportunity to try whatever methods of persuasion you feel like with no guilt and explore my full list of objections without worrying about it being socially awkward (I will even share cached religious thoughts, as annoyed as I am that I still have them), and I will document as many of my impressions and objections as I can before I forget them.
I am putting each objection/impression into a new comment for organization. Also, I have decided to avoid reading anything further on cryo until/unless it is suggested by one of my persuaders.
Well, have fun getting inside my head.
I was surprised to see that the most relevant objection for the vast majority of people wasn’t mentioned. It is conspicuously absent, in fact: social norms.
The social norms against cryo are so strong that almost no one even remotely considers it. This is almost everyone’s true rejection.
When people say it’s extra-hard to convince women, I think they’re misattributing the source of difficulty. It’s very hard to find people who are so blind to (or resistant to) social norms (take your pick of connotation :) ) that they’re willing to consider the merits of cryo. For whatever reason it seems easier to find males who are so blinded/fortified than females. I would wager that it’s the same reason that the gender distribution of LW skews very male.
Perhaps the most effective argument for getting most people to sign up would be “This is why you may safely ignore social conventions in this case,” with little or no attention given to the merits of cryo and almost all the effort put into convincing the subject that the social costs will be minimal.
Ooh good observation. It can be so much harder to notice things that aren’t there.
The answer to why I didn’t make a social norm objection is simple: I don’t have to tell anyone who won’t understand. It’s not like anyone is going to publish my name in the newspaper.
Interesting that they don’t appear to realize this. Maybe the difference is that if you’re talking to people in a non-anonymous context where others are overhearing, they will appear wary of cryo for social reasons, but I can’t help but wonder if they then go away and think about it on their own, privately considering its merits. After all, this is life or death, right?
Maybe the only thing that you have to do to overcome this is to tell people it can be done privately (I’m only assuming that it can be; can it?) and to present cryo to them when nobody else is around.
Or you could open the cryo discussion with something to the effect of “If everyone else were jumping off a cliff, would you do it just because they were?” If no, which is likely, then: “If there was something that could keep you from dying but it wasn’t popular, would you say it was jumping off a cliff with them if you would not even consider it?” If yes, then: “Cryo could stop you from dying. It isn’t popular, but would you consider it anyway?”
That pits an even more socially unacceptable thing, being such a sheep that you die, against something that can’t possibly be as unacceptable since it doesn’t require you to knowingly make a decision which leads to your own death. Unless survivor’s guilt is prevalent, in which case the irrational notion “But I shouldn’t kill everyone else by surviving!” trumps “I can’t jump off a cliff like an idiot.”
Survivor’s guilt (resolved objection):
Viliam Bur suggested survivor’s guilt, and I realized that I was experiencing survivor’s guilt while imagining getting cryo.
I wonder if women experience stronger survivor’s guilt than men. Testosterone supposedly makes one more selfish. Women are known for altruistic acts (many of which are pathological, like the phenomenon where women will often stay with an abusive partner trying to love him into changing), possibly because of some differences with oxytocin. I bet there’s a connection here between hormonal differences and survivor’s guilt that might explain the extra difficulty in convincing women.
Seeing that survivor’s guilt didn’t seem rational, I became curious about it and introspected for a moment. It seems to be resolved. I documented my thinking process:
I have thought of a question to ask myself that may get rid of it:
“Imagine that there are three people who I really want to see live. By random chance, something happens outside their control and two of them die but one of them lives. Do I feel happy that the one person lived? Or do I feel like they should die?”
My feeling is that they definitely should not die.
Now, I also feel compelled to try this:
“Imagine three people I don’t like, but who I don’t think deserve to die. Same scenario, one lives.”
My feeling is that I prefer they do not die.
Now I’m asking: “If it were more fair to the other two, would I have wanted the survivor to die along with them?”
No, I’d have tried to save them, and if the other two wanted to see the person die for “fairness,” that’s just crazy.
Okay, so now I’m asking myself:
If I was in that situation where two of the same people died but I survived by chance, would I feel it was crazy to think it was unfair for me to survive?
Yes, that is laughable now.
Something in me feels compelled to ask: “Were you better than those two other people?”
My answer is: “Who chose whether they died?”
Ah! Now this is separated. I have separated myself from the cause of their death. I had to see that I was not at fault for this.
The obvious question then is: “If I get cryo and survive while most other people die, what is the cause of their deaths?”
Answer: All the causes. I can’t stop them all. But I can tell more people about cryo and I can try to stop my own death, and this is good. That’s the best that I can do.
Now, I have this warm feeling like my guilt is alleviated, like saving my own life isn’t an affront to them, but something they would think was good—just as I thought it was good that one person survived when two died.
Okay, I think I figured out how to hack survivor’s guilt, at least, as it applies to me. I will update here if the guilty feeling returns.
Now onto my other objections… (:
If I were to make a prediction for an experiment, I would guess no, because men are conditioned to see themselves as more expendable. I’m guessing that the same norms which led to more women in steerage class making it off the Titanic alive than men in first class would lead to men having stronger survivor’s guilt than women.
The Titanic was an exception. Slate.com had a link to the study itself (I think).
That men feel expendable is an interesting idea, but that sounds like more of a cultural pressure having to do with the military or women being capable of pregnancy than an instinct. The hormonal differences, on the other hand, are unavoidable and internal. I wonder which is stronger and whether anyone has done research on whether women are more self-sacrificing. (Not seeing anything from my searches.)
It may or may not be instinctual, but then, there are probably some rather strong selective forces which have encouraged men to be more cavalier with their lives than women. Even if it’s cultural, it’s a cultural value that’s reinforced quite consistently.
It’s reinforced by a lot of talk. Historically, men do not save women in shipwreck situations. This is information that would be pretty surprising based on your previous beliefs. Shouldn’t it change your mind?
It’s somewhat surprising, but then, men can still be significantly more prone than women to consider themselves expendable, and still outsurvive women in shipwrecks if both genders tend to be non-self-sacrificing enough for the situations to devolve to “every man for himself.” For purely physical reasons, men are more likely to make it out of a panicked crowd alive. I’m a bit surprised that the Titanic scenario was as exceptional as it was, but I would not necessarily have predicted that relative rates of self-sacrifice would dominate survival rates.
If a reliable study were to find that women are as or more likely to risk or sacrifice their lives to save non-progeny compared to men, it would certainly be sufficient to change my mind.
I’d say the shipwreck data reinforces it: in the circumstances where heroism is least observable and where death is most likely (reducing the potential reward and increasing the incurred risk), we see less peacocking. If the relationship ran the inverse direction—the more the reward and the less the risk, the less risk-taking—that’d be pretty strange and hard to reconcile with the Baumeister paradigm.
“Female” is probably not the most relevant descriptor here. “Nerd on Lesswrong Test Subject” would perhaps be more representative. Or “Epiphany Test Subject”. If ‘female’ comes into it, it should be qualified as “Female Lesswrong Participant”, with that second part conveying more information relative to the population at large than your sex.
When I realized cryo is real (documentation): About a year ago, I went on a date with someone who had signed up for cryo. I remember asking him whether it was expensive, and he told me that his health insurance paid for it. My feeling was “Oh, you can actually do that? I had no idea.”—and it felt weird because it seemed strange to believe that freezing yourself is going to save your life (I didn’t think technology was that far along yet), but I’m OK with entertaining weird ideas, so I was pretty neutral. I thought about whether I should do it, but I wasn’t in a financial position to take on new bills at the time, so I stored that for later.
My guess is he said (or meant to say) “life insurance” rather than “health insurance”. I don’t think there’s health insurance that covers cryonics. The idea that freezing yourself will save your life is indeed a weird one that should be carefully researched before you adopt that position. As you probably realize by now, cryonics involves (an attempt at) vitrification of the brain, which means that unlike normal freezing, ice crystals are (at least in ideal cases) prevented from forming.
Highly concentrated cryoprotectants must currently be used, and this does significant damage which needs to be repaired later. Thus it’s a conditional bet about scientific unknowns—if technology reaches a certain level, having my brain vitrified may turn out to save it well enough that science can restore me to a healthy existence (which may or may not be all digital). Most cryonics advocates do not take the hard line of belief that it definitely will save their life, but that it presents a good enough chance to be worth it given the sum of current scientific knowledge.
In my opinion, the chance of it working must exceed something in the range of 1% to be reasonable and not considered quackery. My reasoning is that the cost is in the $50k range ($28k-$150k), whereas actuaries budget somewhere in the range of $5M towards saving human lives in matters of public safety. Spending $50k on a procedure with a 0.01% chance of working is only for rich egoists and/or people who assign a much higher value to the longevity and self-improvement opportunities of the future. Go too much lower than that and you end up with a “Pascal’s wager” kind of scenario, which could conceivably justify all kinds of quackery. In any case, I think it is safe to say that if the chance is greater than 1%, it is something that everyone should have access to, and should ideally be covered by medical insurance.
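To make that threshold concrete, here is a minimal back-of-the-envelope sketch in Python (the dollar figures are just the rough ones from this comment, not authoritative data):

```python
# Back-of-the-envelope check of the ~1% threshold argued above.
# All figures are the rough ones from this comment, not authoritative data.

cryo_cost = 50_000          # all-in cost, somewhere in the $28k-$150k range
value_per_life = 5_000_000  # rough actuarial budget per life in public safety

# Break-even revival probability: the point where the expected cost per
# life saved by cryo matches the actuarial budget per life.
break_even = cryo_cost / value_per_life
print(f"Break-even chance of working: {break_even:.1%}")  # -> 1.0%

# At a 0.01% chance, the implied cost per expected life saved balloons,
# which is why that regime starts to look like Pascal's-wager territory.
p = 0.0001
print(f"Cost per expected life at p = {p:.2%}: ${cryo_cost / p:,.0f}")  # -> $500,000,000
```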
The chance of it working seems to be much higher than that, in the average person’s mind. But then, average people often accept all kinds of weird ideas so that’s probably not the best metric available to us. How scientists (especially those with relevant expertise) feel about it is the major question. I would be curious as to what a survey of scientists with relevant expertise would turn up. What is disturbing to me (and what turned me from a fairly neutral party into something of an activist) is how unimportant the topic seems to be treated by both the scientific community and the nonscientific world. This should be hotly debated, not dismissed out of hand.
I suspect social causes are a dominating one, and I suspect women on average may have a better grasp on the social causes than men on average. So my plea to females (since that’s the point of this thread, coming up with more female-appealing arguments) would be to at least try and understand this from the perspective of advocates and why we are passionately in favor of it. Read Kim Suozzi’s description of her reasoning—it is a logical step to take when you don’t feel you are done living and think science is likely to conquer the problems involved.
As to the creepiness of freezing people, well, while a negative visceral first reaction is understandable, there’s nothing about it that is any creepier than what emergency and surgical medicine already entails, and more science is (usually) a good thing for humanity. We’ve been shipping organs on ice and transplanting them for decades, and we’ve reanimated stopped-heart “dead” patients for even longer.
Another reason might be that it seems like “mad science”. Mad science as seen in fiction is ambitious (which cryonics also is) but it is also cruel and morally indifferent. This is where the chance of it working is important, because if there is a sufficiently good chance of it working, cryonics becomes something that compassionate people are motivated by, not just egoists.
However even if the chances are too low for compassionate motives to come into play much, there does not appear to be any reason to regard it as cruel, since patients are completely unconscious (some of the drugs perfused in the legally dead patient are strong anesthetics) before they are cooled. And it is something patients choose for themselves rather than having it forced upon them.
Yes, he said life insurance. Typo, sorry.
I don’t know if there’s any way of telling what the real probability of revival is. Do you know of a good source on this?
Well I got that part right at least. (:
It’s true that I don’t know why you’re passionately in favor of it. I know that Eliezer is passionately in favor because he lost his brother. That makes sense to me. Considering my concerns about waking up as a horror, and the fact that I don’t have any family members that are signed up for cryo who will miss a chance at interacting with me in the future if I don’t sign up, that simply doesn’t apply in my case.
I don’t know where that is. Do you?
It’s not creepy to me anymore. It was depicted as creepy in the cartoon, though—there were all these rows of really ugly alien looking bodies and some ominous music was playing and the children were theorizing about what they were and they realized they were dead.
Being frozen isn’t any creepier than being buried. My body has to go somewhere after it dies. Actually, I think this is less creepy—it’s a lot cleaner. No worms or anything.
I’m probably unusually accepting here. I have had a lot of fun doing things like touring a particle accelerator and hanging out with “mad scientists” in labs. I love it.
I don’t know how I got this way but I’m thinking it has to do with realizing that the “mad scientists” come up with awesome stuff sometimes.
Trying to live forever is associated with evil (religious cached thought):
I’m not religious, but was raised Christian. Annoying as this is, I still find religious cached thoughts sometimes. I don’t want to keep them—I’m sharing for the sake of documenting all the thoughts that are being triggered while I make my decision. Thinking about signing up for cryo triggered this:
My cached thought is associating living forever with being tempted by the devil, and seeing it as a thing that only sinful people would do.
I realize that I would not be guaranteed everlasting life. Even if I was revived, I expect it would be for a much shorter time than “forever”. That wouldn’t change the fact that I’m mortal or circumvent the threat of hell. I’m not sure where the sense of defiance comes from. I suppose it would defy the current way of things but expecting life forms to just shut up and die is silly.
I don’t see why extending your life would have to qualify as sinful. It just makes sense.
Thanks for posting that—I wasn’t raised Christian, and that objection never would have occurred to me. Do you have a feeling for whether it might be a common Christian objection? The Christian objection I’ve heard is that great longevity means putting off going to Heaven. I’ve never heard a Christian say that great longevity increases the odds of repenting and avoiding Hell.
My exposure to anti-longevity/immortality thoughts are from science fiction and fantasy, which doesn’t just have a wide streak of “you’d need evil methods” (see also Bug Jack Barron, in which it takes killing poor children for something from their glands), but a very strong streak of “if you were immortal, you wouldn’t like it”. You’d be bored or you’d go mad. I think it’s sour grapes.
During a long life a Christian may repent many times, and sin again many times. Whether they go to Heaven or Hell depends on when they die. Suicide is a mortal sin because otherwise they’d kill themselves after repenting. So they do the next best thing, and repent when they think they’re going to die. Confession and absolution on a deathbed are standard. Conversion and baptism on a deathbed are known to happen.
Common Christian objections (guesses, as I am no longer a Christian) and rebuttals (within the Christian religious framework, as it’s not always feasible to convince them to be Atheists):
1.) You’re trying to get something that’s forbidden. (Life is important, so God must control it, if you were supposed to have more, you would already have more. Therefore trying to get more should be viewed as bucking a limitation.)
Rebuttal: If you attribute other medical breakthroughs to God, how do we know God didn’t give this to us, too?
2.) Only God should decide when you die. (He forbids you from living longer except at his discretion.)
Rebuttal: Why should I believe that a loving God expects me to just shut up and die?
3.) You’re making a deal with the devil. (Because only God should decide.)
Rebuttal: Nobody asked me for my soul or to do anything evil to sign up for cryo. The ten commandments don’t tell me not to. In fact “You shall not murder.” may be interpreted as an obligation to continue your own life wherever possible, otherwise you’re knowingly choosing to die when it isn’t necessary, thereby “murdering” yourself. I see no evidence that this is temptation by the devil.
4.) You’re tinkering with the sacred.
Rebuttal: If life is sacred, and saving lives is an option, isn’t it worse to fail to do everything you can to save lives, even if your attempts are somewhere between not perfect and horribly incompetent at first?
That’s a really good argument. If Christians want Atheists to come around, shouldn’t they hope we live longer so we have a better chance of finding some reason to believe in God? I’m not religious, and I really doubt any Atheists will “come around”, but I think this would work as an argument.
+1 Karma
You might say living forever on earth is associated with being tempted by the devil. But the fundamental (it seems to me) temptation offered by Christians in trying to sign up new members and keep old ones is the promise of eternal life in heaven. Indeed, many retail Christian outlets declare you will get an eternal life no matter what you do, and the reason to sign up is so that your eternal life isn’t an eternity of torture.
Just interested in pointing out that “eternal life” is not something Christians typically run from.
Funny, I was aware of this meme in Western culture but I never associated it with religion. (I was raised mostly secular, modulo a little residual Catholicism in my family.) Immortality often shows up as a goal in media, but almost exclusively as a villainous one: heroes accept their fate, villains fight against it. Often the methods of obtaining immortality lean towards the cartoonishly evil (the mythical version of Elizabeth Bathory bathing in virgins’ blood; Lord Voldemort’s horcruces), but just as often they’re fairly benign and the pursuit itself is seen as hubristic and therefore evil. At best, a hero (Gilgamesh, say) will pursue it for a while before learning better, but this is actually pretty rare.
This seems to tie into another thought of mine about how villains and heroes get constructed in our culture, but that’d be a bit of a sideline in this context. I don’t think I’m familiar with the construction of immortality in a Christian context, though, aside from incredibly esoteric stuff like medieval alchemy; can you tell me more?
Yeah, you know what, why is immortality portrayed as evil in all of these different places? There must be some specific spot in the bible, but I can’t recall it. Maybe it isn’t even from the bible. Now I’m really curious to find out exactly where this cultural association between immortality and evil came from...
The closest the Bible gets, as far as I remember, is the bit in Genesis about the Tree of Life, and that’s pretty ambiguous. It’s been a while since I’ve read it, though.
I’m not actually sure, but I think this is mainly a hubris thing. For whatever reason, there’s a fairly well-defined set of activities in our culture that are thought of as outside the proper domain of humanity; this might have gotten its start in a religious context, but it’s certainly not limited to that anymore. (Consider “frankenfoods”.) Seeking immortality’s on that list, along with playing with the building blocks of life or, worse, creating new life; doing any of these things seems to be considered usurping the role of God or nature, and therefore blasphemous or at least very close to it. This is, of course, nothing new.
Where we get that list from is another question. I don’t think it’s purely Christian; cautionary tales about immortality go back at least to the Epic of Gilgamesh, although as far as mythological treatments go I think the Cumaean Sibyl’s has more punch.
I never read Gilgamesh as a story against immortality. On the contrary, it is a tragedy that Gilgamesh loses the flower of immortality that he has brought back. The gods in this story are enemies who keep immortality for themselves.
Lol somebody ate an apple once, now we’re not allowed to live forever.
Even if that was real, I don’t see cryonics as a means of living forever. Forever is a long time. There’s no guarantee of that.
Now that’s interesting. I wonder if that might actually be more of an instinct to avoid screwing up important things, or just common sense, than something that’s religious. Even if it has been codified in religion, might it have originally stemmed from a sense of not wanting to screw up something important? It’s true that we are flawed and that whenever we attempt to do something ambitious, there is a risk of horribly screwing things up (e.g., communism). There can be unintended side effects: X-ray technicians used to X-ray their hands every morning to make sure the machine was warmed up. You can imagine the horror they encountered years later...
I think we’re right to have a sense of trepidation about messing with life and death. It’s a big deal, and we really could gravely screw something up, there really could be unexpected consequences.
New objection: Unexpected Consequences
Living forever isn’t quite impossible. If we ever develop acausal computing, or a way to beat the first law of thermodynamics (AND the universe turns out to be spatially infinite), then it’s possible that a sufficiently powerful mind could construct a mathematical system containing representations of all our minds that it could formally prove would keep us existent and value-fulfilled forever, and then just… run it.
Not very likely, though. In the meantime, more life is definitely better than less.
Let me ask you this. Somebody makes a copy of your mind. They turn it on. Do you see what it sees? Someone touches the new instance of you. Do you feel it?
When you die, do you inhabit it? Or are you dead?
Depends on your definition of ‘you.’ Mine is pretty broad. The way I see it, my only causal link to myself of yesterday is that I remember being him. I can’t prove we’re made of the same matter. Under quantum mechanics, that isn’t even a coherent concept. So, if I believe that I didn’t die in the night, then I must accept that that’s a form of survival.
Uploaded copies of you are still ‘you’ in the sense that the you of tomorrow is you. I can talk about myself tomorrow, and believe that he’s me (and his existence guarantees my survival), even though if he were teleported back in time to now, we would not share a single thread of conscious experience. I can also consider different possibilities tomorrow. I could go to class, or I could go to the store. Both of those hypothetical people are still me, but they are not quite exactly each other.
So, to make a long story short, yes: if an adequately detailed model is made of my brain, then I consider that to be survival. I don’t want bad things to happen to future me’s.
Actually trying to live forever (“saving your soul”) is the central stated point of religions such as Christianity and Islam.
Religious opposition to cryonics could stem from the fact that cryonics is perceived (correctly, IMHO) as a competing religion. Note that there is no strong religious opposition to most other procedures that promise a lifespan extension.
Huh. That is such a simplistic way of viewing religion. I think you’re right in a sense—that it may very well threaten religions by providing an alternative for a key reason people become religious. However, I think most religious people I know (I’m not one, so I am guessing at their reasoning) would object to this, saying that there is a lot more to religion than that, and that if a person is in it only to go to heaven, they’re being superficial and not really “getting” it. For that reason, I think they’d say that they don’t categorize their religion as a religion because it promises to save your soul, and they probably wouldn’t categorize cryonics that way either.
Indeed there is much more to religion than saving your soul, but that’s a major point in Christian and Muslim preaching.
The difference between saving the soul and extending life is that saving the soul means preserving it to live in a particular way (i.e. the imago Dei). Extending life is neutral with regard to how you live it.
Brain upload? Imago FAI? Come on, it’s the same sort of stuff, just with supernatural miracles replaced by technological ones.
But the cryo people aren’t prescriptive about what imago FAI looks like, that’s the point. They’ll give you more life, but they won’t tell you how to live it. Whereas religion doesn’t change your material circumstances but is very emphatic about how you should live with them.
“Imago FAI” is a serendipitous coinage. It sounds like what I had in mind here, when I talked about the mature form of a friendly “AI” being like a ubiquitous meme rather than a great brain in space. If a civilization has widely available knowledge and technology that’s dangerous (because it can make WMDs or UFAIs), then any “intelligence” with access to dangerous power, needs to possess the traits we would call “friendly”, if they were found in a developing AI. Or at least, empowered elements of the civilization must not have the potential or the tendency to start overturning the core values of the civilization (values which need not be friendly by human standards, for this to be a condition of the civilization’s stability and survival). It implies that access to technological power in such a civilization must come at the price of consenting to whatever form of mental monitoring and debugging is employed to enforce the analogue of friendliness.
Cryonics itself makes no moral prescriptions. You can consider it as a type of burial ritual.
But rituals are not performed in isolation, they are performed in the context of religions (or religious-like ideologies, if you prefer) that do make moral prescriptions.
Cryonics typically comes in the transhumanist/singularitarian ideological package, which has a moral content.
This is speculation: I’m not a Christian.
In Christianity, death brings the judgment of God who sends you to heaven or hell (or purgatory).
If you expect heaven, you don’t want to put off death. Suicide is a sin but as long as you don’t see non-cryonics as willful suicide, you would want to die early to get to heaven early.
If you expect hell, then you think you’ve sinned mortally. Most brands of Christianity allow for redemption by various means. If you think you’re a sinner, trying to put off death means trying to avoid the judgment of God, which is both just and good; so struggling against it would make you evil. If you fear hell, instead of focusing on avoiding death, you would focus on expiating your sins in order to go to heaven.
In addition, some but not all brands of Christianity have the meme that this world is impure, and one should abstain from it, and not be attached to it. Trying to live longer than is natural is attachment to the profane; one should instead spend their lives thinking of God, praying, abjuring the pleasures of the flesh, etc. in order to obtain heaven.
Hypothesis: Religious people (or at least Jews and Christians, which are the religions I’m most familiar with) tend to say that life and death are ultimately in the hands of God/G-d. I suspect this is a way of avoiding survivor’s guilt, though both groups are generally in favor of medicine.
From memory: a news story about a conference on medical ethics where the Orthodox Jews were the only ones in favor of life extension.
I suspect that any religion with a vividly imagined heaven has to have rules against suicide, or else the religion won’t survive. It’s plausible to me that the revulsion against life extension is a mere side effect of the rule against suicide.
This seems strange, I would think an aversion to suicide would make people more pro-life extension.
My hypothesis is that the rule (life and death are in the hands of God) was instituted when suicide was available and life extension wasn’t. Life is in the hands of God wasn’t really relevant, it was just thrown in to make God sound more benevolent (so that He isn’t just killing people) and more powerful.
Hmm. Most of these seem to ignore the fact (not saying YOU are ignoring the fact, but that the religion would have to be ignoring the fact) that there are reasons to extend life that have nothing to do with heaven and hell.
It’s interesting that you mention “trying to live longer than is natural is attachment to the profane”—this strikes me as more Buddhist, but I could see Christians believing that, too. However, if cryo is attachment to the profane, so is eating healthy and exercising. Heck, so is eating at all. I am so glad I’m not religious. It causes such horrible cognitive dissonance to harmonize these types of beliefs with other information I have about life.
Yes—hence the idea of religious fasting. The Catholic and Orthodox Christian traditions consider “mortification of the flesh” to be holy, and luxuries of the flesh (enjoying eating, sex, and bodily sensations in general) to be wicked or at least a dangerous temptation.
I’m signed up for cryo and I don’t want to convince you.
This topic has been discussed to death, both here and elsewhere online. Do you think you’ve brought up any arguments that haven’t been discussed before? Replying to these objections is a waste of time.
In general, “convince me” posts are a bad idea. You’ve got a brain. You’ve got a computer. You’ve got a search engine. Use them. Convince yourself.
That was my first instinct, but then I remembered that there was a consensus in another thread that women are impossible to convince. In that thread, the poster wanted to convince his mom to sign up for cryo but didn’t know how to. A lot of people here might want a chance to figure out how to convince women to get cryo. So instead of convincing myself, I gave them an opportunity to practice on me.
I have no way of knowing that, seeing as how I avoided convincing myself so that other people could experiment on me. I am open to reading articles that people feel are convincing, as I realize that it would be pretty boring to explain the same stuff all over again. It says that in the OP.
It was a rhetorical question. You do have a way of knowing that you haven’t thought of anything new: The idea of cryonics has been around for over half a century. Brilliant and creative minds have explored the argument territory quite thoroughly. You should expect to bring nothing new to the table.
Rant mode engaged.
Your post won’t help us learn how to convince women to sign up for cryonics. The sample isn’t random, and it’s certainly not big enough to draw any useful conclusions from. We’ll just replay some tired replies to some tired objections. At best, it will teach us how to convince Epiphany to sign up.
Most importantly, is there any other area of debate where we use different arguments to convince women? It would be bizarre. This is especially true for a topic like cryonics, where “convincing” mostly involves fielding objections. If you want to convince people, then learn about the topic. When someone brings up a specific objection, you can use your knowledge to construct a reply that’s convincing, informative, and true. It works no matter one’s gender.
Rant mode disengaged.
You seem to be ignorant of what values are. From the point of view of a rationalist, they are axioms, and slippery ones at that, since they are elucidated by the individual introspecting his (or her) own emotional reactions to various theoretical situations.
Arguments to convince someone to DO something are tailored to fit the individual being convinced.
Trivial examples of using different arguments to convince women vs men (on average) include arguments to see a particular movie (chick flick vs boobsploitation or violence).
Also, if you guys have already figured everything out, then why is convincing women perceived as extra hard? Obviously something is missing. That element might be anything from not knowing all of the objections women will make, to not having good enough persuasive skills, to a seemingly unrelated difference between the genders (maybe women don’t read as much about technology, or they go to doctors more often and have learned more about the flaws in medical technology, leading to distrust). But without opening up a line of communication about it, and experimenting to see what kinds of ideas emerge, how are you ever going to make testable guesses about what the missing piece(s) is/are?
Having a detailed map doesn’t mean that a particular route isn’t going to be arduous and fraught with potential missteps that send you down a cliff.
All the more reason to practice on me, then.
Am I correct in reading you to be saying that it’s pretty much a clear case in cryonics’ favor?
If you were to die in a month, with sufficient warning to line up deathbed cryosuspension and all, how likely do you see some form of revival?
Why would you have thought I would have known that?
All I know is that I wasn’t convinced, and people didn’t know how to convince women, and a bunch of people voted in my poll that they thought this was a good topic idea.
You really don’t think anyone here is interested in getting practice? Just about everyone here has family members. I imagine they’ll want them to survive.
Another objection that I don’t see below: it’s pretty unlikely to work. Many things in series have to go right in order for you to get revived. Proponents who take the time to consider what could go wrong come up with chances of success like 1 in 7 to 1 in 435 and 1 in 17.
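To make the “in series” arithmetic concrete, here is a minimal sketch; the stage names and probabilities are made-up placeholders for illustration, not the figures from the linked estimates:

```python
# Overall revival probability as a product of independent stages that
# all have to go right. These numbers are illustrative placeholders only.

stages = {
    "standby team reaches you in time": 0.8,
    "preservation good enough (no info-theoretic death)": 0.5,
    "cryonics org survives until revival is possible": 0.6,
    "civilization survives and revival tech is developed": 0.5,
    "someone actually funds and performs your revival": 0.7,
}

p_revival = 1.0
for stage, p in stages.items():
    p_revival *= p
    print(f"after '{stage}': {p_revival:.1%}")

print(f"Overall: {p_revival:.1%} (about 1 in {round(1 / p_revival)})")
# Even mildly pessimistic per-stage numbers compound quickly, which is how
# estimates spanning 1-in-7 to 1-in-435 can all be defensible.
```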
This depends heavily on assumptions. Consider this: the oldest cryonics patients have survived more than 30 years. The loss per decade for reasonably well-funded cryonics organizations is currently 0.
If you check a chart of causes of death, the overwhelming majority of causes are ones where a cryonics team could be there.
You would have to choose a legal method of suicide in some of these cases, however (like voluntarily dehydrating yourself to the point of death), or your brain would deteriorate from progressive disease to the point of probably being non-viable for a future revival.
As for long-term risks: ultimately these depend on your perception of risks to human civilization and the chance of ultimately developing a form of nanotechnology that could scan your frozen brain and create an emulation at minimal cost. I personally don’t think there are many probable events that could cause civilization to fail, and I think the development of the nanotechnology is almost certain. There is no future world I can imagine in which some commercial or governmental entity would not eventually have extreme levels of motivation to develop the technology, due to the incredible advantages it would grant.
This is my personal bias, perhaps, but let’s look at this a bit more rationally.
a. How could a civilization-ending event actually happen? Are nuclear escalations the most likely outcome, or are exchanges ending with a city or two nuked more probable?
b. What could stop a civilization from developing molecular tools with self-replication? Living cells are an existence proof that the tools are possible, and developing the tools would give the entity that possessed them incredible power and wealth.
c. Cryonics organizations have already survived 30 years. Maybe they need to survive 90 or 120 more. They have more money and resources today, decreasing the probability of failure with each year. What is the chance that they will not be able to survive the rest of the needed time? (A rough compounding sketch follows below.) In another 20 years, they might have hardened facilities in the desert with backup power and liquid nitrogen production.
And so on. This is a complicated question, but I have an educated hunch that the risks of failure for cryonics are lower than many of the estimates might show. I suspect that many of the estimates are made by people who suffer from biases towards excessive skepticism, and/or are motivated to find a way to not spend hundreds of thousands of dollars, preferring shorter term gains.
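To put rough numbers on point (c), here is a minimal sketch; the per-decade failure rates are free parameters I made up, not data about any actual organization:

```python
# Compounding organizational survival over the remaining wait (point c).
# The per-decade failure rates below are made-up free parameters, not data.

decades_remaining = 12  # ~120 more years, per the guess above

for failure_per_decade in (0.01, 0.05, 0.10):
    p_survive = (1 - failure_per_decade) ** decades_remaining
    print(f"{failure_per_decade:.0%}/decade failure -> "
          f"{p_survive:.0%} chance of lasting {decades_remaining} decades")

# 1%/decade  -> ~89%
# 5%/decade  -> ~54%
# 10%/decade -> ~28%
```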
The civilization-ending risks are the most worrying from my point of view. Basically, I see a couple of scenarios:
Technology never gets anywhere near the point where we can revive frozen brains. Industrial civilization collapses first through a combination of resource constraints, environmental damage, and devastating wars; most likely, these all happen together and feed off each other. This doesn’t immediately cause human extinction, but the probability of a future industrial civilization arising from the ruins is very low, because all the easily-extracted fossil fuels, ores etc. have already gone.
Technology continues to advance to a point where revival is becoming distinctly feasible, but such advanced tech also comes with very high and increasing existential risks. For instance genetically-engineered plagues, molecular nanotechnology used as a weapon, strong but unfriendly AI. There is low probability of avoiding all these risks.
It’s a nasty dilemma really, and cryonic revival can only happen if we somehow avoid both horns.
That’s on top of a separate concern that cryo as currently practised simply comes too late to avoid truly irreversible brain damage (what is sometimes called “information theoretic death”). If critical information about a person’s mind has already been lost before freezing then no future technology, however advanced, can restore that mind. I don’t know enough about how minds are stored in brains to answer that concern, but I’m not confident. Freezing immediately on point of bodily death (or shortly before) looks much more likely to work, but it happens to be illegal.
How, precisely, would this happen? We aren’t writing sci-fi here. There’s dozens of countries on this planet with world class R&D occurring each and every day. The key technology needed to revive frozen brains is the development of nanoscale machine tools that are versatile enough to aid in manufacturing more copies of themselves. This sort of technology would change many industries, and in the short term would give the developers of the tech (assuming they had some means of keeping control of it) enormous economic and military advantages.
a. Economic—these tools would be cheap in mass quantities because they can be used to make themselves. Nearly any manufactured good made today could probably be duplicated, and it would not require the elaborate and complex manufacturing chains that it takes today. Also, the products would be very close to atomically perfect, so there would be little need for quality control.
b. Military—high-end weapons are some of the most expensive products to manufacture, for a myriad of reasons (I mean jets, drones, tanks, etc.). Nanoscale printers would drop the price to rock bottom for each additional copy of a weapon.
A civilization armed with these tools of course would not be worried about resources or environmental damage.
a. There are a lot of resources that are not feasible to exploit today because we can’t manufacture mining robots at rock-bottom prices and send them to go after these low-yield resources.
b. We suffer from a lack of energy because solar panels and high-end batteries have high manufacturing costs (the raw materials are mostly very cheap). The same goes for nuclear reactors.
c. We cannot reverse environmental damage because we cannot afford to manufacture square miles’ worth of machinery to reverse it (mostly CO2 and other greenhouse-gas-capturing plants, but also robots to clean up various messes).
I say we revive people as soon as possible as computer simulations to give us a form of friendly AI that we can more or less trust. These people could be emulated at high speed and duplicated many times and used to counter the other risks.
I agree with you entirely on the irreversible brain damage. I think this problem can be fixed with systematic efforts to solve it (and a legal work around or a change to the laws) but this requires resources that Alcor and CI lack at the moment.
“Horn 1” of the dilemma is a Limits to Growth style crisis. It’s perfectly possible that such a limits-crisis arrives before the technology needed to expand the limits shows up to save us. (The early signs would be a major recession which never seems to end, and funding for speculative ideas like nano-machines doesn’t last.) Or another analogy would be crossing a desert with a small, leaky bottle of water and an ever-growing thirst. On the edge of the desert there is a huge lake, and the traveller reaching it will never be thirsty again. But it’s still possible to die before reaching the lake.
I see you think that the technology will arrive in time, which is a legitimate view, but then that also creates big risks of a catastrophe (we reach the lake, and it is poisonous… oops). This is “Horn 2”.
My own experience with exciting new technologies is a bit jaded, and my probability assessment for Horn 1 has been moving upward over the years. Radically-new technology always take much longer to deploy than first expected, development never proceeds at the preferred pace of the engineers, and there can be maddeningly-long delays in getting anyone to deploy, even when the tech has been proven to work. The people who provide the money usually have their own agenda and it slows everything down. Space technology is one example here. Nanotechnology looks like another (huge excitement and major funding, but almost none of it going into development of true nano-machines along Drexler’s lines.)
The two estimates I linked to are both from people who have signed up for it; the second one is Robin Hanson’s. On the spreadsheet, as far as I know the only estimate from someone who has not signed up is mine.
What if I can’t get a good body? (current objection). There are a few variations on this:
I will probably be in old age if I’m frozen, so I might wake up in the future as an old person. If they can make me a young body, that’s not a problem, but should I assume that they’re going to be able to do that? Maybe waking up from cryo in the future will involve being on life support for long periods of time while we’re waiting for the technology for new bodies.
Who is going to pay for my new body? I have no idea what that would cost, so I can’t possibly save for it now, and I’m not sure it’s a good idea to assume that money will be N/A in the future. I’m pretty sure that all my skills would be worthless at that time, but not convinced that there would be money to make me a decent body at that time.
What if I wake up with no body at all… I’m imagining waking up as a head in a jar or a brain in some kind of server rack of brains.
What if the bodies are ill-conceived? I’m imagining waking up as a brain inside of R2D2 and having about the same quality of life as a mobile trash can. If you think this out, being stuck inside of an R2D2 body would be a really, really horrible fate—which I explain here.
There are certain things I’d like to retain the ability to do, and for some of those, I will need to be anatomically correct.
Once again, if I sign up now, I’ll be an early adopter, which may mean that the technology for putting people into new bodies is still experimental and I may end up as a test subject.
Currently, I think most people just get their brains preserved. So they’d have to give you a whole new body or just have you as software anyway.
Early adopter for being preserved doesn’t mean early adopter for being revived. In fact, it probably means the opposite, since the easiest people to revive will probably be the people preserved with the most advanced technology.
Oh! Good point. Hm. But that might mean that I’m among a group that was using such old technology that it’s more or less arcane by that point… which could mean that there aren’t very many people in my set to revive, and so less leeway to iron out the flaws before they get to me…
Is anyone freezing any lab mice or anything?
I can see myself at the cryo counter: “Hi, I want me and these 100 lab mice frozen.”
Remember: you can always take random recently dead guys who donated their bodies to science, vitrify their brains, and experiment on them. And this’ll be after years of animal studies and such.
You can freeze your pets.
Haha, I hadn’t thought about that.
You are overwhelmingly likely not to wake up in a body, depending on the details of your instructions to Alcor. Scanning a frozen brain is vastly cheaper and technologically easier than trying to repair every cell in your body. You will almost certainly wake up as a computer program running on a server somewhere.
This is not a bad thing. Your computer program can be plugged into software body models in convincing virtual environments, permitting normal human activities (companionship, art, fun, sex, etc.), plus some activities not normally possible for humans. It’ll likely be possible to rent mechanical bodies for interacting with the physical world.
It is if you want to not die, rather than be copied. How likely would it be, assuming that politics and funding weren’t an issue, that we could grow a new body, prevent the brain from developing, yet keep it alive to the point that an existing brain could be inserted? I’m not necessarily concerned with the details of getting a brain transplant to work smoothly in general, just the replacement body.
It doesn’t seem like it should be difficult in theory; I’d be more worried about the resources.
I’m also curious as to what’s stopping us from keeping brains alive even if the body can no longer function. I’m not well researched in this area, but if it is a matter of keeping chemical resources flowing in and waste flowing out, then our current technology should be capable of as much. At that point, all we’d need is to develop artificial i/o for brains (which seems slightly more difficult, but not so difficult that it couldn’t happen within a few decades).
But I’ve probably overlooked something obvious and well known and am completely confused. :(
I don’t like the idea of being “revived” as an upload, though. An upload would be nice to have (It’d certainly make it easier to examine stored data, if only a little), but I still see an upload as a copy rather than preserving the original. And, being the original, that isn’t the most appealing outcome to me.
A bad body is better than no body at all. It’s not uncommon for abled people to go “Ew, I’d rather die than get $disability”, but when they do… actually I don’t know if they’re as happy as before after 18 months, because everyone mentions that but gives no cite. Anyway, people after a bad event are less unhappy and get happier faster than they predicted, and will remember afterwards. At least for some disabilities, this depends on people adapting to their condition, rather than putting their life on hold until they get better. (More affective forecasting papers.)
Poke around in the disability blogosphere for more perspectives on that. They range from “My body is awesome, but because it’s not the type you build your world for you call it ‘disabled’”, through “It kinda sucks that you’re not an Olympic-level athlete and you don’t obsess over that all the time; I feel the same way about my disability”, through “It’s miserable when you’re not used to it, but once you adapt it’s not so bad”, to “It’s awful, but still better than being dead”.
The things you’re afraid of aren’t even particularly freaky ones: weakness, limited mobility and endurance, need for support systems, body dysphoria, inability to live as you used to. People live with that every day.
I admit I have no idea what would happen if you lacked a body completely. A head-in-a-jar scenario sounds like locked-in syndrome, which is still better than death. The other scenario could be anything from total sensory deprivation (yeah, that one is probably worse than death) to living in a simulation.
Then why do so many people have living wills?
Also, it matters which condition they get. I could see myself happy in this body with a wheelchair, but I can’t see myself happy as a paraplegic. I think my ideas about how happy I’d be with a disability are pretty realistic. Anything that keeps me from communicating would make me miserable. Anything that makes me dependent on others will be stressful. Not being able to walk is something I could get around—I could still program and make a living, still communicate, still do something of meaning. How many of the things you enjoy about life and get meaning from are dependent on your body? There are some conditions that would make pretty much everything that’s meaningful and fun about life impossible. See my R2D2 objection.
“Living is always good” / “Any body at all is good”—hasty generalizations, sorry.
From a legal point of view, a living will is not really very like a will. One’s will contains the directions for distribution of one’s property after death. In short, the key focus of a will is financial.
By contrast, a living will is one’s list of instructions regarding medical treatment when one is unavailable to consult (i.e. unconscious). Do-not-resuscitate requests, and the circumstances when one does and doesn’t want particular intense medical interventions. Also, who should make decisions when your pre-made list does not address a particular circumstance. When one is creating a living will, financial considerations might play a part, but the key focus of a living will is medical, not financial.
What if revival technology causes misery? (current objection). There are a few variations on this:
I would be an early adopter, which means that the technology for reviving people might still be experimental at the time when it is used on me. The unintentional result of this could be that I become a test subject.
What if they get reviving my brain slightly wrong, and a small change in its structure or chemical composition means that all my consciousness is capable of experiencing is ultimate misery? And what if this goes on for some prolonged period of time because they assume the reason I’m miserable is the shock of waking up in a world where so many of the people I knew are dead and everything else is changed or gone, so nobody has any idea that it’s due to a chemical or structural problem in my actual brain?
What if I get brain damage or massive memory loss from the procedure? This would mean, essentially that I wasn’t saved. Then I would have to live as a sort of zombie-like horror.
I get some horrible and as yet unimagined disability due to, I don’t know, ice crystals destroying my tissues or amine accumulation or something unexpected.
Just because cryo is the only way we currently have to avoid death, that doesn’t mean it’s a good way.
What the heck? What if any technology X causes misery? It was argued that in vitro fertilization would cause soulless humans to be born (seriously) with all sorts of ramifications (from them destroying society, to their existence being constant agony). This claim has been made repeatedly about all sorts of medical interventions, from organ transplants to cloning. Right now there are people who claim aspartame is turning us into zombie-like horrors.
There is always a risk from any medical intervention. A bad anesthesiologist can give you brain damage and turn you into a zombie when all you wanted to was have a wisdom tooth pulled. This objection is so generalized that I’m not sure it’s a true objection at all. I think you may be searching for other objections rather than stating a true objection.
Did someone actually suggest that? A cursory glance through some articles shows, for instance, the Pope expressing worry that women would be used as ‘baby factories’, but questions about IVF seem to have, historically, been tied up with worries about custom-designed people.
Yes. I suppose it depends a bit on how official you require the suggester to be before you’re willing to grant that it was a legitimate social discourse. A few examples:
Cathy Lynn Grossman of USA Today’s “Faith & Reason” column asked her readers “Do you think a baby conceived in test tube is still a child in the eyes of God?” in 2010.
People have reported asking priests for advice and being told: “He told them that if they were to go ahead with it, they would be doing something worse than abortion; their child would be born without a soul as he or she would be manmade and not Godmade.”
There’s various crazy ministries on the internet that make/made the soulless claims as well.
So yes, someone did actually suggest that. Multiple someones. How much they count is debatable.
Yeah, I don’t mean the crazy-ministry people, I mean people connected enough to reality that they wouldn’t say that sort of thing now, but who did right up to the point where normal human babies showed up and the position became unsupportable.
Maybe I’m looking too far into this, but I’m trying to understand how you could look at a person pretty much indistinguishable from other people and claim that they have all of these hilariously weird properties. I can see it happening if people conceived via IVF all had red hair or something, but people did know these would be, y’know, people conceived in vitro, right?
/shrug. The concept of souls is unsupportable right now but it doesn’t stop anyone from claiming all sorts of hilariously weird properties for them. I don’t know how hard it would be to say that one person has an unsupportable property X and another person doesn’t, since they’re both just naked assertion anyway. When your references are that detached from reality you can start saying all sorts of nonsensical crap.
There’s no reason to experiment on cryo patients. Lots of people donate their brains to science. Grab somebody who isn’t expecting to be resurrected, and test your technology on them. Worst case, you wake up somebody who doesn’t want to be alive, and they kill themselves.
Number two is very unlikely. We’re basically talking brain damage, and I’ve never heard of a case of brain damage, no matter how severe, doing that.
As for number three, that shambling horror would not be you in a meaningful sense. You’d just be dead, which is the default case. Also, I have my doubts that they’d even bother to try to resurrect you with that much damage if they didn’t already have a way of patching the gaps in your neurology.
As for number four, depending on the degree of the disability, suicide or euthanasia is probably possible. Besides, I think it’s unlikely they’ll be able to drag you back from being a corpsicle without being able to fix problems like that.
There’s no way not to. It will be a new technology. Somebody has to get reanimated first. Even if we freeze 100 mice to test on, or monkeys, reviving humans will be different. Doing something for the first time is, by its very nature, an experiment.
Awful! That’s experimenting on a person against their will, and without their knowledge, even! I sure hope people like you don’t start freezing people like me in the event that I decide against cryo...
People experience this every day. It’s called chemical depression. Even if you don’t currently see a way for preservation or revival technology to cause this condition, it exists, it’s possible that more than one mechanism may exist to trigger it, and that these technologies may have that as an accidental side-effect.
Uh… no, because I’d be experiencing life, I would just be without what makes me me. That would be horror, not non-existence. So it is not death.
Is it now? Most people don’t believe in the right to die. In a world where we had figured out how to reanimate preserved corpses, do you think that they’ll believe in the right to die? They’ll probably automatically save and revive everyone.
-shrug- so don’t leave your brain to science. I figure if somebody is prepared to let their brain decompose on a table while first year medical students poke at it, you might as well try to save their life. Provided, of course, the laws wherever you are permit you to put the results down if they’re horrible. Worst case, they’re back where they started.
Chemical depression is not ‘absolute misery.’ Besides, we know how to treat that now. That we’ll be able to bring you back but be unable to tweak your brain activity a little is not very credible. Worst case, once we have the scan, we can always put it back on ice for another decade or two until we can fix the problem.
If I took a bunch of Drexler-class nanotech, took your brain, and restructured its material to be a perfect replica of my brain, that would be murder. You would cease to exist. The person living in your head would be me, not you. If brain damage is adequately severe, then you don’t exist any more. The ‘thing that makes you you’ is necessary to ‘do the experiencing.’
See disability arguments on the other comment for personality-preserving brain damage.
Well, no. You’d just be dead. There’d be a Schiavo-like body looking like yours, or a new person in a body looking like yours, but that doesn’t seem to add much to the horror of death.
That sounds like a weird change. Right now the DSM allows a depression diagnosis two months after a traumatic event, less if it gets really bad, and even less in practice. How prolonged are you thinking of?
People who age often get depression, and get the worst disabilities because they can’t adapt fast and their disabilities keep increasing. Do you accept “I should kill myself now, so I don’t run that risk”? If not, how is that different?
This thing that would not die though, this ability to know pain and pleasure, this continuing experience, it would remain in the event that my memories were all gone, presumably. THAT is the part I’m worried about. That the part of me that feels could wake up and have to go through the experience of realizing that who I am has been lost to brain damage.
Is this supposed to rebut my objection? I don’t see where you’re going with this at all.
Right now, there isn’t a guarantee that I’m going to go through a medical procedure anytime soon. Going through a medical procedure, especially one that is new, or one that few people have been through, is likely to cause some sort of horrible side effects. We have no reason to assume that this technology will be flawless by the time we get to use it, no reason to believe it won’t turn us into horrors.
It’s different because not killing myself right now leaves me with a reasonable chance to have some number of happy years ahead whereas going through a medical procedure with unexpected side effects and risks may have a much greater chance of making me completely miserable for a long time.
I think our disagreement may have a lot to do with how much faith we place in the medical establishment.
If you haven’t got experience with it, you can’t know how bad it can be. Have you ever looked into how incompetent and horrible medical professionals and treatments can be?
I have a pile of statistics if you want a shock.
Okay, that’s freaky. Only a little freakier than “The child I was has been replaced by an adult”, but point taken.
If medicine when you wake up is anything like it is now, after a couple of months at most you’ll be able to say “Doc, I feel utterly miserable” and the doc will answer “One box of magic future antidepressants, coming right up!”, not (only) “Well, duh, it’s future shock”.
Only in specific cases (medical errors, psychiatric hospitals, nursing homes). Can I haz stats?
My first impression of cryo (documentation): My introduction to cryo was in a cartoon as a child—the bad guys were freezing themselves and using the blood of children to live forever. I felt it was terrifying and horribly unfair that the bad guys could live forever and very creepy that there were so many frozen dead bodies.
There’s a common attitude that eternal life is a very special prize—something a few great heroes might deserve, and if you seek it out you’re basically claiming to be a deity or something impossibly high-status along those lines. I have no idea where that comes from; it’s like someone proposed advances in agriculture and people went “But famines are part of life!”.
Possibly related: Survivor guilt
I guess that if you survive and other people don’t, it instinctively pattern-matches to you causing their death. Even if it does not make sense, and you know it. Maybe it’s a broken algorithm for determining outside view—if you go somewhere with a group of people, you return and they are dead, you should expect other people to suspect you; therefore you’d rather show some extremely strong self-destructive emotion to convince them game-theoretically that you did not benefit from that outcome.
If we get immortality, we can expect a lot of survivor guilt. Also, it will seriously ruin the just world hypothesis, if some people will get 3^^^3 more utilons just for the fact they were born in the right era and did not die randomly a few years sooner.
Hmmm. These are really good points. I do feel guilty about the idea of living a really long time while a lot of others don’t. That may be what triggered my first big objection—that you could save a lot of people with that money. Now I wonder if that objection was a rationalization of some type of survivor’s guilt. I think that this is likely. Very good point. Now I’m wondering what the nature of this survivor’s guilt is, for me.
I still feel survivor’s guilt, actually. Even though it’s not attached to a specific objection any longer—the objection about saving starving children has been rebutted.
New objection—Survivor’s Guilt
That’s already seen as a fallacy isn’t it?
Well, more relevantly it’s seemingly assumed by most of the population.
Haha, good point. We should not see it as high-status. If it works, it’s something everyone should get. That’s a really good observation.
What cartoon was this?
No, seriously, what cartoon is this. It sounds awesome.
Has anyone on Less Wrong considered (and answered) an anthropic objection to cryonics? It might go something like this:
“If cryonics works, then the society in which I am revived will be a transhuman/posthuman one with very advanced technology, and a very large number of observer moments. But if such societies existed in the universe, or ever came to exist, then I would expect to find myself already part of one, and I don’t. (Note here the use of Bostrom’s strong self-selection assumption or SSSA.) Therefore I should judge it unlikely that posthuman/transhuman societies exist or will come to exist. Therefore I should judge it unlikely that cryonics will work.”
One counter-argument (which Bostrom himself might use) would be based on reference classes. Perhaps I’m currently in a limited reference class that precludes me being part of a transhuman/posthuman society. But this also has a dubious implication for cryonics, since for cryonics to work it must be possible for me to change that reference class, moving from a very small one to a much larger one. So again, wouldn’t I expect to have already done that?
You should provide an argument as to why one would be more likely to be born into a posthuman society. For a posthuman society to exist, a human and pre-human world would probably, although perhaps not necessarily, have to exist first. Even if it is more common to exist in a transhuman state, there would still be non-transhuman minds.
This is just an application of the “Self-Sampling Assumption” (or “principle of mediocrity”).
There will be many more observer moments in a “post-human” society than a “pre-posthuman” one (at human level or lower), because the population is much larger and observers live longer. So if the universe contains both sorts of society, a typical observer (or observer moment) would be much more likely to be in a “post-human” society. If the universe only contains “pre-posthuman” societies (e.g. because societies self-destruct before reaching a post-human level of technology) then an observer would have to be in one of the “pre-posthuman” ones because there aren’t any others.
I’d suggest you look at Nick Bostrom’s website for more details, including his discussion of reference classes.
P.S. It is also possible to use the “Self-Indication Assumption” as an alternative to the “Self-Sampling Assumption”. Or to use a non-anthropic model like “Full Non-Indexical Conditioning”. But these don’t get rid of the argument that we are unlikely to turn into a post-human society, and for some rather interesting reasons, which Katja Grace discusses here
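To make the observer-moment reasoning above concrete, here is a toy Bayesian sketch in Python. Every number in it is an invented assumption chosen only to show the shape of the update, not a claim about actual observer counts:

```python
# Toy illustration of the Self-Sampling Assumption argument above.
# All numbers are invented assumptions, for illustration only.

prior_posthuman = 0.5        # prior: posthuman societies come to exist
prior_doomed = 0.5           # prior: societies never get there

human_moments = 1e11         # assumed human-level observer-moments
posthuman_moments = 1e20     # assumed posthuman observer-moments

# Chance a randomly sampled observer-moment is human-level, per hypothesis:
p_human_given_posthuman = human_moments / (human_moments + posthuman_moments)
p_human_given_doomed = 1.0   # only human-level moments exist

# Bayes update on the observation "I find myself at human level":
numerator = p_human_given_posthuman * prior_posthuman
posterior = numerator / (numerator + p_human_given_doomed * prior_doomed)
print(f"P(posthuman societies | human-level observation) = {posterior:.1e}")
# ~1e-9 with these numbers: the update strongly disfavors posthumanity,
# which is exactly the structure of the objection.
```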
So far, I think reference classes are the only counter-argument that might work.
I figured the reasoning behind that. I just thought it would be a good idea for you to post the explanation with your argument.
Ah, thanks!
What if the future is hellish and I won’t be able to die? (Current objection)
I realize there are lots of interesting technologies coming our way, but there are a lot of problems, too. I don’t know which will win. Will it be environmental collapse or green technology? FAI or the political/other issues created by AI? Will we have a world full of wonders or grey goo? Space colonies or alien invasions? As our power to solve problems grows, so does our ability to destroy everything we know. I do not believe in the future any more than I believe in heaven. I recognize it as a potential utopia / dystopia / neither. I do not assume that the ability to revive preserved people would make us utopia-creating demigods any more than our current abilities to do CPR or fly make our world carefree.
A new twist, waking up into this world, would be that I may not be able to die. The horrors that I could experience in a technologically advanced dystopia might be much worse than the ones we have currently. Dictators with ufAI armies, mind control brain implants, massive environmental and/or technological catastrophes.
There is one thing worse than dying, and that’s living an unnaturally long time in a hellish existence. If I sign up for cryo, I’ll be taking a risk with that, too.
From this post (which is a great source of insight on many particular cryonics objections):
(This comment, including copying over text and links, was composed entirely without the mouse due to the Pentadactyl Firefox extension.)
That scenario is full of fail in terms of helping someone to weigh the issue in an ecologically valid way. Answers to the trolley problem empirically hinge on all kinds of consequentially irrelevant details, like whether you have to physically push the person to be sacrificed. The details that matter are hints about your true rejection, and handling them in a sloppy way is less like grounded wisdom and more like a high-pressure sales tactic.
In this case, for example, “leaving the building” stands in for signing up for cryonics, and “everyone else safely leaving the building” is the reason your unconscious body won’t be dragged out to safety… but that means you’d be doing a socially weird thing to not do the action that functions as a proxy for signing up for cryonics, which is the reverse of the actual state of affairs in the real world.
A more accurate scenario might be that your local witch doctor has diagnosed you with a theoretically curable degenerative disorder that will kill you in a few months, but you live in a shanty town by the sea where the cure is not available. The cure is probably available, but only in a distant and seemingly benevolent country across the sea where you don’t speak the language or understand the economy or politics very well. The sea has really strong currents and you could float downstream to the distant civilization, but they can’t come to you. You have heard from some people that you have a claim on something called “government assistance checks” in that far nation that will be given initially to whoever is taking care of you and helping you settle in while you are still sick.
You will almost certainly be made a ward of some entity or another while there, but you don’t understand the details. It could be a complex organization or a person. There might be some time waiting for details of the cure to be worked out, and there is a possibility that you could be completely cured, but this might cost an unknown amount of extra money; it’s a real decision that reasonable people could go different ways on depending on their informed preferences, but the details and decisions will be made by whoever your benefactor ends up being.
That benefactor might have incentives to leave you the equivalent of “a hospital bed in the attic” for decades, with lingering pain and some children’s books and audio tapes from the 1980s for entertainment, pocketing some of the assistance checks for personal use, with your actual continued consciousness functioning as the basis of their moral claim to the assistance checks, and your continued ignorance being the basis of their claim to control the checks.
If you get bored/unhappy with your situation, especially over time, they might forcibly inject you with heroin or periodically erase your memory as a palliative. This is certainly within their means and is somewhat consistent with some of the professed values of some people who plan to take the raft trip themselves at some point, so there might actually be political cover for this to happen even if you don’t want that. Given the drugs and control over your information sources, they might trick you into nominally agreeing to the treatments.
You don’t get to pick your benefactor in advance, you don’t know the details of the actual options that will exist to do the cost/benefit analysis yourself in advance, and you don’t know what kind of larger political institutions will exist to oversee their decision making. You’d have to build your own raft and show up on their shores as a sort of refugee, and your family is aware of roughly the same things as you, and they could use the raft-making materials to build part of a new shack for your sister, or perhaps a new outhouse for the family. Do you get on the raft and rely on the kindness of strangers, or accept your fate and prepare to die in a way that leaves less-bad memories for your loved ones than the average death?
Also, human nature being what it is, if you talk about it too much but then decide to not build the raft and make the attempt, then your family may feel worse than average about your death, because there will be lingering guilt over the possibility that they shamed or frightened you into sacrificing your chances at survival so that they could have a new outhouse instead. And knowing all of these additional local/emotional issues as well as you do, they might resent the subject being brought up in a way that destabilizes the status quo “common knowledge” of how family resources will be allocated. And your cousin got sick from an overflowing outhouse last year, so even though it sounds banal, the outhouse is a real thing that really verifiably matters.
That is an awesome metaphor :)
Each of the questions in that post was meant to address one argument against cryo. The argument ‘hardly anyone I know will be alive when I’m revived’ is addressed by the second question. (Which I would answer “I don’t know, I’d have to think about it”, BTW.)
[realizes he has been rationalizing] Oh...
Here’s the reason I don’t find this very scary. As a frozen person, you have very little of value to offer people, and will probably take some resources. Thus, if someone wants to bring you back, it will most likely be for your benefit, rather than because they want to enslave you or something. If the universe just has people who don’t care about you, then they just won’t revive you, and it will be the same as if you had died.
In order for you to be revived in a hellish world, the people who brought you back have to be actively malicious, which doesn’t seem very likely to me.
What do you think?
Many among us will spend the better part of a million dollars to preserve the life of children born so deformed and disabled that they actually will spend a significant amount of their lives in pain and the rest of it not being able to do much of what gives the rest of us pleasure or status. You don’t have to be actively malicious to think that life at any cost is a Good Thing (tm).
There’s also the theoretical possibility that the world you are revived into is perceived as a good one by the people born into it, but is too hard to adjust to for a very old person from a very different world. I doubt the majority of slaves would prefer death to the lives they had, but someone who had lived 80 years in freedom, with the best the 21st century could offer in terms of material comforts, might not be as blasé about a very different status quo in the future.
If the people reviving you are not malicious then you would expect to have the option of dying again unless they don’t believe you that your life sucks too much.
Also the psychology of happiness seems to suggest that people adjust pretty well to big life changes.
Unless you are defining malicious to mean “lets me kill myself if I want to,” then being revived into a society with similar laws and values as the current U.S. would certainly make it illegal for you to kill yourself. Most of us realize we could do it if we wanted anyway, but a society that can revive you probably has more effective means of enforcing prohibitions. Even now, we already have “chemical castration” for some sex criminals.
Okay, that’s a good point. (I assume you meant “defining ‘not malicious’ to mean ‘lets me kill myself...’”)
They might also be high-functioning but insane, given the very many ways that tech which mucks around with physical human brains, to the degree of successfully reanimating cryonics patients, can go wrong. The original imperative to revive cryonics patients could be intact, and the ability to do so also somehow intact, but things could be very, very wrong otherwise.
I think “you might wake up in hell” is actually one of the better arguments for opting out of cryonics, since some of the sort of tech you need to revive cryonics patients is also tech you could use to build inescapable virtual hells.
Although the hellish world scenario seems unlikely, it might be important to consider. At least according to my own values, things like being confined to children’s books and being injected with heroin would contribute very little negative utility (if negative at all) compared to even a 1-in-1000 chance of enduring the worst psychologically possible torture for, say, a billion years.
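A toy expected-value comparison shows the shape of that claim. The probabilities and utility magnitudes below are entirely made up for illustration; nothing here estimates real risks:

```python
# Illustrative only: invented probabilities and (negative) utilities.
p_mild, u_mild = 0.5, -1e3     # e.g. decades of boredom and neglect
p_hell, u_hell = 1e-3, -1e15   # a billion years of the worst torture

print(p_mild * u_mild)   # -500.0
print(p_hell * u_hell)   # -1e12: the tiny-probability horror dominates
```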
Ok, the cost-benefit ratio between reviving someone and profiting off of their slavery might be worth considering. I’m not sure how many resources it would take to revive me, or whether it would be safe to assume that my brain’s abilities (or whatever was valued) would not outweigh the resources required to revive me, but it seems likely now that I think of it, especially considering that all my skills would be out of date, and they’d probably have eugenics or intelligence enhancers by then which would outdo my brain.
Also, the people who enslaved me would not have to be the same ones as the people who revive me. They would not be subject to the cost-benefit ratio. The people who revive me could be well-meaning, but if the world has gone to hell, there might be nothing they can do about bad entities doing horrible things.
The reviver may only revive me because they’re required to, because the company storing me has a legal agreement and can be prosecuted if they don’t. The timing of my revival may be totally arbitrary in the grand scheme of things. It might have more to do with the limit on how long a person can stay in cryo (whether that means a tangible limit, my account running out of money with which to stay frozen, or some legal deadline at which they’re forced to honor my contract) than with the state of the world at that time.
I don’t assume that there would be a benevolent person waiting for me. There’s just too much time between here and there and you never know what is going to happen. Maybe none of my friends sign up for cryo. Maybe there’s only a 1 in 10 chance of successful revival and I’m the only one of my group who makes it.
So, I’m not convinced that the world will not have gone to hell or that I’ll be revived by friends, but I think slavery is less likely.
Consider that you might reach such a future in your natural lifespan, without cryonics. Does this cause you to spend resources on maintaining a suicide button that would ensure information-theoretical erasure of yourself, so no sudden UFAI foom could get hold of you? If not, what is the difference?
It’s not quite information-theoretical, but does a snub nose .357 count? I carry because statistically the safest thing to do as the attempted victim of a violent crime is to resist using a firearm.
[EDITED to add: oops, I completely misinterpreted what Decius wrote. What follows is therefore approximately 100% irrelevant. I’ll leave it there, though, because I don’t believe in trying to erase one’s errors from history :-). Also: I fixed a small typo.]
Assuming this isn’t a statistical joke like the one about always taking a bomb with you when you fly (because it’s very unlikely that there’ll be two bombs on a single plane) … do you have reason to think that having-but-deliberately-not-using the firearm actually causes this alleged improved safety?
It seems like there are some very obvious ways in which that association could exist without the causal link—e.g., people are more likely to be able to resist when the danger is less; people who are concerned enough about their safety to carry for that reason, but sensible enough not to shoot, are also more likely to take other measures that improve their safety; etc.
Who said anything about not using? I have never seen statistics regarding outcomes of victims of violent crime having a firearm but never drawing it.
There could be other confounding factors as well, like underreporting by people who are mugged, cooperate, and experience no injury; or a tendency among people who carry legally to know how to use their weapons better than criminals and typical people; or difficulty determining whether a dead victim resisted or not. But the statistics aren’t even remotely vague: Among reported victims of violent crime, a larger percentage of those who cooperated with the criminal died than those who resisted the crime using a firearm.
Not that something already known would be able to prevent a post-singularity hostile AI from accomplishing the goals it has, much less a firearm that has about as long an effective range when fired as when performing a lunging swing.
D’oh. I completely misinterpreted what you wrote: “to resist-using a firearm”, rather than “to resist, using a firearm”.
Sorry- my original phrasing is ambiguous to someone who doesn’t already know what I’m saying.
Interesting. Do you have a source on that?
Kleck, G. Point Blank: Guns and Violence in America. New York, NY: Aldine de Gruyter, 1991.
Tangent: Do you have a link to a study that backs this up? I’m very interested in it. EDIT: Arg, serves me right for not reading more downthread.
Read this hypothetical objection:
Does this objection strike you as reasonable, or unreasonable?
If a copy of me were made, would this instance of me experience the next instance’s experiences? I don’t think so. As far as whether I could suffer from being re-created, I doubt that. However, I’d be very concerned about future instances of me being abused, if I thought there were an interest in reviving me. If I was famous, I’d be concerned that fans might want to make a bunch of clones of me, and I’d be concerned about how the clones were treated. Unless I had reason to think that A. People are going to reconstruct me against my will and B. The people reconstructing me would do something unethical with the clones, I wouldn’t worry about it.
Why do you ask?
From the perspective that you are your instances, it matters because if you fear being abused, you would fear any instance of you being abused. You wouldn’t want to walk into an atomically precise copying machine with the knowledge that the copy is going to be used in cruel experiments.
The question becomes, where do you draw the line? Is a rough copy of you based on your facebook posts and whatever advanced AI can extrapolate from that just as worthy of you anticipating their experiences? Or perhaps you should fear ending up as that person on a relative scale depending how similar it is—if it is 50% similar, have 50% of the fear, etc. Fear and other emotions don’t have to be a simple binary relationship, after all.
Empathy is an emotion that seems to be distinctly different (meaning, it feels different and probably happens differently on a biochemical and neurological level) from the emotion of actually anticipating being the individual. So while yes I would feel empathy for any abused clones that applies regardless of the degree to which I have fear of waking up as them, it would not be the only emotion because I believe I would be the clones. Any information I had that indicates that clones might be abused in the future becomes much more near to me and I am more likely to take action on it if I think it likely that I will actually be one of them.
Thus if you think the future is bad in a way that prohibits wanting to wake up from cryonics to any serious degree, then it might be smart to be concerned for the safety of clones who could be you anyway. Since you haven’t stated a desire to be cloned, being cloned against your will is more likely to be carried out by unethical people relative to ethical people, so even if the prospect is fairly remote it is more worrying than the prospect with cryonics, where caring people must keep you frozen and do have your consent to bring you back.
I fear a rough copy of myself made from my facebook posts (and lesswrong comments) being tortured about as much as I fear an intricate 3d sculpture of me being made and then being used as a target in a gun range. Is that really just me?
Nope, I’d feel the same. I think I would like to hang out with a rough copy of myself made from my internet behaviour, though.
Hmm. How do you feel about the prospect of an atomically precise copy of yourself being used as a living target at a gun range?
Is my corpse an atomically precise copy of myself? I wouldn’t care much about that.
If you mean the classic sci-fi picture of an exact and recent clone of myself, I would certainly prefer that a copy of myself be used at a gun range than that a copy of my daughters or a few of my relatives be used. And certainly prefer that a copy of myself be used than that the single original of any of my relatives be used.
It is an ironic thing that a rationalist discussion of values comes down to questions like “how do you feel about...” Personally, much of my rational effort around values is to make choices that go against some or even many of my feelings, presumably to get at values that I think are more important. I highly value not being fooled by appearances, I highly value minimizing the extent to which I succumb to “cargo cult” reasoning. I’m not sure how much identifying myself with a copy of myself is valid (whatever that means in this context) and how much is cargo cult. But I’m pretty sure identifying myself with my corpse or a caricature of myself is cargo cult.
If you undergo dementia or some other neuro-degenerative condition for a few years, it will turn you into a very different person. A “rough” copy made from information mined from the internet could perhaps be much closer than this to the healthy version of the person than the version kept alive in a nursing home in their later years. Because of this argument, I don’t see how you can come to the conclusion that identifying with a “caricature” is cargo-cult by definition.
Your corpse is definitely not an atomically precise copy of yourself. Corpses are subject to extensive structural damage, which makes their state of unconsciousness irreversible. If this were not the case, we would neither call them corpses nor consider it unreasonable to identify with them.
A more interesting grey area would be if you were subjected to cryonics or plastination, copied while in a completely ametabolic and unconscious state, and then reanimated. You could look across at a plastic-embedded or frozen copy of yourself and not even know if they are the original. In fact, there could be many of them, implying that you are probably not the original unless you can obtain information otherwise.
If you value your original self sufficiently, that seems to imply that if, say, you wake up in a room with 99 other versions of you still in stasis and have a choice to a) destroy them all and live or b) suicide and reanimate them all, you should pick suicide in advance, so that it becomes 99% likely your copy will pick that option.
On the other hand if you don’t care whether you are the original or a copy you can destroy all those nonsentient copies (99% chance of including the original) without worrying about it.
I’ve had success explaining cryonics to people by using the “reconstruct” (succinct term, thank you!) spectrum—on one end, maybe reconstruction is easy, and we’ll all get to live forever. On the other end, maybe it’s impossible, and you simply cannot spend more than a few days de-animated before being lost forever. In the future, there will be scientists who do research and experiments and actually determine where on the spectrum the technology actually is. Cryonics is just a particular corpse preservation method that prepares for reconstruction being difficult.
More succinctly, cryonics is trying to reach the future, and this hypothetical objection is trying to avoid the future.
I asked because it seemed that, if a fear of a bad future is a reason not to try harder to reach the future, it should also be a reason to try harder to avoid the future, and I was curious to examine this fear of the future.
Unexpected consequences (current objection):
There must be psychological consequences (waking up in a world where your skills are all useless and everything has changed), environmental consequences (a bunch of people being frozen aren’t going to have zero environmental impact), medical consequences (revival may not go as expected, there are probably risks) and possibly completely unexpected consequences (akin to the tumors x-ray technicians got because they were testing the x-ray machines on their hands every day to make sure they were warmed up).
Can anyone recommend good reading materials on these?
I don’t have reading material on these, but there are unexpected consequences to anything we do. Should we stop using electricity because there could be unexpected consequences to it?
More to the point, most of these are possible consequences of simply continuing to live. Two centuries from now it’s likely most of your current skills will be useless and everything will have changed. Living for an extra century will not have zero environmental impact. Etc. Is the best solution to these problems personal annihilation? Is that even in the top ten? There are better ways of solving these problems than death.
If the chance of death is high, why would unexpected consequences be an objection?
The main reason not to sign up is the low probability of success, in cases where people already know what cryo is and have the money. If they will die anyway, losing money now makes cryo a bad investment.
If you wake up not too severely damaged and in a decent environment (possibly with all kinds of wonderful improvements) where your life will be better than non-existence, you will have a lot more time for living. If not, you can always kill yourself.
If you get yourself frozen only for revival upon major life-extension breakthroughs, as well as unfreezing damage repair etc., the important possibilities for the revival are the probability of a happy revival vs. the probability of an unhappy revival where you can’t kill yourself.
I’m not aware of there ever having been any actual supervillains. I’m aware people are enslaved and forbidden from killing themselves, but almost never are they actually prevented from doing so. Who cares about their slaves little enough to forbid them from killing themselves, but enough to diligently enforce the rule? (Unless you are short on slaves, which anyone with the resources to revive you in order to enslave you wouldn’t be.)
Having to kill yourself would suck, but it puts a comparatively low cap on your maximum loss in the vast majority of scenarios. I’m not sure it can even be called a loss, as it replaces having to die of old age or illness in the scenario where you don’t freeze yourself.
Also, you are probably underestimating the extent to which advancements over the years would improve your quality of life.
While the possibility of the bad scenarios does reduce the expected value of freezing, it’s on a different order of magnitude to the potential benefits, because the vast majority of the bad scenarios can be opted out of.
One thing behaviorally close to actual supervillains is bureaucracy.
So the realistic anti-utopian scenario is that you are revived by employees of some future Department of Historical Care. Personally, those people don’t care about you at all; you are just another prehistoric ape to them. All they want is to have their salaries, with as little work as possible.
They don’t care about the costs of your revival, because those costs are paid by the state; by taxes of citizens who get some epsilon of warm fuzzies for saving prehistoric people. They don’t care about your pain, because emotionally you mean nothing to them; they don’t even emotionally consider you human. But they do care about your life—because their salaries depend on how many revived prehistoric people survive. So their highest priority is to prevent your suicide, and they can use the technology of the future for this; for example, they can prevent you from moving at all and feed you intravenously.
People outside the Department of Historical Care will not save you, because they honestly don’t care about you. They get some warm fuzzies from knowing that you are alive (and imagining how grateful you must be for this), but they have no desire to meet you personally. It’s the future; they have things much more interesting than you, for example genetically engineered pokemons, artificial intelligences, etc.
And you might have to keep replaying the more interesting (that is, painful) parts of history.
Not if you don’t have the courage to do such things. Not if you wake up damaged and unable to access or use suicidal weapons. Not if you wake up as a subject of medical experiments. Being a slave isn’t the only horrible outcome that could happen.
Prisoners are generally prevented from killing themselves, as are the insane. What if the society of the future simply thinks it’s wrong for you to kill yourself and won’t let you do it?
There’s a general category of waking up to find yourself in a low-status situation. This would include slavery, torture, imprisonment (we don’t know what they’ll consider to be a crime), and the one I think is most likely—that you’ll simply never be able to catch up. If you’re going to be you, you’re going to have a mind which was shaped by very different circumstances from the people in the future. Life might be well worth living or intermittently well worth living, but you will never be a full member of the society.
Is there any science fiction about fairly distinct cohorts of people from different times in a high-longevity and/or cryonics society?
If you’re revived via whole brain emulation (dramatically easier, and thus more likely, than trying to convert a hundred kilos of flaccid, poisoned cell edifices into a living person), then you could easily be prevented from killing yourself.
That said, whole brain emulation ought to be experimentally feasible in, what, fifteen years? At a consumer price point in 40? (Assuming the general trend of Moore’s law stays constant.) That’s little enough time that I think the probability of such a dystopian future is not incredibly large. Especially since Alcor et al. can move around if the laws start to get draconian. So it doesn’t just require an evil empire—it requires a global evil empire.
The real risk is that Alcor will fold before that happens, and (for some reason) won’t plastinate the brains they have on ice. In which case, you’re back in the same boat you started in.
Maybe, but scanning a vitrified brain with such a high resolution that a copy would feel more or less like the same person might take a bit longer.
Most of the sensible people seem to be saying that the relevant neural features can be observed at a 5nm x 5nm x 5nm spatial resolution, if supplemented with some gross immunostaining to record specific gene expressions and chemical concentrations. We already have SEM setups that can scan vitrified tissue at around that resolution; they’re just (several) orders of magnitude too slow. Outfitting them to do immunostaining and optical scanning would be relatively trivial. Since multi-beam SEMs are expected to dramatically increase the scan rate in the next couple of years, and since you could get excellent economies of scale for scanning on parallel machines, I do not expect the scanners themselves to be the bottleneck technology.
The other possible bottleneck is the actual neuroscience, since we’ve got a number of blind spots in the details of how large-scale neural machinery operates. We don’t know all the factors we would need to stain for, we don’t know all of the details of how synaptic morphology correlates with statistical behavior, and we don’t know how much detail we need in our neural models to preserve the integrity of the whole (though we have some solid guesses). We also do not, to the best of my knowledge, have reliable computational models of glial cells at this point. There are also a few factors of questionable importance, like passive neurotransmitter diffusion and electrical induction that need further study to decide how (if at all) to account for them in our models. However, progress in this area is very rapid. The Blue Brain project alone has made extremely strong progress in just a few years. I would be surprised if it took more than fifteen years to solve the remaining open questions.
Large scale image processing and data analytics, for parsing the scan images, is a sufficiently mature science that it’s not my primary point of concern. What could really screw it up is if Moore’s law craps out in ten years like Gordon Moore has predicted, and none of the replacement technologies are advanced enough to pick up the slack.
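As a sense of scale for that image-processing concern, here is a back-of-the-envelope sketch of the raw data volume implied by the 5nm figure above. The brain volume and the single byte per voxel are crude assumptions:

```python
# Rough data-volume estimate for a whole-brain scan at 5 nm resolution.
# Brain volume and bytes-per-voxel are crude assumptions.
brain_volume_m3 = 1.4e-3                          # ~1.4 liters
voxel_volume_m3 = (5e-9) ** 3                     # a 5 nm cube
voxel_count = brain_volume_m3 / voxel_volume_m3   # ~1.1e22 voxels
bytes_total = voxel_count * 1                     # optimistic: 1 byte/voxel
print(f"{voxel_count:.1e} voxels, ~{bytes_total / 1e21:.0f} zettabytes raw")
```

Even with these optimistic numbers, the raw scan comes out on the order of ten zettabytes, which is why parsing and storing the images, rather than the microscopes, could end up dominating the cost.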
WRONG! If they’re able to re-animate preserved people, what makes you think they won’t be able to prevent suicide?
What if they don’t believe in a right to die? There’s no guarantee that you’ll be able to die, if you wake up in a world where cryo revival actually worked.
Or, if I woke up disabled or in an R2D2 robot body, how would I actually go about killing myself? I mean, you can say “roll off a cliff” but if there are no cliffs nearby, or the thing is made out of titanium?
There is no guarantee I’d be able to die in that scenario.
I think you’re underestimating the extent to which advancements may cause catastrophes. We made all these chemicals and machines, now the environment is being destroyed. We made x-ray machines, the first techs to use them used to x-ray their hands to see if the machine was on in the morning—you can imagine what resulted. We’ve learned a lot about science in the last 100 years, great, but now we have nuclear bombs. We may make AI, and there are about 10,000 ways for that to go wrong. I don’t assume technological advancement will lead to a utopia. I hope it does. But to assume that it will is a bad idea. I’d be very interested to see a thorough and well thought out prediction of whether we’ll have a utopia or dystopia in the future, or something that’s neither. I’m really not sure.
Worse: a sensible system would in fact not ONLY give you a “robot body made of titanium” but would maintain multiple backup copies in vaults (and for security reasons, not all of the physical vault locations would be known to you, or anyone) and would use systems to constantly stream updated memory-state data to these backup records (stored as incremental backups, of course).
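For what “incremental backups” could mean here, a minimal content-addressed sketch: only the chunks of the memory-state stream that changed since the last snapshot get stored. The chunk size and hashing scheme are invented for illustration:

```python
# Minimal sketch of incremental, content-addressed snapshots.
# Chunk size and hashing choices are illustrative assumptions.
import hashlib

CHUNK = 4096

def snapshot(state: bytes, store: dict[str, bytes]) -> list[str]:
    """Store only unseen chunks, keyed by content hash; return the
    list of hashes that reconstructs this snapshot."""
    manifest = []
    for i in range(0, len(state), CHUNK):
        chunk = state[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # unchanged chunks are reused
        manifest.append(digest)
    return manifest

def restore(manifest: list[str], store: dict[str, bytes]) -> bytes:
    return b"".join(store[d] for d in manifest)
```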
More than likely, the outcome of “successfully” committing suicide would be to wake up again and face some form of negative consequences for your actions. Suicide could actually be prosecuted as a crime.
Great post, Epiphany. I’d like to volunteer myself as another guinea pig, but with one caveat. Rather than having this experiment end with just two people’s opinions being changed, I’d like to create an argument map of the best arguments on cryonics, so that more people can be persuaded by the best arguments we can aggregate.
There are a lot of argument mapping tools out there, but my favorite one isn’t actually intended to be used as an argument map. I created a rough sketch of an argument map on cryonics.
I am planning to put a list of my objections with links and whether they’re resolved into the OP. So there will be some organization to it.
I’m not sure that LW wants more guinea pigs, some feel that this is a waste of time—you can tell by the karma on my thread that this isn’t really popular. Thanks for the compliment, though.
Also, I am not expecting to be convinced. I’m actually leaning toward “no” right now, as surprising as I bet some think that is. I’ll explain that when I make my next run of responding to comments again.
You know, I think we should argument-map the whole friggin site. Except that I WOULD NOT want to see that being put onto someone else’s software. They’ll have control of the data. I’d prefer to see it in open-source software, editable by the world and copyright-free, so anyone can make a backup without a problem.
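For concreteness, an argument map of the kind discussed here is just a tree of claims with supporting and opposing children. A minimal sketch; the class names and example claims are illustrative, not any particular tool’s format:

```python
# Minimal argument-map sketch: claims as nodes, pro/con edges.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    resolved: bool = False
    pro: list["Claim"] = field(default_factory=list)  # supporting arguments
    con: list["Claim"] = field(default_factory=list)  # objections/rebuttals

def dump(claim: Claim, depth: int = 0) -> None:
    """Print the map as an indented outline with resolution status,
    the way the OP plans to list objections in the post."""
    mark = "[resolved]" if claim.resolved else "[open]"
    print("  " * depth + f"{mark} {claim.text}")
    for child in claim.pro + claim.con:
        dump(child, depth + 1)

root = Claim("Sign up for cryonics.")
objection = Claim("The money would save more lives given to charity.",
                  resolved=True)
objection.con.append(Claim("Rebutted in 'Years saved: Cryonics vs VillageReach'."))
root.con.append(objection)
dump(root)
```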
If I may ask you something; as you write out your various objections here, if you were to consider, on the one hand, the risk of whatever unpleasantness arises from that objection, and on the other hand, that if you don’t take that risk, you will be permanently and irrevocably dead… do you really feel that you’d rather be dead than take that particular risk?
To me, death is merely non-existence. I won’t suffer after that. I won’t know that I’m dead.
After you’re dead, no; but the you of this moment can look forward at the various possible futures, and make choices that make some of those futures more likely than others. One of your objections was to being put in an R2D2 body—so imagine that you, right now, have a choice to make. One choice is that you end up permanently dead. The other choice offers you a chance at life, but with, say, a 5% chance of being put into an R2D2 body.
Are you so certain that such an existence is so terrible, that even a remote chance of it is a worse fate than total oblivion?
Ok, I won’t be able to speak, enjoy food, express emotion, have sex or do any of the things I normally do with my hands. I would be severely disabled. That would be almost like being a paraplegic but with wheels. And I might not be able to see or hear well (does R2D2 have the ability to enjoy HD quality or is it more like recognizable blurs and discernible murmurs?).
What the hell would I realistically do with myself if I couldn’t even communicate? I find meaning in doing constructive projects. Where would I find meaning in a body like R2D2? Without the ability to experience even sensory pleasures, I would become so bored. Imagine staring at a wall for a whole week. That’s how I think it would feel to be trapped in an R2D2 body—but maybe I’d be stuck like that for years.
If you’ve looked into the concept of “flow” (from the book “Flow: The Psychology of Optimal Experience”) you’ll know that not being able to do activities that provide an appropriate challenge might mean you aren’t able to be happy. Gifted children, for instance, develop learned helplessness in schooling environments that go at a much slower pace than they do. I am not satisfied by games—I couldn’t just zoom around on my wheels in patterns and be amused. I am not a gnat, I’m a human being and I need fulfillment. Boredom is a formidable affliction which I don’t dare underestimate.
I think I have to classify the R2D2 body as life support, and say pull the plug or put me back in cryo. I’d rather not just wheel around in little circles while my brain tortures me because of boredom. No R2D2 body.
Good try though.
I’d rate the R2D2 much lower than 5%, at least as far as your conscious experience goes. Your brain might technically be kept in a vault or canister somewhere, but there would be extremely good virtual reality linkups to the brain. Look how good movies are getting with current VR. They have to simulate physics and human anatomy in considerable detail, but often take shortcuts to make the characters cuter and sexier. This is much more likely to be what you have to look forward to. Weirder than you’re used to, but much more appealing than you are thinking here. And that’s all just talking about a possible non-uploaded existence as a meat-brain. If you were to be uploaded, the possibility of being limited in your communication to your environment is even lower.
Even if you were stuck in an R2D2 body or something for years on end with no high-end virtual reality, it is doubtful that you would experience boredom or depression. Boredom and depression are emotional states with particular neurological characteristics. These can be disrupted (even now) by drugs. Furthermore, it seems likely that boredom is dependent on hormonal and/or electrical responses from the rest of the body. A brain by itself probably could not feel boredom without significant prosthetic assistance.
The very notion of existing as a brain in a can means we’ve solved the problem of figuring out how to synthesize and deliver every chemical and stimulus the brain depends on. The delivery mechanism would be digitally regulated, and thus we could feel excitement, boredom, or any other emotion on demand—perhaps even copying these sensations from healthy volunteers. That may not be an optimal human existence, but as an in-between state while waiting on life support to be restored to more optimal humanity it does not seem likely to be unbearable.
For a pop-culture example, take the Cybermen from Dr. Who. (Ridiculous show with ridiculous premises, just using it to make a point.) Their emotions are turned off, but only because their bodies are total pieces of junk that can’t support a brain with emotions. However we’ve seen that the emotions of the brain can in principle be turned back on again. Thus if you were to take away their tendency to be fanatical killing machines and replace it with something else (fanatical lab equipment manufacturers, say), since they can’t feel pain it wouldn’t be a bad thing to be a Cyberman for a few years while waiting to be transplanted into a non-stupid body.
I could wake up in the matrix… I don’t know if I’d want that. Even if it was designed to make me happy. I want meaning, this requires having access to reality. I’ll think about it.
Why would I want to do that? That is even worse. I am disgusted by the idea of having no ability to do anything of use, and even more disgusted by the idea that the solution to this situation is to drug me so that I can’t properly care about the problem. If I’m not able to interact with reality, what is the point in existing?
Three years, okay. But why bring me back at all then? Why not keep me frozen? If I can’t have quality of life, I would prefer that.
Does it? You can have other people in the simulation with you. People find a lot of meaning in companionship, even digitally mediated. People don’t think a conversation with your mother is meaningless because it happens over VOIP. You could have lots of places to explore. Works of art. Things to learn. All meaningful things. You could play with the laws of physics. Find out what it feels like to turn gravity off one day and drift out of your apartment window.
If you wake up one morning in your house, go make a cup of coffee, breathe the fresh morning air, and go for a walk in the park, does it really matter if the park doesn’t really exist? How much of your actual enjoyment of the process derives from the knowledge that the park is ‘real’? It’s not something I normally even consider.
Why is reality important to me? Hmm. Because without access to reality, you always have to wonder what’s happening around you. Wouldn’t there come a point where you went, HOLY CRAP, someone could be sneaking up behind me right now and I’d never know?
Do you trust the outside world enough not to worry about that?
I don’t.
I’d eventually spill coffee on my computer or something and it would dawn on me “What if they spill coffee on my brain?”
I’d want to speak to the outside world. We’d probably be able to access them on the internet or some such. Things would be happening there. I would know about them. Political problems, disasters. Things I couldn’t get involved in.
And if not, then I’d be left to wonder. What’s going on in the outside world? Are things okay?
Imagine this: Imagine being cut off from the news. Not knowing what’s going on in the world.
Imagine realizing that you are asleep. Not knowing whether there’s a burglar in your house, whether it’s on fire. Not being able to wake up.
Imagine your friends all have the same problem. You have no access to reality, so there’s no way you can help them. If something affects them from the outside world, you can give them a hug. A virtual hug. But both of you know that there’s nothing you can do.
With friendship, one of the things that creates bonds is knowing that if I’m in trouble at 3:00 am, I can call my friend. If all the problems are happening in a world that neither of you has access to, if you’re stuck inside a great big game where nothing can hurt you for real, what basis is there for friendship? What would companionship be good for?
You’ll be like a couple of children—helpless and living in a fantasy.
Why are you learning rationality if you don’t see value in influencing reality?
Well, there’s no reason to think you’d be completely isolated from top level reality. Internet access is very probable. Likely the ability to rent physical bodies. Make phone calls. That sort of thing. You could still get involved in most of the ways you do now. You could talk to people about it, get a job and donate money to various causes. Sign contracts, make legal arrangements to keep yourself safe. That sort of thing.
Wait, you only value friendship in so far as it directly aids you? I hate to be the bearer of bad news, but if that’s actually true, then you might be a sociopath.
Rationality is about maximizing your values. I happen to think that most of my values can be most effectively fulfilled in a virtual environment. If the majority of humanity winds up voluntarily living inside a comfortable, interesting, social, novel Matrix environment, I don’t think that’s a bad future. It would certainly solve the over-crowding problem, for quite a while at least.
Hmm. I hadn’t thought very much about blends of reality and virtual reality like that. I’ve encountered that idea but hadn’t really thought about it.
You took one example way too far. That wasn’t intended as an essay on my views of friendship. The words “one of the things that creates bonds” should have been a big hint that I think there’s more to friendship than that. Why did you suddenly start wondering if I’m a sociopath? That seems paranoid, or it suggests that I did something unexpected.
Okay, but the reason why rationality has a special ability to help you get more of what you want is because it puts you in touch with reality. Only when you’re in touch with reality can you understand it enough to make reality do things you want. In a simulation, you don’t need to know the rules of reality, or how to tell the difference between true and false. You can just press a button and make the sun revolve around the earth, turn off laws of physics like gravity, or cause all the calculators to do 1+1 = 3.
In a virtual world where you can get whatever you want by pressing a button, what value would rationality have?
You still need to figure out what you want.
Unless the virtual world is capable of figuring out what you want itself at least as well as you can. In which case bravo, press the button, you win.
Additionally, reality and virtual reality can get a lot fuzzier than that. If AR glasses become popular, and a protocol exists to swap information between them to allow more seamless AR content integration, you could grab all the feeds coming in from a given location, reconstruct them into a virtual environment, and insert yourself into that environment, which would update with the real world in real time. People wearing glasses could see you as though you were there, and vice versa. If you rented a telepresence robot, it would prevent people from walking through you, and allow you to manipulate objects, shake hands, that sort of thing. The robot would simply be replaced by a rendering of you in the glasses. Furthermore, you could step from that real environment seamlessly into an entirely artificial environment, and back again, and overlay virtual content onto the real world. I suspect that in the next twenty years, the line between reality and virtual reality is going to get really fuzzy, even for non-uploads.
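A hypothetical sketch of the feed-swapping idea in that last comment: each pair of glasses publishes pose-tagged frames, and a remote participant gathers every feed near a location to rebuild the scene. All names and fields below are invented for illustration; no such protocol exists:

```python
# Invented sketch of AR feed exchange; not a real protocol.
from dataclasses import dataclass

@dataclass
class ARFeed:
    device_id: str
    location: tuple[float, float]       # latitude, longitude
    pose: tuple[float, float, float]    # camera orientation
    frame: bytes                        # encoded video frame

def feeds_near(feeds: list[ARFeed], here: tuple[float, float],
               radius: float = 0.001) -> list[ARFeed]:
    """Gather every feed close enough to contribute a view of the scene."""
    return [f for f in feeds
            if abs(f.location[0] - here[0]) <= radius
            and abs(f.location[1] - here[1]) <= radius]

# The gathered frames would feed a multi-view reconstruction step, and
# the remote person would be rendered back into each wearer's display.
```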
Try doing that in World of Warcraft, and you’ll find your account canceled.
Well, then there’s your answer to the question ‘what is friendship good for’: whatever other value you place on friendship, the value that makes you neurotypical. I was just trying to point out that that line of reasoning was silly.
Well, you have to get to that point, for starters. And yes, you do need some level of involvement with top-level reality, to pay for your server space if nothing else. Virtual environments permit a big subset of life (play, communication, learning, etc.), often much more efficiently than real life, with a few of the really horrifying sharp edges rounded off and some additional possibilities added.
There are still challenges to that sort of living, both those imposed by yourself, and those imposed by ideas you encounter and by your interactions with other people. Rationality still has value, for overcoming these sorts of obstacles, even if you’re not in imminent danger of dying all the time.
You’re only expressing personal preferences, but I feel enormously uneasy hearing you say “Human beings need fulfillment, therefore I’d rather die than be like a paraplegic with wheels”. People who can’t speak, are fed through tubes, get around on wheels, express emotion in nonstandard ways, lack functioning hands, and can’t have most forms of sex don’t usually want to die, but when they’re murdered by an “angel of mercy” serial killer, you get remarks like those of Ken Wood, ex-husband of one of the Grand Rapids killers.
You might be a very atypical person who’d prefer death to severe disability, but if you are, could you pepper statements like that with disclaimers? That’s kind of a dangerous meme to reinforce.
If they want to live, I have no problem with it. I am not advocating killing them. I realize this is my personal preference. Feel better now?
I don’t know what kind of disclaimer I would even add. “Don’t become a serial killer because I said this?”
And I question whether it really is uncommon for people to choose death over severe disability. Why do so many people have living wills?
I don’t think this is dangerous. What’s dangerous is if the person doesn’t realize that not everyone shares their personal preference.
This idea that we need to censor ourselves when having honest discussions is a meme I would not like to see reinforced. If a meme really is dangerous, I would propose working against it by arguing emotionally and rationally against it, rather than by trying to censor it.
Your values are leaking all over your statements of fact. It is not plausible to me that you have not seen the idea of preferring death to severe disability in lots of places by this point in your rationalist career. From this I conclude that your describing those who feel that way as “very atypical” is not only false, but badly motivated as well.
On the (in my estimation) extremely small chance that you really don’t know how common the idea of preferring death to severe disability is, google “living will,” “Kevorkian,” and “Oregon suicide law” to get a jump start into the large world of people who discuss a myriad of versions and implications of this pretty common meme.
Except when they do.
Tony Nicklinson’s case is by no means the only one I’ve heard of. How do you know that these people are “very atypical” of the severely disabled?
Of course, the idea does lend itself to rationalisations, and according to this blog post, Ken Wood, whom you quoted, is doing exactly that.
Nerd alert: R2-D2 was able to talk with C-3PO. Presumably, under normal circumstances, there would be a robot culture. This doesn’t address whether such a life would be satisfying for someone who was born human.
I realize R2-D2 could communicate with C-3PO; however, I would not qualify that as “being able to speak”. Needing an interpreter would leave me disabled in any situation where the interpreter was not present. Communicating in beeps is a disability, not an ability.
Are you claiming to be indifferent to death?
That’s a good question. I’m not exactly indifferent. I experienced a major illness in which I not only learned what it was like to suffer so much that I understood there were things worse than death, but also had to face the possibility of death and make peace with it. If you haven’t experienced something that made it sink in that there are experiences worse than non-existence, you’ll probably be running on the assumption that living is an opportunity for enjoyment. This is biased. Life is also an opportunity for suffering.
And if you haven’t faced death, I mean really faced it, felt like you were going to die, you probably wouldn’t feel that there was anything to be gained from making peace with it. This is pretty easy to understand if you consider that thinking about death is really upsetting, and if you’re sick enough, you’ll be motivated to think about it constantly, which is neither useful nor pleasant. At the point where you realize “Gee, I’m thinking about this constantly and it isn’t pleasant or useful,” you see the utility in making peace with death.
I haven’t completely lost interest in life or anything; I have some very strong reasons to be here. But death itself just doesn’t provoke the terror it once did. By “peace” I don’t mean that I am indifferent, because I do have a preference; I mean that the old terror is gone.
On making peace with death:
It’s usually a good idea in the short term to make peace with what you can’t change, but when it turns out you can change it, it sort of bites you in the ass. This is true of all forms of learned helplessness, not just accepting death. See what people do to cope with abuse: enormous gain while the abuse lasts, enormous handicap for getting back to life.
On life:
Usual phrasings treat life as neutral and death as insanely bad. I think more of death as neutral and life as insanely good. (Utility is relative, so it makes no difference.) It’s not always (or even often) pleasant and enjoyable, but it’s always interesting. That’s my main problem with pain: it’s bad that it hurts, but it’s worse that it fills your mind and won’t let you focus on something new. Obviously some lives are worse than death (torture, long-term sensory deprivation) and some are better (cake, books). What I’m trying to get at is that “neutral” in terms of pleasure and pain isn’t “neutral” in terms of existence.
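The parenthetical is the standard point that a utility function shifted by a constant ranks every option the same way, which is why the two framings are equivalent. A toy illustration, with arbitrary made-up numbers:

```python
# "Utility is relative": adding a constant to every outcome changes no
# decision, so "life neutral / death terrible" and "death neutral /
# life wonderful" encode the same preferences. Numbers are arbitrary.

life_neutral = {"live": 0.0, "die": -100.0}    # the usual framing
death_neutral = {"live": 100.0, "die": 0.0}    # same thing, shifted +100

def best(utilities: dict) -> str:
    return max(utilities, key=utilities.get)

print(best(life_neutral), best(death_neutral))  # live live -- same choice
```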
Life is full of things; taking in everything about even a tiny detail of a perfectly ordinary object is enough to send you into sensory overload, even before you abstract away curves and colors to categorize the pattern as a single solid object with a given shape, recognize this particular object as a tin and start getting curious about what it contains and how it’s made and why light reflects off metal that way and a thousand other things about this tin and your model of tins. I don’t spend all my waking hours in childlike wonder over everything, though I can whenever I want if I’m not feeling horrible, but I constantly get tiny slices of novelty. That’s why I value life so highly; the cake is just icing.
(All this sounded a lot less confused in my head.)
Absence of terror is not biting me in the ass. I am so much stronger than I used to be. I came out of that illness in a state of bliss like I’ve never felt before—and I still feel it. It isn’t just because I’m healthy, it’s also because I learned so many tricks to reduce my stress. Such as not feeling terrified of death.
You are confusing lack of fear with learned helplessness. I didn’t say that I let go of control; I said that I stopped feeling terrified. Ask yourself this: does feeling helpless do anything to stop your terror? No. So why would it stop mine? That is not the method by which I learned not to be terrified.
You are also confusing “making peace with death” for “accepting death”. Obviously, I don’t accept it—otherwise, why would I have made this thread?
Please try to interpret what I am actually saying.
I see them both as neutral, but I have a wish to make a difference in the world that burns and drives me to live, and I want to experience interacting with others like me (for reasons I don’t totally understand—it is probably some kind of social instinct). For these reasons, I want to live. However, I separate my wishes from my view of whether life and death are good and bad—for the same reason I separate desire from reality. Just because I want something out of life, doesn’t mean that life will give it to me. I could get quite the opposite. Therefore, it doesn’t make sense to me to see life as good or bad. Life is an opportunity for both enjoyment and suffering, and you never know which one you are going to receive next.
You have never been bored?
Also, have you considered that a life full of meaningless pleasure, or of nonconstructive wonder, will not be fulfilling? It sounds like you’ve never been through anything horrible enough that the possibility of deep and prolonged misery feels real to you. You are likely experiencing normalcy bias.
Okay, then I’m mistaken about what you mean by “peace with death”. What I thought it meant is “GAH I’M GOING TO DIE!! …ehn, it wouldn’t be so bad. At least all this crap would be over. And it’s easier to just let it happen than do it myself. I just hope it doesn’t take too long.” Obviously this isn’t what you’re getting at. So… you would have signed up for cryo to avoid both death and the fear of death, but just avoiding death isn’t good enough, because death is only bad if life is good, which it might not be. Is that right?
If you don’t see death as inherently worse than life, I don’t think I can convince you to sign up for cryo! (Well, any future in which you get revived is more likely to get you a good life than an inescapable bad one. And you could always ask Mike Darwin if you can state conditions for revival. But still, if you like anchovy and I like pineapple, I can’t convince you to order the Hawaiian pizza.)
I could point to people in awful situations who get an overwhelming drive to survive. The archetypal example would be Saw, which is about people who don’t like life all that much being forced to do very painful things to survive, thus revealing a preference for life. (It’s a terrible example, because it’s fictional and the characters have good things to return to, not just life. But you get the point.) But I don’t have stats on how many switch to survivor mode and how many just sort of give up or get suicidal, and even if most people did, you could just say “So? Many people are like you. I’m not.”
Well… I get bored when I can’t focus on the shiny, because there’s something I can’t block out (noise, pain, a droning teacher) or because I don’t have enough room in my brain (and any writing material I might have) to comprehend the shiny. I also get bored when I can’t find any new things, because there’s nothing to prompt me to think about a new question (my trick was to start thinking about the psychology of boredom, but that’s exhausted by now).
Nonconstructive? Where do you think physicists come from?
More seriously, it wouldn’t be very fulfilling, but I prefer feeling nothing but pleasure to feeling nothing at all.
To steelman your argument, I might not remember now what it really felt like, and thus have lost any aliefs I acquired then. I distinctly remember thinking “I’m gonna eat up this plate of shit and demand seconds”, but even that wasn’t at the worst of times.
I try to feed “not existing” into my brain’s utility evaluation module (a.k.a. the “how would I feel if this happened” test) and all it returns is confusion. On the pleasure-pain hedonism scale, not existing doesn’t evaluate to zero, it evaluates to “syntax error”. I can easily calculate that my sudden death would make the world a worse place, but I can’t figure out if I should prefer a world in which my mom had a genetically different child (who would then grow up to be a person that isn’t “me”) to one in which “I” exist.
Of all the possible worlds, why should I prefer those in which “I” came into existence to those in which someone else existed instead? Similarly, why should I prefer a distant future in which I’m resurrected from cryonic suspension to one in which I’m not?
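If I literalize my own “syntax error” metaphor, it looks something like the sketch below. The hedonic_utility function and its values are made up for illustration; the point is the difference between an outcome that evaluates to zero and one that is simply outside the function’s domain:

```python
# A made-up hedonic utility function. Outcomes it covers get numbers;
# "nonexistence" isn't a zero-valued experience, it's outside the domain.

def hedonic_utility(state: str) -> float:
    scale = {"torture": -100.0, "boredom": -1.0, "cake": 10.0}
    if state == "nonexistence":
        raise ValueError("undefined: no experiencer")  # not 0.0
    return scale.get(state, 0.0)

print(hedonic_utility("cake"))  # 10.0
try:
    print(hedonic_utility("nonexistence"))
except ValueError as err:
    print("syntax error, more or less:", err)
```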
Agree that the utility of death is undefined on the hedonic scale. Still gotta measure it somehow.
This is not similar! The you algorithm is currently embodied and running. Making it stop running forever, whether by dropping a piano on your head or by neglecting to thaw you, kills you. I don’t want people to die, and I don’t think you do either.
I am indifferent between various people being born, and I think indifferent to how many are born, except insofar as they will lead good or bad lives. You don’t seem to be a very happy person, so I wish you’d never been born. (Zing.) But we can’t unbirth you, and clever tricks like pretending you already died and we have an opportunity to birth you again won’t help.
I’m not so sure; “you’ve already died and we have an opportunity to birth you again” isn’t very relevant to the question of whether one should commit suicide or not, but it does seem, to me, to be exactly what cryonics is offering.
It seems like most of the external effects of my death happen regardless of whether I’m revived from cryonic suspension or not. Suppose that a piano is about to fall on my head, but at the last minute, a wormhole opens up beneath me, and I end up in the middle of the Delta Quadrant surrounded by friendly, English-speaking aliens. ;) Now, in this (silly) scenario, I happen to be alive and well, but everyone else saw me get flattened by a falling piano and thinks I’m dead. As far as its effect on the rest of the Earth is concerned, this is basically just as bad as if the piano actually did hit me: my family and friends will still grieve, etc. And since all I get is confusion when I ask myself if it is better for me if I exist or not, I don’t know if I have a reason to prefer “piano + wormhole” to “piano + splat”. (Ignore the effect my presence will have on the aliens.)
I prefer you not to die even if I don’t know about it. I’m allowed to have preferences about events I can’t observe and there’s nothing you can do about it, so there.
Also, wouldn’t the people you care about be happier hoping you’ll make it to the future than knowing you’re dead and gone? Some of them might even be around when you get thawed.
I told my 14-year-old daughter about cryo; she was amazed, incredulous. She said something like “those people don’t believe in life after death?” I said “no, do you?” She said she did.
I realized there is a case to be made that, if there is a life after death, cryo would interfere with it.
I think there are a lot of reasons I don’t buy into cryo. But one of them is that I think the extremely small chance of successful and happy revival is at a similar level to the extremely small chance that there is some sort of “next step” for us after death. If the people buying into cryo are making a sort of Pascal’s wager with death, I feel like I’m the guy saying “but what if God is Buddha? What if God is Islamic?”
When it comes down to it, my estimate is cryo is 99.999+% likely to be meaningless, epsilon likely to result in a happy revival and epsilon likely to screw up my afterlife.
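To put that wager in toy numbers: the epsilons below are arbitrary stand-ins, since I deliberately left them unspecified, but the structure is the point, symmetric tiny probabilities with opposite stakes cancel:

```python
# Toy version of the wager as framed above. The epsilons are arbitrary
# stand-ins; only the structure of the argument matters here.

p_nothing = 0.99999      # cryo turns out to be meaningless
p_happy_revival = 5e-6   # "epsilon"
p_lost_afterlife = 5e-6  # also "epsilon"

u_happy, u_lost = +1.0, -1.0  # symmetric stakes, purely illustrative

expected_gain = (p_nothing * 0.0
                 + p_happy_revival * u_happy
                 + p_lost_afterlife * u_lost)
print(expected_gain)  # 0.0: with symmetric epsilons the wager washes out
```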
I’m a neurotypical straight male, but I suspect my reaction to cryo is similar to the caricature of female reactions. That’s my intuition anyway.
Really? I wouldn’t put the odds of revival under best-case practices any lower than maybe 10%. How on earth do you have such high confidence that whole brain emulation (WBE) won’t be perfected in the next couple of hundred years?
I put the odds that we will have nanobots in our bloodstream killing cancer cells, regulating our chemistry to avoid a lot of metabolic problems, repairing injuries, and so on, at a pretty high number. I put the odds that we will figure out how to put a living human into some sort of suspended animation and bring them back to regular animation at some reasonable level. I put the odds that, if we made our best effort to freeze a living person now without damage, we would eventually be able to revive them at maybe 10%. The odds that we will be able to revive a person frozen or otherwise preserved after they are legally dead are getting down toward time-machine-to-the-past odds, since I think you are freezing after important parts of the information are already lost.
Conditioning on having the technical ability to revive the frozen might raise the odds of eventually being revived toward 10%. There are a lot of things other than technical impossibility that might keep revival from happening.
If you’re talking about people frozen after four plus hours of room temperature ischemia, I’d agree with you that the odds are not good. However, somebody with a standby team, perfused before ischemic clotting can set in and vitrified quickly, has a very good chance in my book. We’ve done SEM imaging of optimally vitrified dead tissue, and the structural preservation is extremely good. You can go in and count the pores on a dendrite. There simply isn’t much information lost immediately after death, especially if you get the head in ice water quickly.
I also have quite a high confidence that we’ll be seeing WBE technology in the next forty years (I’d wager at better-than-even odds that we’ll see it in the next twenty). The component technologies already exist and need only iterative improvement, and many of them are falling exponentially in cost. That, combined with what I suspect will be rather high demand once the potential reaches public consciousness, is a pretty potent combination of forces.
So, for me, I lose most of my probability mass to the idea that, if you’re vitrified now, something will happen to Alcor within 40 years, or, more generally, that some civilization-disrupting event will occur in the same time frame. That your brain isn’t preserved (under optimal conditions), or that we’ll never figure out how to slice up and emulate a brain: neither is a serious point of concern to me.
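If it helps, my estimate decomposes into a product of rough factors like the sketch below. Every number in it is an illustrative stand-in, not anyone’s stated credence; substitute your own and multiply:

```python
# Illustrative decomposition of P(revival) into rough independent
# factors. All numbers are stand-ins; plug in your own credences.

factors = {
    "preserved well enough at death": 0.5,   # perfusion/vitrification quality
    "emulation or repair tech works": 0.5,   # WBE or equivalent is developed
    "Alcor (or a successor) endures": 0.4,   # org / civilization disruption
    "someone chooses to revive you":  0.8,   # legal, economic, social factors
}

p_revival = 1.0
for name, p in factors.items():
    print(f"{name}: {p:.2f}")
    p_revival *= p

print(f"P(revival) = {p_revival:.3f}")  # 0.080 with these stand-ins
```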
Women in recent decades have clamored to get into social spaces traditionally dominated by men and associated with male power and privilege, because these women want to raise their own status to male levels. For example, women want to get on Facebook’s board of directors. By contrast, notice women’s lack of interest in becoming guards in men’s prisons.
Cryonics has a reputation (wrongly) as a rich white man’s social space, so why haven’t women wanted to colonize the cryonics community for status reasons? (For that matter, why haven’t we heard calls for more “diversity” and “vibrancy” in the cryonics movement from minorities’ spokespeople?) Instead cryonics acts like “female Kryptonite” much of the time.
You can read Mike Darwin and the de Wolfs’ essay on the cryonics “hostile wife” phenomenon to get about as much insight into the problem as I’ve read, but I don’t see any obvious way to turn this around so that the cryonics movement becomes more women-friendly. I’ve wondered if we can find a good model by studying new American religious movements in the 19th century which attracted many women as early adopters, like Mormonism, Seventh-day Adventism, and Christian Science. Women even played founding roles in Adventism and Christian Science (Ellen White and Mary Baker Eddy, respectively), which makes their examples all the more interesting because Western culture has traditionally not accepted women as religious authority figures.
The cryonics idea also has a fiction problem: Three novels I know of which portray cryonics or suspended animation positively all show a man who takes advantage of an underage girl as part of his plan for self-fulfillment, while disregarding the possibility that the girl upon reaching her majority might have other plans for her life. Will McIntosh’s story “Bridesicle” in Asimov’s magazine a few years ago shows an even more repulsive exploitation of women involving cryonics.
In other words, the cryonics movement would probably benefit by disavowing those kinds of stories and replacing them with ones which treat the womenfolk better.