What’s the difference between making friends now and making friends after you wake up? What’s the difference between making a family now, and making a new family then? (here I’m referencing both this comment about finding new friends, and your comment in the other thread about starting a new family)
If a friendly singularity happens, I think it’s likely that the desire of extroverts like you for companionship and close relationships will have been taken into account along the way and that forming these bonds will still be possible.
Of course right now I’d want to be with my current fiancée, and I’m planning to try to convince her to sign up for cryonics, but if I lost her, I’d still rather live and have to figure out another way to get companionship in the far future than to die.
First of all, my friends aren’t interchangeable. It’s already a big step for me to be willing to make a presorted cryonics-friendly friend as a substitute for getting my entire existing cohort of companions on board, or even just one. Second of all, waiting until after revival introduces another chain of “ifs”—particularly dreadful ifs—into what’s already a long, tenuous chain of ifs.
Of course they aren’t. I’m just saying that I’d prefer making new friends to death, and that despite the fact that I love my friends very much, there’s nothing that says that they are the “best friends I can ever make” and that anybody else can only provide an inferior relationship.
Second of all, waiting until after revival introduces another chain of “ifs”—particularly dreadful ifs—into what’s already a long, tenuous chain of ifs.
Once again, between the certitude of death and the possibility of life in a post-friendly-singularity world, I’ll take the “ifs” even if it means doing hard things like re-building a social circle (not something easy for me).
I’m just having a really hard time imagining myself making the decision to die because I lost someone (or even everyone). In fact, I just lost my uncle (brain cancer), and I loved him dearly; he was like a second father to me. His death just made me feel even more strongly that I want to live.
But I suppose we could be at opposite ends of the spectrum when it comes to these kinds of things.
I guess I’m just more dependent on ready access to deeply connected others than you? This sounds like a matter of preferences, not a matter of correctly turning those preferences into plans.
If you need friends post-suspension, you can pay for my suspension (currently my budget goes to X-risk), and I will promise to spend a total of at least one subjective current human lifetime sincerely trying to be the best friend I can for you, unless the revived get a total of less than 100 subjective human lifetimes of run-time, in which case I will give you 1% of my total run-time instead. If that’s not enough, you can also share your run-time with me. I will even grant you the right to modify my reward centers to directly make me like you in any copy running on run-time you give me. This offer doesn’t allow your volition to replace mine in any other respect if the issue is important.
You’re on. Alicorn, would you be so kind as to arbitrate? We need you to evaluate which of these three categories Michael’s offer fits into:
Deeply Disturbing
Helpful
Just ‘somewhat’ disturbing all the way through to indifference.
Would ‘slightly amusing’ count as helpful if it served to create slightly more confidence in the prospect of actively seeking out the friendship of the potentially cryonically inclined?
Yep, disturbing. “Deeply” might be pushing it a little. But a) I’ll have to mess with my budget to afford one suspension, let alone two, and while I’d chip in for my sister if she’d let me, people I do not yet know and love are not extended the same disposition. b) There’s presently no way to enforce such a promise. c) Even if there were, that kind of enforcement would itself be creepy, since my ethics would ordinarily oblige me to abide by any later change of mind. d) This arrangement does nothing to ensure that I will enjoy MichaelVassar’s company; I’m sure he’s a great person, but there are plenty of great people I just don’t click with. e) I do not like the idea of friendships with built-in time quotas, I mean, ew.
You’re forgetting the part where they revive you only when there is a cure for whatever you died from. You may be revived long before or after they are revived. And if that happens, there’s also the chance that they die again and aren’t stored before you’re revived. You’d probably have to give instructions to hold off on revival, or risk the missed connection.
I’m in the process of signing up (yeah, I know, they’re all saying that… But I really am! and plan to post about my experience on LW once it’s all over) and I’ll be your friend too, if you’ll have me as a friend.
I’m working on it. It’s taking a bit longer than planned because the insurance company threw a few extra hoops for me to jump through (including some results from samples they took from me that they don’t like; I need to see a doc and have them look at the data and pass judgement on it for the insurance company). Hence the need to make a doc appointment.
Given the opening post I am not sure I understand what you are saying. What about being resurrected with the people described would be an Extrovert Hell? That you don’t have any pre-revival friends?
I’m referencing a prior thread. Pre-revival friends or family are a prerequisite for me not looking at the prospect of revival with dread instead of hope.
With those values, the ‘find friends who are signed up to cryonics’ plan sounds like the obvious one. (Well, less obvious than the one where you kidnap your friends, cut off their heads, and preserve them against their will. But more sane.)
I bet that online dating and friend making will work a lot better in the future. Can you elaborate about what is so dreadful about waking up without knowing anyone?
I’m referencing a prior thread. Pre-revival friends or family are a prerequisite for me not looking at the prospect of revival with dread instead of hope.
You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.
Okay, 1) I dislike the “shut up and multiply” sentiment anyway, since it’s so distinctly consequentialist. I will not shut up, and I will only multiply when everything I’m multiplying is really commensurate including in a deontic sense. I will walk away from Omelas should I have occasion. And 2) it’s my freakin’ life. I’m not deciding to deny someone else the chance to be ferried to the future on the basis of it sounding lonely.
Is there some other significance to the links and quote that you hoped I’d extract?
Is there some other significance to the links and quote that you hoped I’d extract?
The significant claim seems to be that it is often necessary to quell an instinctive reaction in order to best meet your own preferences. There are some reflectively consistent preference systems in which it is better to die than to suffer the distress of a lonely revival, but there are many more in which it is not. I take Vladimir’s suggestion to be “make sure this is what you really want, not just akrasia magnified a thousand times”.
And 2) it’s my freakin’ life. I’m not deciding to deny someone else the chance to be ferried to the future on the basis of it sounding lonely.
Claims of the shape of Vladimir’s are often intended to enforce a norm upon the recipient. In this case the implied ‘should’ is of the kind “action X may best give Y what they want”, which is at least slightly less objectionable.
I did a reversal test on the preference; if everybody I cared about disappeared from my life all at once and everybody who remained was as alien as the people of the future will likely be, I would probably want to die, no cryonics required.
I bet that online dating and friend making will work a lot better in the future. There probably exist many people in the future that appreciate your unique knowledge and want to get to know you better.
When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future.
Can you talk more about why your future is so dreadful? Stating that all possible futures are worse than death is a strong statement. In this reversal test, it even assigns a “probably” to being suicidal. I think your flaw in reasoning lies there. I don’t think that being “probably” suicidal in the future is sufficient reason to not visit the future.
In our time, we morally justify the forcible hospitalization and medication of suicidal people until they aren’t suicidal anymore. With Friendly AI, this moral justification may remain true in the future, and once you’re on drugs or other brain enhancements, you’ll probably love life and think your first-life self absolutely insane for preferring death to glorious existence. Again, I think your desire for deep connections with other people is likely to be nearly immediately fixable in the future. This does sound a little dystopian, but I don’t think there exist very many wake-up futures in which your existential misery cannot be fixed.
To me, it seems like in nearly all cases it is worth waiting until the future to decide whether or not it is worth living.
“When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future.”
Woman: You’re from 1999? I’m from 2029! Say, remember when we got invaded by the cybernetic ape army?
In our time, we morally justify the forcible hospitalization and medication of suicidal people until they aren’t suicidal anymore. With Friendly AI, this moral justification may remain true in the future, and once you’re on drugs or other brain enhancements, you’ll probably love life and think your first-life self absolutely insane for preferring death to glorious existence. Again, I think your desire for deep connections with other people is likely to be nearly immediately fixable in the future. This does sound a little dystopian, but I don’t think there exist very many wake-up futures in which your existential misery cannot be fixed.
Hmm… At least the content of my position seems to have been rehashed a lot, even if you won’t agree with it.
I believe that your opinion about what your values are has very little influence on what your values actually are, which in the backbone are human-universal values plus a lot of person-specific detail that is so far below the level of conscious understanding that it isn’t even worth speculating about. Whenever someone states an opinion about their values being extreme, they are seriously wrong about their actual values. Consequently, acting on the misconstrued values is against the person’s own actual values.
I don’t grant nearly as much credence to the idea that there are human-universal values as most people around here seem to. People are a wacky, diverse bunch.
Also, if you have an idea about what my values Really Are that is unconnected to what I tell you about them, I don’t want you anywhere near any decisions about my life. Back! Back! The power of my value of self-determination compels you!
Also, if you have an idea about what my values Really Are that is unconnected to what I tell you about them, I don’t want you anywhere near any decisions about my life.
I get my ideas about what people’s values Really Are based on their decisions. How much weight I place on what they tell me about their values varies based on their behaviour and what they say. I don’t make it my business to be anywhere near any decisions about other people’s lives except to the extent that they could impact me and I need to protect my interests.
I don’t grant nearly as much credence to the idea that there are human-universal values as most people around here seem to. People are a wacky, diverse bunch.
That assumption (and presumption!) of human-universal values scares me at times. It triggers my “if you actually had the power to act on that belief I would have to kill you” instinct.
Even with that kind of ruthless self-determination in mind it is true that “acting on the misconstrued values is against the person’s own actual values”. Vladimir’s point is not particularly controversial; whether it applies to you or not is for you to decide and Vladimir to speculate on if he happens to be curious.
Absolutely. And I weigh that information more highly coming from you than from many people, given my observations of apparent self-awareness and maturity somewhat beyond what I expect given your self-reported age. Obviously such judgements also vary based on topic and context.
In general, however, my life has been a lot simpler and more successful since realising what people say about their values is not always a reliable indicator.
Back! Back! The power of my value of self-determination compels you!
Friendly AI be the judge (I’m working on that). :-)
By the way, this reminds me of Not Taking Over the World (the world is mad and is afraid of getting saved, of course, in the hypothetical scenario where the idea gets taken seriously to begin with!).
I don’t recall hearing that kind of an argument presented here anywhere. Yes, there have been arguments about your values shifting when you happen to achieve power, as well as seemingly altruistic behavior actually working to promote individual fitness. But I don’t think anybody has yet claimed that whenever somebody feels they have extreme values, they are wrong about them.
Furthermore—if the discussion in those referenced posts is the one you’re referring to—I’d be hesitant to claim that the consciously held values are false values. People might actually end up acting on the non-conscious values more than they do on the conscious ones, but that’s no grounds for simply saying “your declared values are false and not worth attention”. If you went down that route, you might as well start saying that since all ethics is rationalization anyway, then any consequentialist arguments that didn’t aim at promoting the maximum fitness of your genes were irrelevant. Not to mention that I would be very, very skeptical of any attempts to claim you knew someone else’s values better than they did.
I’m not arguing for the supremacy of non-conscious values: in many cases, people have good sense of their actual values and consciously resolve their implications, which is what I see as the topic of Which Parts Are “Me”?. The inborn values are not a fixed form, although they are a fixed seed, and their contradictions need to be resolved.
If you went down that route, you might as well start saying that since all ethics is rationalization anyway, then any consequentialist arguments that didn’t aim at promoting the maximum fitness of your genes were irrelevant.
Human universal (we all share the bulk of our values),
Complexity of value (there is a lot of stuff coded in the inborn values; one can’t explain away huge chunks of this complexity by asserting them not present in one’s particular values),
Fake simplicity (it’s easy to find simple arguments that gloss over a complex phenomenon),
No, Really, I’ve Deceived Myself (it’s not a given that one even appreciates the connection of the belief with the asserted content of that belief)
These obviously don’t form a consistent argument, but may give an idea of where I’m coming from. I’m only declining to believe particularly outrageous claims, where I assume the claims are being made because of error and not because of the connection to reality; where the claims are not outrageous, they might well indicate the particular ways in which the person’s values deviate from the typical.
I suspect this community overemphasizes the extent to which human universals are applicable to individuals (as opposed to cultures), and underemphasizes individual variation. I should probably write a post regarding this at some point.
Well put. My own uncertainty with regard to my values is the main reason I’m reluctant to take “mind hacks” out for casual spins—I’ve been quite surprised in the past by how sophisticated subconscious reactions can be. That said, I don’t think I could bring myself to ignore my consciously-held values to the point of doing something as significant as signing up for cryonics, were that necessary.
I thought “I’m so embarrassed I could die” was just a figure of speech.
You weren’t convinced by Eliezer’s post? Do you think signing up for cryonics will get you ostracized from your social circles? Besides the two witnesses on some of the forms, nobody will know unless you tell them or flaunt your ID tags. Are there no two people who you are willing to trust with a secret?
...This has nothing to do with embarrassment. The problem isn’t that people will stop being my friend over it, the problem is that they will all die and then the best case scenario will be that I will wake up in a bright new future completely alone.
I’m actually still confused. That doesn’t sound like ‘Extrovert Hell’. Extroverts would just make a ton of new friends straight away. A lone introvert would have more trouble. Sure, it would be an Extrovert Very Distressing Two Weeks, but death is like that. (Adjust ‘two weeks’ to anything up to a decade depending on how vulnerable to depression you believe you will be after you are revived.)
I honestly do not think I’d last two weeks. If I go five conscious hours without having a substantial conversation with somebody I care about, I feel like I got hit by a brick wall. I’m pretty sure I only survived my teens because I had a pesky sister who prevented me from spending too long in psychologically self-destructive seclusion.
This sounds like an unrealistically huge discount rate. To be precise, you anticipate:
(a) One week of being really unhappy while you go through the process of making new friends (perhaps with someone else who’s really unhappy for similar reasons). I assume here that you do not find the process of “making a new friend” to be itself enjoyable enough to compensate. I also suspect that you would start getting over the psychological shock almost immediately, but let’s suppose it actually does take until you’ve made a friend deep enough to have intimate conversations with, and let’s suppose that this does take a whole week.
(b) N years of living happily ever after.
It’s really hard to see how the former observer-moments outweigh the latter observer-moments.
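To make that weighing explicit (a rough sketch; the symbols here are illustrative, not anything you’ve stated): write d for the disutility of that first lonely week and u for the utility of one ordinary good year of the second life. Turning down the N years only comes out ahead if

\[ d > N u , \]

i.e. only if one bad week is weighted more heavily than all N years put together, a disproportion of thousands of ordinary good weeks to one.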
I think it’s this that commenters are probably trying to express when they wonder if you’re thinking in the mode we name “rational”: it seems more like a decision made by mentally fleeing from the sheer terror of imagining the worst possible instant of the worst possible scenario, than any choice made by weighing and balancing.
I also tend to think of cryonics as a prophylactic for freak occurrences rather than inevitable death of old age, meaning that if you sign up now and then have to get suspended in the next 10 years for some reason, I’d rate a pretty good chance that you wake up before all your friends are dead of old age. But that shouldn’t even be an issue. As soon as you weigh a week against N years, it looks pretty clear that you’re not making your decision around the most important stakes in the balance.
I know you don’t endorse consequentialism, but it seems to me that this is just exactly the sort of issue where careful verbal thinking really does help people in real life, a lot—when people make decisions by focusing on one stake that weighs huge in their thoughts but obviously isn’t the most important stake, where here the stakes are “how I (imagine) feeling in the very first instant of waking up” versus “how I feel for the rest of my entire second life”. Deontologist or not, I don’t see how you could argue that it would be a better world for everyone if we all made decisions that way. Once you point it out, it just seems like an obvious bias—for an expected utility maximizer, a formal bias; but obviously wrong even in an informal sense.
I think that the distress would itself inhibit me in my friend-making attempts. It is a skill that I have to apply, not a chemical reaction where if you put me in a room with a friendly stranger and stir, poof, friendship.
Um… would I deeply offend you if I suggested that, perhaps, your worst fears and nightmares are not 100% reflective of what would actually happen in reality? I mean, what you’re saying here is that if you wake up without friends, you’ll be so shocked and traumatized that you’ll never make any friends again ever, despite any future friend-finding or friend-making-prediction software that could potentially be brought to bear. You’re saying that your problem here is unsolvable in the long run by powers up to and including Friendly superintelligence and it just doesn’t seem like THAT LEVEL of difficulty. Or you’re saying that the short-run problem is so terrible, so agonizing, that no amount of future life and happiness can compensate for it, and once again it just doesn’t seem THAT BAD. And I’ve already talked about how pitting verbal thought against this sort of raw fear really is one of those places where rationality excels at actually improving our lives.
Are you sure this is your true rejection or is there something even worse waiting in the wings?
I’m making projections based on psychological facts about myself. Anticipating being friendless and alone makes me unhappy all by itself; but I do have some data on how I get when it actually happens. I don’t think I would be able to bring to bear these clever solutions if that happened (to the appropriate greater magnitude).
I do consider this a problem, so I am actively trying to arrange to have someone I’d find suitable signed up (either direction would work). This is probably a matter of time, since my top comment here did yield responses. I’d bet you money, if you like, that (barring financial disaster on my part) I’ll be signed up within the next two years.
I asked this elsewhere, but I’ll ask again: what if the unhappiness and distress caused by the lack of friends could suddenly just disappear? If you could voluntarily suppress it, or stop suppressing it? There will almost certainly be technology in a post-revival future to let you do that, and you could wake up with that ability already set up.
What about this: leave instructions with your body to not revive you until there is technology that would allow you to temporarily voluntarily suppress your isolation anxiety until you got adjusted to the new situation and made some friends.
If you don’t like how extraverted you are, you don’t have to put up with it after you get revived.
Would you be opposed to using it? Would you be opposed to not returning to consciousness until the technology had been set up for you (i.e. installed in your mind), so it would be immediately available?
I find that surprising. (I don’t mean to pass judgment at all. Values are values.) Would you call yourself a transhumanist? I wonder how many such people have creepy feelings about mind modifications like that. I would have thought it’s pretty small, but now I’m not sure. I wonder if reading certain fiction tends to change that attitude.
I would call myself a transhumanist, yes. Humans suck, let’s be something else—but I would want such changes to myself to be very carefully understood by me first, and if at all possible, directed from the inside. I mentioned elsewhere that I’d try cognitive exercises if someone proposed them. Brain surgery or drugs or equivalents, though, I am not open to without actually learning what the heck they’d entail (which would take more than the critical time period absent other unwelcome intervention), and these are the ones that seem captured by “technology”.
Hmm. What I had in mind isn’t something I would call brain surgery. It would be closer to a drug. My idea (pretty much an “outlook” from Egan’s Diaspora) is that your mind would be running in software, in a huge neuron simulator, and that the tech would simply inhibit the output of certain, targeted networks in your brain or enhance others. This would obviously be much more targeted than inert drugs could achieve. (I guess you might be able to achieve this in a physical brain with nanotech.)
I’m not sure if this changes your intuition any. Perhaps you would still be uncomfortable with it without understanding it first. But if you trust the people who would be reviving you to not torture and enslave you, you could conceivably leave enough detailed information about your preferences for you to trust them as a first-cut proxy on the mind modification decision. (Though that could easily be infeasible.) Or perhaps you could instruct them to extrapolate from your brain whether you would eventually approve of the modification, if the extrapolation wouldn’t create a sentient copy of you. (I’m not sure if that’s possible, but it might be.)
I trust the inhabitants of the future not to torture and enslave me. I don’t trust them not to be well-intentioned evil utilitarians who think nothing of overriding my instructions and preferences if that will make me happy. So I’d like to have the resources to be happy without anybody having to be evil to me.
But that wouldn’t be making you happy. It’d be making someone very much like you happy, but someone you wouldn’t have ever matured into. (You may still care that the latter person isn’t created, or not want to pay for cryonics just for the latter person to be created; that’s not the point.) I doubt that people in the future will have so much disregard for personal identity and autonomy that they would make such modifications to you. Do you think they would prevent someone from committing suicide? If they would make unwanted modifications to you before reviving you, why wouldn’t they be willing to make modifications to unconsenting living people*? They would see your “do not revive unless...” instructions as a suicide note.
* Perhaps because they view you as a lower life form for which more paternalism is warranted than for a normal transhuman.
Of course that’s not a strong argument. If you want to be that cautious, you can.
How about a scenario where they gave you something equivalent to a USB port, and the option to plug in an external, trivially removable module that gave you more conscious control over your emotional state but didn’t otherwise affect your emotions? That still involves brain surgery (to install the port), but it doesn’t really seem to be in the same category as current brain surgery at all.
Hmmm. That might work. However, the ability to conceptualize one way to achieve the necessary effect doesn’t guarantee that it’s ever going to be technically feasible. I can conceptualize various means of faster-than-light travel, too; it isn’t obliged to be physically possible.
I suspect I have a more complete and reality-connected model of how such a system might work than you have of ftl. :)
I’m basically positing a combination of more advanced biofeedback and non-pleasure-center-based wireheading, for the module: You plug it in, and it starts showing you readings for various systems, like biofeedback does, so that you can pinpoint what’s causing the problem on a physical level. Actually using the device would stimulate relevant brain-regions, or possibly regulate more body-based components of emotion like heart- and breathing-rate and muscle tension (via the brain regions that normally do that), or both.
I’m also assuming that there would be considerable protection against accidentally stimulating either the pleasure center or the wanting center, to preclude abuse, if they even make those regions stimulateable in the first place.
Of course I know how FTL works! It involves hyperspace! One gets there via hyperdrive! Then one can get from place to place hyper-fast! It’s all very hyper!
*ahem*
You have a point. But my more emotionally satisfying solution seems to be fairly promising. I’ll turn this over in my head more and it may serve as a fallback.
That seems like a fairly extreme outlier to me. I’m an extrovert, and for me that appears to mean simply that I prefer activities in which I interact with people to activities where I don’t interact with people.
Sounds like “five hours” might be something worth the pain of practicing to extend. Maybe not for you, but outlier time-brittle properties like that in me worry me.
Refraining from pushing the five hour limit harder than I have to is a very important part of my mood maintenance, which lets me not be on drugs, in danger of hurting myself, or just plain unhappy all the time. The farther I let myself get, the harder it is to muster the motivation to use my recovery strategies, and the longer they take to work.
From my point of view this state of being seems unstable and unhealthy. I cannot imagine having my personal state of mind being so reliant on others.
I love having a good conversation with a friend. But I could also probably go for weeks without having such a thing. Probably the longest I’ve been alone is a week and I enjoyed it.
I can’t see from your viewpoint, but from my viewpoint you should do everything in your power to change how reliant you are on others. It seems like if you are so reliant on others, you are going to, consciously or not, change your values and beliefs merely to ensure that you have people you can associate with.
I’m dependent on many things, and the ability to chat with people is one of the easiest to ensure among them. If I decide that I’m too dependent on external factors, I think I’ll kick the brie habit before I try to make my friends unnecessary.
I’m not sure whence your concern that I’ll change my values and beliefs to ensure that I have people I can associate with. I’d consider it really valuable evidence that something was wrong with my values and beliefs if nobody would speak to me because of them. That’s not the case—I have plenty of friends and little trouble making more when the opportunity presents itself—so I’m not sure why my beliefs and values might need to shift to ensure my supply.
Perhaps I misunderstood what your “dependency” actually is. If your dependency was that you really need people to approve of you (a classic dependency and the one I apparently wrongly assumed), then it seems like your psyche is going to be vastly molded by those around you.
If your dependency is one of human contact, then the pressure to conform would probably be much less of a thing to worry about.
I would like to address your first paragraph...”making your friends unnecessary” isn’t what I suggested. What I had in mind was making them not so necessary that you have to have contact with them every few hours.
Anyway, it’s all academic now, because if you don’t think it’s a problem, I certainly don’t think it’s a problem.
ETA: I did want to point out that I have changed over time. During my teenage years I was constantly trying to be popular and get others to like me. Now, I’m completely comfortable with being alone and others thinking I’m wrong or weird.
From my point of view this state of being seems unstable and unhealthy. I cannot imagine having my personal state of mind being so reliant on others.
If you cannot so imagine, then perhaps making judgements on what is ‘unhealthy’ for a person who does rely so acutely on others may not be entirely reliable. If someone clearly has a different neurological makeup, it can be objectionable either to say they should act as you do or that they should have a different neurological makeup.
It is absolutely fascinating to me to see the ‘be more like me’ come from the less extroverted to the extrovert.
It is absolutely fascinating to me to see the ‘be more like me’ come from the less extroverted to the extrovert.
Well, in fairness, my particular brand of extroversion really is more like a handicap than a skill. The fact that I need contact has made me, through sheer desperation and resulting time devoted to practice, okay at getting contact; but that’s something that was forced, not enabled, by my being an extrovert.
Well, in fairness, my particular brand of extroversion really is more like a handicap than a skill.
Definitely. It could get you killed. It had me wondering, for example, if the ~5 hours figure is highly context dependent: You are on a hike with a friend and 12 hours from civilisation. Your friend breaks a leg. He is ok, but unable to move far and in need of medical attention. You need to get help. Does the fact that every step you take is bound up in your dear friend’s very survival help at all? Or is the brain like “No! Heroic symbolic connection sucks. Gimme talking or physical intimacy now. 5 hours I say!”? (No offence meant by mentioning a quirk of your personality as a matter of speculative curiosity. I just know the context and nature of isolation does make a difference to me, even though it takes around 5 weeks for such isolation to cause noticeable degradation of my sanity.)
If it was my handicap I would be perfectly fine with an FAI capping any distress at, say, the level you have after 3 hours. Similarly, if I was someone who was unable to endure 5 consecutive hours of high stimulus social exposure without discombobulating I would want to have that weakness removed. But many people object to being told that their natural state is unhealthy or otherwise defective and in need of repair and I consider that objection a valid one.
I would certainly endure the discomfort involved in saving my friend in the scenario you describe. I’d do the same thing if saving my friend involved an uncomfortable but non-fatal period of time without, say, water, food, or sleep. That doesn’t mean my brain wouldn’t report on its displeasure with the deprivation while I did so.
water ~ few days
food ~ a few weeks
sleep ~ a few days
social contact ~ a handful of hours
Water depends on temperature, food on exertion both mental and physical. I wonder whether the context influences the rate of depletion in a similar manner.
I very intentionally had qualifiers a-many in my comment to try and make it apparent that I wasn’t “judging” Alicorn. “I cannot imagine” is perhaps the wrong phrase. “I find it hard to imagine” would be better, I think.
Perhaps I’m crazy, but I don’t think pointing out the disadvantages of the way someone thinks/feels is or should be objectionable.
If someone differs from me in what kind of vegetables taste good, or if they like dry humor, or whatever, I’m not going to try to tell them they may want to rethink their position. There are no salient disadvantages to that sort of thing.
If Alicorn had said, “I really prefer human contact and I just get a little uncomfortable without it after 5 hours” I wouldn’t have even brought it up.
If someone has a trait that does have particular disadvantages, I just don’t see how discussing it with them is objectionable.
Perhaps the person to say whether it’s objectionable would be Alicorn. :)
I also think it’s extremely disproportionate to die because the old friends are gone. A post-FAI world would be a Nice Enough Place that they will not even remotely mistreat you and you will not remotely regret your signing up.
If you’re talking about how I have no prior experience with revival, all I can say is that I have to make plans for the future based on what predictions (however poor) I can make now. If you’re talking about how I was born and that turned out okay, I have… y’know.. parents.
If you’re talking about how I was born and that turned out okay, I have… y’know.. parents.
For many people, parents are a neutral or net negative presence. But alright.
If you had to choose between being born to an orphanage and not being born—a situation which is symmetrical as far as I can see to your objection to cryonics—would you choose to not be born?
That depends on the circumstances which would have led to me being born to an orphanage. If somebody is going around creating people willy-nilly out of genetic material they found lying around, uh, no, please stop them, I’d be okay with not having been born. If I’m an accident and happened to have a pro-life mother in this hypothetical… well, the emphasis in pro-choice is “choice”, so in that case it depends whether someone would swoop in and prevent my birth against her will or whether she would change her mind. In the latter case, the abortion doctor has my blessing. In the former case, (s)he hasn’t, but only because I don’t think medically elective surgery should be performed on unwilling patients, not because I think the lives of accidental fetuses are particularly valuable. If I was conceived by a stable, loving, child-wanting couple and my hypothetical dad was hit by a bus during my gestation and my mom died in childbirth, then I’d be okay with being born as opposed to not being born.
Yeah. Even though a couple of them have expressed interest, there is a huge leap from being interested to actually signing up.
This is my present plan. We’ll see if it works.
I’m not willing to bet on this.
I do not want my brain messed with. If I expected to arrive in a future that would mess with my brain without my permission, I would not want to go there.
I have to say, if 3 fails, I would tend to downvote that future pretty strongly. We seem to have very different ideas of what a revival-world will and should look like, conditional on revival working at all.
I was including a “promptly enough” in the “will make friends” thing. I’m sure that, if I could stay alive and sane long enough, I’d make friends. I don’t think I could stay alive and sane and lonely long enough to make close enough friends without my brain being messed with (not okay) or me being forcibly prevented from offing myself (not fond of this either).
If your life were literally at stake and I were a Friendly AI, I bet I could wake you up next to someone who could become fast friends with you within five hours. It doesn’t seem like a weak link in the chain, let alone the weakest one.
It is the most terrifying link in the chain. Most of the other links, if they break, just look like a dead Alicorn, not a dead Alicorn who killed herself in a fit of devastating, miserable starvation for personal connection.
If you thought it was reasonably likely that, given the success of cryonics, you’d be obliged to live without something you’d presently feel suicidal without (I’m inclined to bring up your past analogy of sex and heroin fix here, but substitute whatever works for you), would you be so gung-ho?
I could sorta understand this if we were talking about one person you couldn’t live without, it’s the idea of worrying about not having any deep friends in general that’s making me blink.
Some people are convinced they’ll have to live without the strangest things after the Singularity… having encountered something possibly similar before, I do seriously wonder if you might be suffering from a general hope-in-the-future deficiency.
If you were the friendly AI and Alicorn failed to make a fast friend as predicted and that resulted in suicidal depression, would that depression be defined as mental illness and treated as such? Would recent wake-ups have the right to commit suicide? I think that’s an incredibly hard question so please don’t answer if you don’t want to.
Have you written anything on suicide in the metaethics sequence or elsewhere?
I suppose having to rigorously prove the mathematics behind these questions is why Eliezer is so much more pessimistic about the probability of AI killing us than I am.
I have only managed to live without particular persons who’ve departed from my life for any reason by virtue of already having other persons to console me.
That said, there are a handful of people whose loss would trouble me especially terribly, but I could survive it with someone else around to grieve with.
I would think that the corporation reviving you would be either a foundation of your family, a general charity organization, or a fan club of yours (Don’t laugh! There are fan clubs for superstars in India. Extend it further into the future and each LW commenter might have a fan club.) Since you will be, relatively speaking, an early adopter of cryonics, you will be, relatively speaking, a late riser. Cryonics goes LIFO, if I understand it correctly.
I’m pretty sure now that your fears are explicitly stated in a public forum, they are on the record for almost all eternity and they will be given sufficient consideration by those reviving you.
Eliezer has already presented one solution. A make-do best friend who can be upgraded to sentience whenever need be.
A simpler solution will be a human child, holding your palm and saying “I’m your great great grandchild”. Are you sure you still won’t care enough? (Dirty mind hack, I understand, but terribly easy to implement)
I’m pretty sure now that your fears are explicitly stated in a public forum, they are on the record for almost all eternity and they will be given sufficient consideration by those reviving you.
Probably worth backing up though, in the form of a stone tablet adjacent to your body.
Alcor do keep some of your stuff in a secret location, but given problems with data retrieval from old media it might be good if they offered an explicit service to store your data—which I’d expect them to defer to providers like Amazon, but handle the long-term problems of moving to new providers as the need arises, and of decryption only on revival.
I would take the “I’m your great great grandchild” solution in a heartbeat—but I do not already have children, and something could still come up to prevent me from having them (and hence great great grandchildren).
If you’d take that solution, why not a great great … great grand niece? Or distant cousin? Any human child of that time will be related to you at some remove.
My sister doesn’t have children yet either, and may or may not in the future. It does matter if they’re a relation I’d ever be disposed to see at Christmas, which has historically bottomed out with second cousins.
It does matter if they’re a relation I’d ever be disposed to see at Christmas
Then it looks like I misunderstood. Say you have a child, then get preserved (though no one else you know does). Then say you wake up, it’s 500 years in the future, and you meet your great (great … great) great grandchild, someone you would never have seen at Christmas otherwise. Would this satisfy you?
If so, then you don’t have to worry. You will have relatives alive when you’re revived. Even if they’re descendants of cousins or second cousins. And since it will be 500 years in the future, you are equally likely to see your cousin’s 2510 descendant and your 2510 descendant at Christmas (that is, not at all).
If I had a child, I’d sign up me and said child simultaneously—problem solved right there. There’s no need to postulate any additional descendants to fix my dilemma.
I can’t get enthusiastic about second cousins 30 times removed. I wouldn’t expect to have even as much in common with them as I have in common with my second cousins now (with whom I can at least swap reminiscences about prior Christmases and various relatives when the situation calls for it).
I can’t guarantee it, no, but I can be reasonably sure—someone signed up from birth (with a parent) would not have the usual defense mechanisms blocking the idea.
Then why can you get enthusiastic about a great great grandchild born after you get frozen?
I can usually think about something enough and change my feelings about it through reason.
For example, if I thought “direct descent seems special”, I could think about all the different ideas like the questions Blueberry asks and change my actual emotions about the subject.
I suspect this comes from my guilty pleasure...I take glee in biting the bullet.
If you want to make friends with cryonicists, sign up. For every one person I meet who is signed up, I hear excuses from ten others: It won’t work. It will work but I could be revived and tortured by an evil AI. The freezing process could cause insanity. It’ll probably work but I’ve been too lazy to sign up. I’m so needy I’ll kill myself without friends. Etc.
Wow, calling me names has made me really inclined to take advice from you. I’ll get right on that, since you’re so insightful about my personal qualities and must know the best thing to do in this case, too.
Are you supposed to be the extrovert in the ‘extrovert hell’ scenario? Extroverts generally don’t have trouble finding new friends, or fear a situation where they find themselves surrounded by strangers.
I’m the extrovert, yes. In the sense of needing people, not in the sense of finding them easy to be around (I have a friend who finds it fantastically amusing to call herself a social introvert and me an antisocial extrovert, which is a fair enough description). I actually get very little value from interacting with strangers, especially in large groups. I need people who I’m reasonably close to in order to accomplish anything, and that takes some time to build up to. None of my strategies for making new friends will be present in a no-pre-revival-friends-or-family wake-up scenario.
I actually get very little value from interacting with strangers, especially in large groups. I need people who I’m reasonably close to in order to accomplish anything
If the choice were available, would you change any of that?
I think that would depend heavily on the mechanism by which it’d be changed. I’d try cognitive exercises or something to adjust the value I get from strangers and large groups; I don’t want to be drugged.
I think of an extrovert as someone who recharges by being around other people, and an introvert as someone who recharges by being alone, regardless of social proclivity or ability.
“I make new friends easily” is one of the standard agree/disagree statements used to test for extraversion which is why I find this usage a little unusual.
No, it seems Alicorn’s usage of extrovert is valid. It is just not what I’d previously understood by the word. The ‘makes friends easily’ part of extrovert is the salient feature of extraversion for me.
It’s all on an introvert/extrovert test, but to me the salient feature of extroversion is finding interaction with others energizing and finding being alone draining. Introverts find it tiring to interact with others and they find being alone energizing, on a continuous spectrum.
I fall in the dead center on an introvert/extrovert test; I’m not sure how uncommon that is.
I’m pretty sure you will have friends and relatives living in 2070. Do you think it’ll be more than 60 years before cryonics patients are revived? Do you think it’ll be more than 60 years before we can reverse aging?
I’m still trying to convince my friends.
It’s still not working.
Maybe I’m doing it backwards. Who is already signed up and wants to be my friend?
I find it rather odd that no one has answered the original question.
I’m signed up, and I’ll be your friend.
This made me smile. : )
Someone did answer via PM, but the more, the merrier. Preferred mode of offsite contact?
PM sent with details.
What’s the difference between making friends now and making friends after you wake up? What’s the difference between making a family now, and making a new family then? (here I’m referencing both this comment about finding new friends, and your comment in the other thread about starting a new family)
If a friendly singularity happens, I think it’s likely that the desire of extroverts like you for companionship and close relationships will have been taken into account along the way and that forming these bonds will still be possible.
Of course right now I’d want to be with my current fiancée, and I’m planning to try to convince her to sign up for cryonics, but if I lost her, I’d still rather live and have to figure out another way to get companionship in the far future than to die.
First of all, my friends aren’t interchangeable. It’s already a big step for me to be willing to make a presorted cryonics-friendly friend as a substitute for getting my entire existing cohort of companions on board, or even just one. Second of all, waiting until after revival introduces another chain of “ifs”—particularly dreadful ifs—into what’s already a long, tenuous chain of ifs.
Of course they aren’t. I’m just saying that I’d prefer making new friends to death, and that despite the fact that I love my friends very much, there’s nothing that says that they are the “best friends I can ever make” and that anybody else can only provide an inferior relationship.
Once again, between the certitude of death and the possibility of life in a post-friendly-singularity world, I’ll take the “ifs” even if it means doing hard things like re-building a social circle (not something easy for me).
I’m just having a really hard time imagining myself making the decision to die because I lost someone (or even everyone). In fact, I just lost my uncle (brain cancer), and I loved him dearly; he was like a second father to me. His death just made me feel even more strongly that I want to live.
But I suppose we could be at opposite ends of the spectrum when it comes to these kinds of things.
I guess I’m just more dependent on ready access to deeply connected others than you? This sounds like a matter of preferences, not a matter of correctly turning those preferences into plans.
If you need friends post-suspension, you can pay for my suspension (currently my budget goes to X-risk), and I will promise to spend a total of at least one subjective current human lifetime sincerely trying to be the best friend I can for you, unless the revived get a total of less than 100 subjective human lifetimes of run-time, in which case I will give you 1% of my total run-time instead. If that’s not enough, you can also share your run-time with me. I will even grant you the right to modify my reward centers to directly make me like you in any copy running on run-time you give me. This offer doesn’t allow your volition to replace mine in any other respect if the issue is important.
I’d bet karma at 4 to 1 odds that Alicorn finds this proposal deeply disturbing rather than helpful.
You’re on. Alicorn, would you be so kind as to arbitrate? We need you to evaluate which of these three categories Michael’s offer fits into:
Deeply Disturbing
Helpful
Just ‘somewhat’ disturbing all the way through to indifference.
Would ‘slightly amusing’ count as helpful if it served to create slightly more confidence in the prospect of actively seeking out the friendship of the potentially cryonically inclined?
Yep, disturbing. “Deeply” might be pushing it a little. But a) I’ll have to mess with my budget to afford one suspension, let alone two, and while I’d chip in for my sister if she’d let me, people I do not yet know and love are not extended the same disposition. b) There’s presently no way to enforce such a promise. c) Even if there were, that kind of enforcement would itself be creepy, since my ethics would ordinarily oblige me to abide by any later change of mind. d) This arrangement does nothing to ensure that I will enjoy MichaelVassar’s company; I’m sure he’s a great person, but there are plenty of great people I just don’t click with. e) I do not like the idea of friendships with built-in time quotas, I mean, ew.
“Deeply” seemed unlikely given that ‘deeply disturbing’ would have to be reserved in case Michael had seriously offered his services as a mercenary to carry out a kidnapping, decapitation, and non-consensual vitrification.
But it is so efficient! Surely Robin has made a post advocating such arrangements somewhere. ;)
So I guess that’s a “push” on the original terms of the bet, falling between “helpful” and “deeply disturbing”.
Yes, the bookmaker loses his overheads. That’s what the bookie gets for accepting bets with ties.
Now, Robin, there’s a person who regularly deeply disturbs me.
You’re forgetting the part where they revive you only when there is a cure for whatever you died from. You may be revived long before or after they are revived. And if that happens, there’s also the chance that they die again and aren’t stored before you’re revived. You’d probably have to give instructions to hold off on revival, or risk the missed connection.
EDIT:
I found all the information I need here: http://www.cryonics.org/become.html
I’m in the process of signing up (yeah, I know, they’re all saying that… But I really am! and plan to post about my experience on LW once it’s all over) and I’ll be your friend too, if you’ll have me as a friend.
Even if you were not signed up and never planned to be, I can always use more friends! What’s your preferred offsite contact method?
I’ve always wondered what the “Add to Friends” button on LW does, so I’m trying it out on you. (I hope you don’t mind!)
It’s a feed aggregator. There used to be a link on LessWrong to view all contributions by “Friends”, but it was removed some time past.
I had never noticed that button. I’ll try it too.
I don’t mind at all, but I haven’t found it to do anything much when I’ve tried it.
Indeed not; all it seemed to do (at least on my end) was transform itself into a “Remove from Friends” button. Did anything happen on your end?
I detected no change.
On his overview page, can you see which articles he liked/disliked?
Doesn’t look like it.
I can see bgrah449’s—I think that’s what “Make my votes public” does.
I sent you a private message.
I’m working on it. It’s taking a bit longer than planned because the insurance company threw a few extra hoops for me to jump through (including some results from samples they took from me that they don’t like; I need to see a doc and have them look at the data and pass judgement on it for the insurance company). Hence the need to make a doc appointment.
Actually having the process underway is probably close enough. Preferred mode of offsite contact?
Am available via email, IM, phone, or online voice chat. (Any direct meetup depends on where you live, of course)
The first two though would probably be the main ones for me.
Anyways, will PM you specifics (e-addy, phone number, other stuff if you want; as far as IM, lemme know which IM service you use, if any).
Hrm… LWbook: Where giving (or getting) the (extremely) cold shoulder is a plus. ;)
I’ll say it again: It’s much easier for you to sign up alone than it is to convince your friends to sign up with you.
I will sign up when I have a reasonable expectation that I’m not buying myself a one-way ticket to Extrovert Hell.
Given the opening post I am not sure I understand what you are saying. What about being resurrected with the people described would be an Extrovert Hell? That you don’t have any pre-revival friends?
I’m referencing a prior thread. Pre-revival friends or family are a prerequisite for me not looking at the prospect of revival with dread instead of hope.
With those values, the ‘find friends who are signed up to cryonics’ plan sounds like the obvious one. (Well, less obvious than the one where you kidnap your friends, cut off their heads, and preserve them against their will. But more sane.)
I don’t think most of my friendships would survive kidnapping, decapitation, and non-consensual vitrification, even if my friends survived it.
A friend will help you move. A good friend will help you move a body. A great friend is the body.
That sounded pretty odd until I looked up the parent comment, I gotta tell you.
This is an incredibly good joke.
I bet that online dating and friend making will work a lot better in the future. Can you elaborate about what is so dreadful about waking up without knowing anyone?
But, but!..
Okay, 1) I dislike the “shut up and multiply” sentiment anyway, since it’s so distinctly consequentialist. I will not shut up, and I will only multiply when everything I’m multiplying is really commensurate including in a deontic sense. I will walk away from Omelas should I have occasion. And 2) it’s my freakin’ life. I’m not deciding to deny someone else the chance to be ferried to the future on the basis of it sounding lonely.
Is there some other significance to the links and quote that you hoped I’d extract?
The significant claim seems to be that it is often necessary to quell an instinctive reaction in order to best meet your own preferences. There are some reflectively consistent preference systems in which it is better to die than to suffer the distress of a lonely revival, but there are many more in which it is not. I take Vladimir’s suggestion to be “make sure this is what you really want, not just akrasia magnified a thousand times”.
Often claims of the shape of Vladimir’s are intended to enforce a norm upon the recipient. In this case the implied ‘should’ is of the kind “action X may best give Y what they want” which is at least slightly less objectionable.
I did a reversal test on the preference; if everybody I cared about disappeared from my life all at once and everybody who remained was as alien as the people of the future will likely be, I would probably want to die, no cryonics required.
I bet that online dating and friend making will work a lot better in the future. There probably exist many people in the future that appreciate your unique knowledge and want to get to know you better.
When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future.
Can you talk more about why your future is so dreadful? Stating that all possible futures are worse than death is a strong statement. In this reversal test, it even assigns a “probably” to being suicidal. I think your flaw in reasoning lies there. I don’t think that being “probably” suicidal in the future is sufficient reason to not visit the future.
In our time, we morally justify the forcible hospitalization and medication of suicidal people until they aren’t suicidal anymore. With Friendly AI, this moral justification may remain valid in the future, and once you’re on drugs or other brain enhancements, you’ll probably love life and think your first-life self was absolutely insane for preferring death to glorious existence. Again, I think your desire for deep connections with other people is likely to be nearly immediately fixable in the future. This does sound a little dystopian, but I don’t think there exist very many wake-up futures in which your existential misery cannot be fixed.
To me, it seems like in nearly all cases it is worth waiting until the future to decide whether or not it is worth living.
“When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future.”
Woman: You’re from 1999? I’m from 2029! Say, remember when we got invaded by the cybernetic ape army?
Fry: Uh… yeah. Those were some crazy times!
Yeah, uh… threatening me with psychoactive medication is not a good way to make me buy a ticket to the future.
Resistance is illogical, you will be upgraded.
I take it you read “Transmetropolitan”? I don’t think that particular reference case is very likely.
I have not read that (*googles*) series of comic books.
I believe that you are not entitled to your choice of values. Preferences and priors are not up for grabs.
I cannot make heads nor tails of what you’re trying to convey.
Hmm… At least the content of my position seems to have been rehashed a lot, even if you won’t agree with it.
I believe that your opinion about what your values are has very little influence on what your values actually are, which in the backbone are human-universal values plus a lot of person-specific detail that is so far below the level of conscious understanding that it isn’t even worth speculating about. Whenever someone states an opinion about their values being extreme, they are seriously wrong about their actual values. Consequently, acting on the misconstrued values is against the person’s own actual values.
I don’t grant nearly as much credence to the idea that there are human-universal values as most people around here seem to. People are a wacky, diverse bunch.
Also, if you have an idea about what my values Really Are that is unconnected to what I tell you about them, I don’t want you anywhere near any decisions about my life. Back! Back! The power of my value of self-determination compels you!
I get my ideas about what people’s values Really Are based on their decisions. How much weight I place on what they tell me about their values varies based on their behaviour and what they say. I don’t make it my business to be anywhere near any decisions about other people’s lives except to the extent that they could impact me and I need to protect my interests.
That assumption (and presumption!) of human-universal values scares me at times. It triggers my “if you actually had the power to act on that belief I would have to kill you” instinct.
Even with that kind of ruthless self-determination in mind it is true that “acting on the misconstrued values is against the person’s own actual values”. Vladimir’s point is not particularly controversial; whether it applies to you or not is for you to decide and for Vladimir to speculate on if he happens to be curious.
My decision to tell you about my values counts as a decision, doesn’t it?
Absolutely. And I weigh that information more highly coming from you than from many people, given my observations of apparent self-awareness and maturity somewhat beyond what I expect given your self-reported age. Obviously such judgements also vary based on topic and context.
In general, however, my life has been a lot simpler and more successful since realising what people say about their values is not always a reliable indicator.
Friendly AI be the judge (I’m working on that). :-)
By the way, this reminds me of Not Taking Over the World (the world is mad and is afraid of getting saved, of course, in the hypothetical scenario where the idea gets taken seriously to begin with!).
Be sure to keep us posted on your progress. It’s always good to know who may need a dose of Sword of Good ahead of time. ;)
I don’t recall hearing that kind of an argument presented here anywhere. Yes, there have been arguments about your values shifting when you happen to achieve power, as well as seemingly altruistic behavior actually working to promote individual fitness. But I don’t think anybody has yet claimed that whenever somebody feels they have extreme values, they are wrong about them.
Furthermore—if the discussion in those referenced posts is the one you’re referring to—I’d be hesitant to claim that the consciously held values are false values. People might actually end up acting on the non-conscious values more than they do on the conscious ones, but that’s no grounds for simply saying “your declared values are false and not worth attention”. If you went down that route, you might as well start saying that since all ethics is rationalization anyway, any consequentialist arguments that didn’t aim at promoting the maximum fitness of your genes would be irrelevant. Not to mention that I would be very, very skeptical of any attempts to claim you knew someone else’s values better than they did.
There have also been posts specifically arguing that those non-conscious values might not actually be your true values.
I’m not arguing for the supremacy of non-conscious values: in many cases, people have a good sense of their actual values and consciously resolve their implications, which is what I see as the topic of Which Parts Are “Me”?. The inborn values are not a fixed form, although they are a fixed seed, and their contradictions need to be resolved.
Genes? The expression of that evil alien elder god? They don’t write a default morality.
The links relevant to my argument:
Human universal (we all share the bulk of our values)
Complexity of value (there is a lot of stuff coded in the inborn values; one can’t explain away huge chunks of this complexity by asserting them not present in one’s particular values)
Fake simplicity (it’s easy to find simple arguments that gloss over a complex phenomenon)
No, Really, I’ve Deceived Myself (it’s not a given that one even appreciates the connection of the belief with the asserted content of that belief)
These obviously don’t form a consistent argument, but they may give an idea of where I’m coming from. I’m only declining to believe particularly outrageous claims, where I assume the claims are being made because of error and not because of a connection to reality; where the claims are not outrageous, they might well indicate the particular ways in which the person’s values deviate from the typical.
I suspect this community overemphasizes the extent to which human universals are applicable to individuals (as opposed to cultures), and underemphasizes individual variation. I should probably write a post regarding this at some point.
Well put. My own uncertainty with regard to my values is the main reason I’m reluctant to take “mind hacks” out for casual spins—I’ve been quite surprised in the past by how sophisticated subconscious reactions can be. That said, I don’t think I could bring myself to ignore my consciously-held values to the point of doing something as significant as signing up for cryonics, were that necessary.
I thought “I’m so embarrassed I could die” was just a figure of speech.
You weren’t convinced by Eliezer’s post? Do you think signing up for cryonics will get you ostracized from your social circles? Besides the two witnesses on some of the forms, nobody will know unless you tell them or flaunt your ID tags. Are there no two people who you are willing to trust with a secret?
...This has nothing to do with embarrassment. The problem isn’t that people will stop being my friend over it, the problem is that they will all die and then the best case scenario will be that I will wake up in a bright new future completely alone.
I’m actually still confused. That doesn’t sound like ‘Extrovert Hell’. Extroverts would just make a ton of new friends straight away. A lone introvert would have more trouble. Sure, it would be an Extrovert Very Distressing Two Weeks, but death is like that. (Adjust ‘two weeks’ to anything up to a decade depending on how vulnerable to depression you believe you will be after you are revived.)
I honestly do not think I’d last two weeks. If I go five conscious hours without having a substantial conversation with somebody I care about, I feel like I got hit by a brick wall. I’m pretty sure I only survived my teens because I had a pesky sister who prevented me from spending too long in psychologically self-destructive seclusion.
This sounds like an unrealistically huge discount rate. To be precise, you anticipate:
(a) One week of being really unhappy while you go through the process of making new friends (perhaps with someone else who’s really unhappy for similar reasons). I assume here that you do not find the process of “making a new friend” to be itself enjoyable enough to compensate. I also suspect that you would start getting over the psychological shock almost immediately, but let’s suppose it actually does take until you’ve made a friend deep enough to have intimate conversations with, and let’s suppose that this does take a whole week.
(b) N years of living happily ever after.
It’s really hard to see how the former observer-moments outweigh the latter observer-moments.
I think it’s this that commenters are probably trying to express when they wonder if you’re thinking in the mode we name “rational”: it seems more like a decision made by mentally fleeing from the sheer terror of imagining the worst possible instant of the worst possible scenario, than any choice made by weighing and balancing.
I also tend to think of cryonics as a prophylactic for freak occurrences rather than inevitable death of old age, meaning that if you sign up now and then have to get suspended in the next 10 years for some reason, I’d rate a pretty good chance that you wake up before all your friends are dead of old age. But that shouldn’t even be an issue. As soon as you weigh a week against N years, it looks pretty clear that you’re not making your decision around the most important stakes in the balance.
I know you don’t endorse consequentialism, but it seems to me that this is just exactly the sort of issue where careful verbal thinking really does help people in real life, a lot—when people make decisions by focusing on one stake that weighs huge in their thoughts but obviously isn’t the most important stake, where here the stakes are “how I (imagine) feeling in the very first instant of waking up” versus “how I feel for the rest of my entire second life”. Deontologist or not, I don’t see how you could argue that it would be a better world for everyone if we all made decisions that way. Once you point it out, it just seems like an obvious bias—for an expected utility maximizer, a formal bias; but obviously wrong even in an informal sense.
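A minimal toy version of the weighing being described, with entirely made-up placeholder numbers (none of these weights come from anyone in this thread); it only illustrates how a brief bad period gets swamped by years of ordinary days:

```python
# Toy sketch only: hypothetical utility weights, not anyone's actual numbers.

DAYS_PER_YEAR = 365

def second_life_value(p_lonely_week, misery_per_day, happiness_per_day, years):
    """Value of a possibly-awful adjustment week plus the rest of the revived life."""
    adjustment_week = p_lonely_week * 7 * misery_per_day        # first week after revival
    remaining_life = years * DAYS_PER_YEAR * happiness_per_day  # everything after that
    return adjustment_week + remaining_life

# Even a certain, very bad first week (-100 per day) is outweighed by
# 50 years of merely ordinary days (+1 per day).
print(second_life_value(1.0, -100, 1, 50))  # 17550 > 0
```

With these placeholders the total only goes negative if each day of that first week is weighted more than two thousand times as heavily as an ordinary later day.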
I think that the distress would itself inhibit me in my friend-making attempts. It is a skill that I have to apply, not a chemical reaction where if you put me in a room with a friendly stranger and stir, poof, friendship.
Um… would I deeply offend you if I suggested that, perhaps, your worst fears and nightmares are not 100% reflective of what would actually happen in reality? I mean, what you’re saying here is that if you wake up without friends, you’ll be so shocked and traumatized that you’ll never make any friends again ever, despite any future friend-finding or friend-making-prediction software that could potentially be brought to bear. You’re saying that your problem here is unsolvable in the long run by powers up to and including Friendly superintelligence and it just doesn’t seem like THAT LEVEL of difficulty. Or you’re saying that the short-run problem is so terrible, so agonizing, that no amount of future life and happiness can compensate for it, and once again it just doesn’t seem THAT BAD. And I’ve already talked about how pitting verbal thought against this sort of raw fear really is one of those places where rationality excels at actually improving our lives.
Are you sure this is your true rejection or is there something even worse waiting in the wings?
I’m making projections based on psychological facts about myself. Anticipating being friendless and alone makes me unhappy all by itself; but I do have some data on how I get when it actually happens. I don’t think I would be able to bring to bear these clever solutions if that happened (to the appropriate greater magnitude).
I do consider this a problem, so I am actively trying to arrange to have someone I’d find suitable signed up (in either direction would work for). This is probably a matter of time, since my top comment here did yield responses. I’d bet you money, if you like, that (barring financial disaster on my part) I’ll be signed up within the next two years.
I asked this elsewhere, but I’ll ask again: what if the unhappiness and distress caused by the lack of friends could suddenly just disappear? If you could voluntarily suppress it, or stop suppressing it? There will almost certainly be technology in a post-revival future to let you do that, and you could wake up with that ability already set up.
This is an interesting point to consider, and I’m one who’s offered a lot of reasons to not sign up for cryonics.
For the record, a lower bound on my “true rejection” is “I’d sign up if it was free”.
What about this: leave instructions with your body not to revive you until there is technology that would let you voluntarily and temporarily suppress your isolation anxiety while you adjust to the new situation and make some friends.
If you don’t like how extraverted you are, you don’t have to put up with it after you get revived.
But the availability of such technology would not coincide with my volunteering to use it.
Would you be opposed to using it? Would you be opposed to not returning to consciousness until the technology had been set up for you (i.e. installed in your mind), so it would be immediately available?
I assign a negligible probability that there exists some way I’d find acceptable of achieving this result. It sounds way creepy to me.
I find that surprising. (I don’t mean to pass judgment at all. Values are values.) Would you call yourself a transhumanist? I wonder how many such people have creepy feelings about mind modifications like that. I would have thought it’s pretty small, but now I’m not sure. I wonder if reading certain fiction tends to change that attitude.
I would call myself a transhumanist, yes. Humans suck, let’s be something else—but I would want such changes to myself to be very carefully understood by me first, and if at all possible, directed from the inside. I mentioned elsewhere that I’d try cognitive exercises if someone proposed them. Brain surgery or drugs or equivalents, though, I am not open to without actually learning what the heck they’d entail (which would take more than the critical time period absent other unwelcome intervention), and these are the ones that seem captured by “technology”.
Hmm. What I had in mind isn’t something I would call brain surgery. It would be closer to a drug. My idea (pretty much an “outlook” from Egan’s Diaspora) is that your mind would be running in software, in a huge neuron simulator, and that the tech would simply inhibit the output of certain, targeted networks in your brain or enhance others. This would obviously be much more targeted than inert drugs could achieve. (I guess you might be able to achieve this in a physical brain with nanotech.)
I’m not sure if this changes your intuition any. Perhaps you would still be uncomfortable with it without understanding it first. But if you trust the people who would be reviving you to not torture and enslave you, you could conceivably leave enough detailed information about your preferences for you to trust them as a first-cut proxy on the mind modification decision. (Though that could easily be infeasible.) Or perhaps you could instruct them to extrapolate from your brain whether you would eventually approve of the modification, if the extrapolation wouldn’t create a sentient copy of you. (I’m not sure if that’s possible, but it might be.)
I trust the inhabitants of the future not to torture and enslave me. I don’t trust them not to be well-intentioned evil utilitarians who think nothing of overriding my instructions and preferences if that will make me happy. So I’d like to have the resources to be happy without anybody having to be evil to me.
But that wouldn’t be making you happy. It’d be making someone very much like you happy, but someone you wouldn’t have ever matured into. (You may still care that the latter person isn’t created, or not want to pay for cryonics just for the latter person to be created; that’s not the point.) I doubt that people in the future will have so much disregard for personal identity and autonomy that they would make such modifications to you. Do you think they would prevent someone from committing suicide? If they would make unwanted modifications to you before reviving you, why wouldn’t they be willing to make modifications to unconsenting living people*? They would see your “do not revive unless...” instructions as a suicide note.
* Perhaps because they view you as a lower life form for which more paternalism is warranted than for a normal transhuman.
Of course that’s not a strong argument. If you want to be that cautious, you can.
I don’t. I wouldn’t be very surprised to wake up modified in some popular way. I’m protecting the bits of me that I especially want safe.
Maybe.
Who says they’re not? (Or: Maybe living people are easier to convince.)
How about a scenario where they gave you something equivalent to a USB port, and the option to plug in an external, trivially removable module that gave you more conscious control over your emotional state but didn’t otherwise affect your emotions? That still involves brain surgery (to install the port), but it doesn’t really seem to be in the same category as current brain surgery at all.
Hmmm. That might work. However, the ability to conceptualize one way to achieve the necessary effect doesn’t guarantee that it’s ever going to be technically feasible. I can conceptualize various means of faster-than-light travel, too; it isn’t obliged to be physically possible.
I suspect I have a more complete and reality-connected model of how such a system might work than you have of FTL. :)
I’m basically positing a combination of more advanced biofeedback and non-pleasure-center-based wireheading, for the module: You plug it in, and it starts showing you readings for various systems, like biofeedback does, so that you can pinpoint what’s causing the problem on a physical level. Actually using the device would stimulate relevant brain-regions, or possibly regulate more body-based components of emotion like heart- and breathing-rate and muscle tension (via the brain regions that normally do that), or both.
I’m also assuming that there would be considerable protection against accidentally stimulating either the pleasure center or the wanting center, to preclude abuse, if they even make those regions stimulateable in the first place.
Of course I know how FTL works! It involves hyperspace! One gets there via hyperdrive! Then one can get from place to place hyper-fast! It’s all very hyper!
*ahem*
You have a point. But my more emotionally satisfying solution seems to be fairly promising. I’ll turn this over in my head more and it may serve as a fallback.
Wow. That isn’t an exaggeration? Is that what normal extraverts are like, or are you an outlier? So hard to imagine.
That seems like a fairly extreme outlier to me. I’m an extrovert, and for me that appears to mean simply that I prefer activities in which I interact with people to activities where I don’t interact with people.
Nope, not exaggerating. I say “five hours” because I timed it. I don’t know if I’m an outlier or not; most of my friends are introverts themselves.
Sounds like “five hours” might be something worth the pain of practicing to extend. Maybe not for you, but outlier time-brittle properties like that in me worry me.
Refraining from pushing the five hour limit harder than I have to is a very important part of my mood maintenance, which lets me not be on drugs, in danger of hurting myself, or just plain unhappy all the time. The farther I let myself get, the harder it is to muster the motivation to use my recovery strategies, and the longer they take to work.
From my point of view this state of being seems unstable and unhealthy. I cannot imagine having my personal state of mind be so reliant on others.
I love having a good conversation with a friend. But I could also probably go for weeks without having such a thing. Probably the longest I’ve been alone is a week and I enjoyed it.
I can’t see from your viewpoint, but from my viewpoint you should do everything in your power to change how reliant you are on others. It seems like, if you are so reliant on others, you are going to, consciously or not, change your values and beliefs merely to ensure that you have people you can associate with.
I’m dependent on many things, and the ability to chat with people is one of the easiest to ensure among them. If I decide that I’m too dependent on external factors, I think I’ll kick the brie habit before I try to make my friends unnecessary.
I’m not sure whence your concern that I’ll change my values and beliefs to ensure that I have people I can associate with. I’d consider it really valuable evidence that something was wrong with my values and beliefs if nobody would speak to me because of them. That’s not the case—I have plenty of friends and little trouble making more when the opportunity presents itself—so I’m not sure why my beliefs and values might need to shift to ensure my supply.
Perhaps I misunderstood what your “dependency” actually is. If your dependency was that you really need people to approve of you (a classic dependency and the one I apparently wrongly assumed), then it seems like your psyche is going to be vastly molded by those around you.
If your dependency is one of human contact, then the pressure to conform would probably be much less of a thing to worry about.
I would like to address your first paragraph...”making your friends unnecessary” isn’t what I suggested. What I had in mind was making them not so necessary that you have to have contact with them every few hours.
Anyway, it’s all academic now, because if you don’t think it’s a problem, I certainly don’t think it’s a problem.
ETA: I did want to point out that I have changed over time. During my teenage years I was constantly trying to be popular and get others to like me. Now, I’m completely comfortable with being alone and others thinking I’m wrong or weird.
Well, I like approval. But for the purposes of not being lonely, a heated argument will do!
If you cannot so imagine, then perhaps your judgements about what is ‘unhealthy’ for a person who does rely so acutely on others may not be entirely reliable. If someone clearly has a different neurological makeup, it can be objectionable either to say they should act as you do or to say that they should have a different neurological makeup.
It is absolutely fascinating to me to see the ‘be more like me’ come from the less extroverted to the extrovert.
Well, in fairness, my particular brand of extroversion really is more like a handicap than a skill. The fact that I need contact has made me, through sheer desperation and resulting time devoted to practice, okay at getting contact; but that’s something that was forced, not enabled, by my being an extrovert.
Definitely. It could get you killed. It had me wondering, for example, if the ~5 hours figure is highly context dependent: You are on a hike with a friend and 12 hours from civilisation. Your friend breaks a leg. He is ok, but unable to move far and in need of medical attention. You need to get help. Does the fact that every step you take is bound up in your dear friend’s very survival help at all? Or is the brain like “No! Heroic symbolic connection sucks. Gimme talking or physical intimacy now. 5 hours I say!”? (No offence meant by mentioning a quirk of your personality as a matter of speculative curiosity. I just know the context and nature of isolation does make a difference to me, even though it takes around 5 weeks for such isolation to cause noticeable degradation of my sanity.)
If it was my handicap I would be perfectly fine with an FAI capping any distress at, say, the level you have after 3 hours. Similarly, if I was someone who was unable to endure 5 consecutive hours of high stimulus social exposure without discombobulating I would want to have that weakness removed. But many people object to being told that their natural state is unhealthy or otherwise defective and in need of repair and I consider that objection a valid one.
I would certainly endure the discomfort involved in saving my friend in the scenario you describe. I’d do the same thing if saving my friend involved an uncomfortable but non-fatal period of time without, say, water, food, or sleep. That doesn’t mean my brain wouldn’t report on its displeasure with the deprivation while I did so.
water ~ a few days
food ~ a few weeks
sleep ~ a few days
social contact ~ a handful of hours
Water depends on temperature, food on exertion both mental and physical. I wonder whether the context influences the rate of depletion in a similar manner.
I very intentionally had qualifiers a-many in my comment to try and make it apparent that I wasn’t “judging” Alicorn. “I cannot imagine” is perhaps the wrong phrase. “I find it hard to imagine” would be better, I think.
Perhaps I’m crazy, but I don’t think pointing out the disadvantages of the way someone thinks/feels is or should be objectionable.
If someone differs from me in what kind of vegetables taste good, or if they like dry humor, or whatever, I’m not going to try and tell them they may want to rethink their position. There’s no salient disadvantages to those sort of things.
If Alicorn had said, “I really prefer human contact and I just get a little uncomfortable without it after 5 hours” I wouldn’t have even brought it up.
If someone has a trait that does have particular disadvantages, I just don’t see how discussing it with them is objectionable.
Perhaps the person to say whether it’s objectionable would be Alicorn. :)
I also think it’s extremely disproportionate to die because the old friends are gone. A post-FAI world would be a Nice Enough Place that its inhabitants will not even remotely mistreat you, and you will not remotely regret signing up.
Because the last time you woke up in a brand-new world with no friends turned out so badly?
If you’re talking about how I have no prior experience with revival, all I can say is that I have to make plans for the future based on what predictions (however poor) I can make now. If you’re talking about how I was born and that turned out okay, I have… y’know… parents.
For many people, parents are a neutral or net negative presence. But alright.
If you had to choose between being born to an orphanage and not being born—a situation which is symmetrical as far as I can see to your objection to cryonics—would you choose to not be born?
That depends on the circumstances which would have led to me being born to an orphanage. If somebody is going around creating people willy-nilly out of genetic material they found lying around, uh, no, please stop them, I’d be okay with not having been born. If I’m an accident and happened to have a pro-life mother in this hypothetical… well, the emphasis in pro-choice is “choice”, so in that case it depends whether someone would swoop in and prevent my birth against her will or whether she would change her mind. In the latter case, the abortion doctor has my blessing. In the former case, (s)he hasn’t, but only because I don’t think medically elective surgery should be performed on unwilling patients, not because I think the lives of accidental fetuses are particularly valuable. If I was conceived by a stable, loving, child-wanting couple and my hypothetical dad was hit by a bus during my gestation and my mom died in childbirth, then I’d be okay with being born as opposed to not being born.
If you don’t like being alone in the bright new future you can always off yourself.
Or try to make friends with other recently-revived cryonicists. That’s what extroverts are good at, right?
That would be a fine way to spend money, wouldn’t it, paying them to not let me die only for me to predictably undo their work?
My comment about suicide was a joke to contrast my recommendation: make friends.
I think you assign high probability to all of the following:
None of your current friends will ever sign up for cryonics.
You won’t make friends with any current cryonicists.
You won’t make friends after being revived.
Your suicidal neediness will be incurable by future medicine.
Please correct me if I’m wrong. If you think any of those are unlikely and you think cryonics will work, then you should sign up by yourself.
Yeah. Even though a couple of them have expressed interest, there is a huge leap from being interested to actually signing up.
This is my present plan. We’ll see if it works.
I’m not willing to bet on this.
I do not want my brain messed with. If I expected to arrive in a future that would mess with my brain without my permission, I would not want to go there.
I have to say, if 3 fails, I would tend to downvote that future pretty strongly. We seem to have very different ideas of what a revival-world will and should look like, conditional on revival working at all.
I was including a “promptly enough” in the “will make friends” thing. I’m sure that, if I could stay alive and sane long enough, I’d make friends. I don’t think I could stay alive and sane and lonely long enough to make close enough friends without my brain being messed with (not okay) or me being forcibly prevented from offing myself (not fond of this either).
If your life were literally at stake and I were a Friendly AI, I bet I could wake you up next to someone who could become fast friends with you within five hours. It doesn’t seem like a weak link in the chain, let alone the weakest one.
It is the most terrifying link in the chain. Most of the other links, if they break, just look like a dead Alicorn, not a dead Alicorn who killed herself in a fit of devastating, miserable starvation for personal connection.
If you thought it was reasonably likely that, given the success of cryonics, you’d be obliged to live without something you’d presently feel suicidal without (I’m inclined to bring up your past analogy of sex and heroin fix here, but substitute whatever works for you), would you be so gung-ho?
I could sorta understand this if we were talking about one person you couldn’t live without, it’s the idea of worrying about not having any deep friends in general that’s making me blink.
Some people are convinced they’ll have to live without the strangest things after the Singularity… having encountered something possibly similar before, I do seriously wonder if you might be suffering from a general hope-in-the-future deficiency.
PS/Edit: Spider Robinson’s analogy, not mine.
If you were the Friendly AI and Alicorn failed to make a fast friend as predicted, and that resulted in suicidal depression, would that depression be defined as mental illness and treated as such? Would recent wake-ups have the right to commit suicide? I think that’s an incredibly hard question, so please don’t answer if you don’t want to.
Have you written anything on suicide in the metaethics sequence or elsewhere?
And the relevant question extends to the assumption behind the phrase ‘and treated as such’. Do people have the right to be nuts in general?
I suppose having to rigorously prove the mathematics behind these questions is why Eliezer is so much more pessimistic about the probability of AI killing us than I am.
I have only managed to live without particular persons who’ve departed from my life for any reason by virtue of already having other persons to console me.
That said, there are a handful of people whose loss would trouble me especially terribly, but I could survive it with someone else around to grieve with.
I would think that the corporation reviving you would be either a foundation of your family, a general charity organization, or a fan club of yours (Don’t laugh! There are fan clubs for super stars in India. Extend it further into the future and each LW commenter might have a fan club.) Since you will be, relatively speaking, an early adopter of cryonics, you will also be, relatively speaking, a late riser. Cryonics goes LIFO, if I understand it correctly.
I’m pretty sure now that your fears are explicitly stated in a public forum, they are on the record for almost all eternity and they will be given sufficient consideration by those reviving you.
Eliezer has already presented one solution. A make-do best friend who can be upgraded to sentience whenever need be.
A simpler solution would be a human child, holding your palm and saying “I’m your great great grandchild”. Are you sure you still won’t care enough? (Dirty mind hack, I understand, but terribly easy to implement)
Probably worth backing up though, in the form of a stone tablet adjacent to your body.
Alcor do keep some of your stuff in a secret location, but given problems with data retrieval from old media it might be good if they offered an explicit service to store your data—which I’d expect them to defer to providers like Amazon, but handle the long-term problems of moving to new providers as the need arises, and of decryption only on revival.
I would take the “I’m your great great grandchild” solution in a heartbeat—but I do not already have children, and something could still come up to prevent me from having them (and hence great great grandchildren).
If you’d take that solution, why not a great great … great grand niece? Or distant cousin? Any human child of that time will be related to you at some remove.
My sister doesn’t have children yet either, and may or may not in the future. It does matter if they’re a relation I’d ever be disposed to see at Christmas, which has historically bottomed out with second cousins.
Then it looks like I misunderstood. Say you have a child, then get preserved (though no one else you know does). Then say you wake up, it’s 500 years in the future, and you meet your great (great … great) great grandchild, someone you would never have seen at Christmas otherwise. Would this satisfy you?
If so, then you don’t have to worry. You will have relatives alive when you’re revived. Even if they’re descendants of cousins or second cousins. And since it will be 500 years in the future, you are equally likely to see your cousin’s 2510 descendant and your 2510 descendant at Christmas (that is, not at all).
If I had a child, I’d sign up me and said child simultaneously—problem solved right there. There’s no need to postulate any additional descendants to fix my dilemma.
I can’t get enthusiastic about second cousins 30 times removed. I wouldn’t expect to have even as much in common with them as I have in common with my second cousins now (with whom I can at least swap reminisces about prior Christmases and various relatives when the situation calls for it).
You can’t guarantee that your child will go through with it, even if you sign em up.
Then why can you get enthusiastic about a great great grandchild born after you get frozen?
I can’t guarantee it, no, but I can be reasonably sure—someone signed up from birth (with a parent) would not have the usual defense mechanisms blocking the idea.
Direct descent seems special to me.
I find this thread fascinating.
I can usually think about something enough and change my feelings about it through reason.
For example, if I thought “direct descent seems special”, I could think about all the different ideas like the questions Blueberry asks and change my actual emotions about the subject.
I suspect this comes from my guilty pleasure...I glee at biting-the-bullet.
Is this not the case with you?
I do not have a reliable ability to change my emotional reactions to things in a practically useful time frame.
If you want to make friends with cryonicists, sign up. For every one person I meet who is signed up, I hear excuses from ten others: It won’t work. It will work but I could be revived and tortured by an evil AI. The freezing process could cause insanity. It’ll probably work but I’ve been too lazy to sign up. I’m so needy I’ll kill myself without friends. Etc.
It gets old really fast.
Wow, calling me names has made me really inclined to take advice from you. I’ll get right on that, since you’re so insightful about my personal qualities and must know the best thing to do in this case, too.
Are you supposed to be the extrovert in the ‘extrovert hell’ scenario? Extroverts generally don’t have trouble finding new friends, or fear a situation where they find themselves surrounded by strangers.
I’m the extrovert, yes. In the sense of needing people, not in the sense of finding them easy to be around (I have a friend who finds it fantastically amusing to call herself a social introvert and me an antisocial extrovert, which is a fair enough description). I actually get very little value from interacting with strangers, especially in large groups. I need people I’m reasonably close to in order to accomplish anything, and that takes some time to build up to. None of my strategies for making new friends will be present in a no-pre-revival-friends-or-family wake-up scenario.
If the choice were available, would you change any of that?
I think that would depend heavily on the mechanism by which it’d be changed. I’d try cognitive exercises or something to adjust the value I get from strangers and large groups; I don’t want to be drugged.
Hmm, ok. I’d say you’re using ‘extrovert’ in a fairly non-standard way but I think I understand what you’re saying now.
I think of an extrovert as someone who recharges by being around other people, and an introvert as someone who recharges by being alone, regardless of social proclivity or ability.
“I make new friends easily” is one of the standard agree/disagree statements used to test for extraversion which is why I find this usage a little unusual.
But it’s not the only agree/disagree statement on the test, right?
No, it seems Alicorn’s usage of extrovert is valid. It is just not what I’d previously understood by the word. The ‘makes friends easily’ part of extrovert is the salient feature of extraversion for me.
It’s all on an introvert/extrovert test, but to me the salient feature of extroversion is finding interaction with others energizing and finding being alone draining. Introverts find it tiring to interact with others and they find being alone energizing, on a continuous spectrum.
I fall in the dead center on an introvert/extrovert test; I’m not sure how uncommon that is.
(Although naturally there tends to be a correlation with the latter two.)
Maybe you could specify that you only want to be revived if some of your friends are alive.
I’ll certainly do that on signup; but if I don’t think that condition will ever obtain, it’d be a waste.
I’m pretty sure you will have friends and relatives living in 2070. Do you think it’ll be more than 60 years before cryonics patients are revived? Do you think it’ll be more than 60 years before we can reverse aging?
I think it is reasonably likely that those tasks will take longer than that, yes.