...This has nothing to do with embarrassment. The problem isn’t that people will stop being my friend over it, the problem is that they will all die and then the best case scenario will be that I will wake up in a bright new future completely alone.
I’m actually still confused. That doesn’t sound like ‘Extrovert Hell’. Extroverts would just make a ton of new friends straight away. A lone introvert would have more trouble. Sure, it would be an Extrovert Very Distressing Two Weeks, but death is like that. (Adjust ‘two weeks’ to anything up to a decade depending on how vulnerable to depression you believe you will be after you are revived.)
I honestly do not think I’d last two weeks. If I go five conscious hours without having a substantial conversation with somebody I care about, I feel like I got hit by a brick wall. I’m pretty sure I only survived my teens because I had a pesky sister who prevented me from spending too long in psychologically self-destructive seclusion.
This sounds like an unrealistically huge discount rate. To be precise, you anticipate:
(a) One week of being really unhappy while you go through the process of making new friends (perhaps with someone else who’s really unhappy for similar reasons). I assume here that you do not find the process of “making a new friend” to be itself enjoyable enough to compensate. I also suspect that you would start getting over the psychological shock almost immediately, but let’s suppose it actually does take until you’ve made a friend deep enough to have intimate conversations with, and let’s suppose that this does take a whole week.
(b) N years of living happily ever after.
It’s really hard to see how the former observer-moments outweigh the latter observer-moments.
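A minimal sketch of that weighing (my own illustration, not part of the original comment; d and u are made-up per-day magnitudes for the bad week and the good years, with no time discounting assumed):

```latex
% Toy comparison of (a) one bad week against (b) N good years.
% d = disutility per bad day, u = utility per good day (both hypothetical).
7\,d \;<\; 365\,N\,u
\quad\Longleftrightarrow\quad
d \;<\; \tfrac{365\,N}{7}\,u \;\approx\; 52\,N\,u
```

On those terms the bad week only dominates if each bad day counts roughly 52·N times as heavily as each good day that follows.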
I think it’s this that commenters are probably trying to express when they wonder if you’re thinking in the mode we name “rational”: it seems more like a decision made by mentally fleeing from the sheer terror of imagining the worst possible instant of the worst possible scenario, than any choice made by weighing and balancing.
I also tend to think of cryonics as a prophylactic for freak occurrences rather than inevitable death of old age, meaning that if you sign up now and then have to get suspended in the next 10 years for some reason, I’d rate a pretty good chance that you wake up before all your friends are dead of old age. But that shouldn’t even be an issue. As soon as you weigh a week against N years, it looks pretty clear that you’re not making your decision around the most important stakes in the balance.
I know you don’t endorse consequentialism, but it seems to me that this is just exactly the sort of issue where careful verbal thinking really does help people in real life, a lot—when people make decisions by focusing on one stake that weighs huge in their thoughts but obviously isn’t the most important stake, where here the stakes are “how I (imagine) feeling in the very first instant of waking up” versus “how I feel for the rest of my entire second life”. Deontologist or not, I don’t see how you could argue that it would be a better world for everyone if we all made decisions that way. Once you point it out, it just seems like an obvious bias—for an expected utility maximizer, a formal bias; but obviously wrong even in an informal sense.
I think that the distress would itself inhibit me in my friend-making attempts. It is a skill that I have to apply, not a chemical reaction where if you put me in a room with a friendly stranger and stir, poof, friendship.
Um… would I deeply offend you if I suggested that, perhaps, your worst fears and nightmares are not 100% reflective of what would actually happen in reality? I mean, what you’re saying here is that if you wake up without friends, you’ll be so shocked and traumatized that you’ll never make any friends again ever, despite any future friend-finding or friend-making-prediction software that could potentially be brought to bear. You’re saying that your problem here is unsolvable in the long run by powers up to and including Friendly superintelligence and it just doesn’t seem like THAT LEVEL of difficulty. Or you’re saying that the short-run problem is so terrible, so agonizing, that no amount of future life and happiness can compensate for it, and once again it just doesn’t seem THAT BAD. And I’ve already talked about how pitting verbal thought against this sort of raw fear really is one of those places where rationality excels at actually improving our lives.
Are you sure this is your true rejection or is there something even worse waiting in the wings?
I’m making projections based on psychological facts about myself. Anticipating being friendless and alone makes me unhappy all by itself; but I do have some data on how I get when it actually happens. I don’t think I would be able to bring these clever solutions to bear if that actually happened (and to a correspondingly greater magnitude).
I do consider this a problem, so I am actively trying to arrange to have someone I’d find suitable signed up (an arrangement in either direction would work). This is probably a matter of time, since my top comment here did yield responses. I’d bet you money, if you like, that (barring financial disaster on my part) I’ll be signed up within the next two years.
I asked this elsewhere, but I’ll ask again: what if the unhappiness and distress caused by the lack of friends could suddenly just disappear? If you could voluntarily suppress it, or stop suppressing it? There will almost certainly be technology in a post-revival future to let you do that, and you could wake up with that ability already set up.
This is an interesting point to consider, and I’m one who’s offered a lot of reasons to not sign up for cryonics.
For the record, a lower bound on my “true rejection” is “I’d sign up if it was free”.
What about this: leave instructions with your body to not revive you until there is technology that would allow you to temporarily voluntarily suppress your isolation anxiety until you got adjusted to the new situation and made some friends.
If you don’t like how extraverted you are, you don’t have to put up with it after you get revived.
But the availability of such technology would not coincide with my volunteering to use it.
Would you be opposed to using it? Would you be opposed to not returning to consciousness until the technology had been set up for you (i.e. installed in your mind), so it would be immediately available?
I assign a negligible probability that there exists some way I’d find acceptable of achieving this result. It sounds way creepy to me.
I find that surprising. (I don’t mean to pass judgment at all. Values are values.) Would you call yourself a transhumanist? I wonder how many such people have creepy feelings about mind modifications like that. I would have thought it’s pretty small, but now I’m not sure. I wonder if reading certain fiction tends to change that attitude.
I would call myself a transhumanist, yes. Humans suck, let’s be something else—but I would want such changes to myself to be very carefully understood by me first, and if at all possible, directed from the inside. I mentioned elsewhere that I’d try cognitive exercises if someone proposed them. Brain surgery or drugs or equivalents, though, I am not open to without actually learning what the heck they’d entail (which would take more than the critical time period absent other unwelcome intervention), and these are the ones that seem captured by “technology”.
Hmm. What I had in mind isn’t something I would call brain surgery. It would be closer to a drug. My idea (pretty much an “outlook” from Egan’s Diaspora) is that your mind would be running in software, in a huge neuron simulator, and that the tech would simply inhibit the output of certain targeted networks in your brain or enhance others. This would obviously be much more targeted than anything ordinary drugs could achieve. (I guess you might be able to achieve this in a physical brain with nanotech.)
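A toy sketch of that “inhibit the output of certain targeted networks” idea (my own illustration; the network names and gain values are made up, and this obviously isn’t how an actual neuron simulator would be organized):

```python
# Toy model: the uploaded mind exposes named sub-network outputs, and the
# "outlook" simply scales targeted ones down (or up) without touching the rest.

networks = {                       # hypothetical sub-networks and output levels
    "isolation_distress": 0.9,
    "social_reasoning":   0.7,
    "episodic_memory":    0.8,
}

def apply_outlook(activations, gains):
    """Return a copy with each targeted network's output scaled by its gain."""
    adjusted = dict(activations)
    for name, gain in gains.items():
        if name in adjusted:
            adjusted[name] *= gain
    return adjusted

# Inhibit only the distress network; everything else is left alone.
print(apply_outlook(networks, {"isolation_distress": 0.2}))
```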
I’m not sure if this changes your intuition any. Perhaps you would still be uncomfortable with it without understanding it first. But if you trust the people who would be reviving you to not torture and enslave you, you could conceivably leave enough detailed information about your preferences for you to trust them as a first-cut proxy on the mind modification decision. (Though that could easily be infeasible.) Or perhaps you could instruct them to extrapolate from your brain whether you would eventually approve of the modification, if the extrapolation wouldn’t create a sentient copy of you. (I’m not sure if that’s possible, but it might be.)
I trust the inhabitants of the future not to torture and enslave me. I don’t trust them not to be well-intentioned evil utilitarians who think nothing of overriding my instructions and preferences if that will make me happy. So I’d like to have the resources to be happy without anybody having to be evil to me.
But that wouldn’t be making you happy. It’d be making someone very much like you happy, but someone you wouldn’t have ever matured into. (You may still care that the latter person isn’t created, or not want to pay for cryonics just for the latter person to be created; that’s not the point.) I doubt that people in the future will have so much disregard for personal identity and autonomy that they would make such modifications to you. Do you think they would prevent someone from committing suicide? If they would make unwanted modifications to you before reviving you, why wouldn’t they be willing to make modifications to unconsenting living people*? They would see your “do not revive unless...” instructions as a suicide note.
* Perhaps because they view you as a lower life form for which more paternalism is warranted than for a normal transhuman.
Of course that’s not a strong argument. If you want to be that cautious, you can.
I don’t. I wouldn’t be very surprised to wake up modified in some popular way. I’m protecting the bits of me that I especially want safe.
Maybe.
Who says they’re not? (Or: Maybe living people are easier to convince.)
How about a scenario where they gave you something equivalent to a USB port, and the option to plug in an external, trivially removable module that gave you more conscious control over your emotional state but didn’t otherwise affect your emotions? That still involves brain surgery (to install the port), but it doesn’t really seem to be in the same category as current brain surgery at all.
Hmmm. That might work. However, the ability to conceptualize one way to achieve the necessary effect doesn’t guarantee that it’s ever going to be technically feasible. I can conceptualize various means of faster-than-light travel, too; it isn’t obliged to be physically possible.
I suspect I have a more complete and reality-connected model of how such a system might work than you have of FTL. :)
I’m basically positing a combination of more advanced biofeedback and non-pleasure-center-based wireheading, for the module: You plug it in, and it starts showing you readings for various systems, like biofeedback does, so that you can pinpoint what’s causing the problem on a physical level. Actually using the device would stimulate relevant brain-regions, or possibly regulate more body-based components of emotion like heart- and breathing-rate and muscle tension (via the brain regions that normally do that), or both.
I’m also assuming that there would be considerable protection against accidentally stimulating either the pleasure center or the wanting center, to preclude abuse, if they even make those regions stimulatable in the first place.
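A minimal sketch of what such a module’s interface might amount to (my own toy model; the readings, targets, and the blocked-region rule are assumptions, not a real device or API):

```python
# Toy plug-in module: display biofeedback-style readings, allow regulation of
# visible systems, and refuse to touch abuse-prone reward circuitry.

READINGS = {                       # hypothetical values shown to the user
    "heart_rate": 110,
    "breathing_rate": 22,
    "muscle_tension": 0.8,
    "amygdala_output": 0.9,
}

BLOCKED = {"pleasure_center", "wanting_center"}   # never stimulatable

def regulate(target, level):
    """Nudge one system toward `level`; refuse anything on the blocked list."""
    if target in BLOCKED:
        raise PermissionError(f"{target} is not exposed by this module")
    print(f"regulating {target} toward {level}")

# Guided by the readings, the user calms the components they can actually see.
regulate("breathing_rate", 14)
regulate("amygdala_output", 0.3)
```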
Of course I know how FTL works! It involves hyperspace! One gets there via hyperdrive! Then one can get from place to place hyper-fast! It’s all very hyper!
*ahem*
You have a point. But my more emotionally satisfying solution seems to be fairly promising. I’ll turn this over in my head more and it may serve as a fallback.
Wow. That isn’t an exaggeration? Is that what normal extraverts are like, or are you an outlier? So hard to imagine.
That seems like a fairly extreme outlier to me. I’m an extrovert, and for me that appears to mean simply that I prefer activities in which I interact with people to activities where I don’t interact with people.
Nope, not exaggerating. I say “five hours” because I timed it. I don’t know if I’m an outlier or not; most of my friends are introverts themselves.
Sounds like “five hours” might be something worth the pain of practicing to extend. Maybe not for you, but outlier, time-brittle properties like that would worry me if I found them in myself.
Refraining from pushing the five hour limit harder than I have to is a very important part of my mood maintenance, which lets me not be on drugs, in danger of hurting myself, or just plain unhappy all the time. The farther I let myself get, the harder it is to muster the motivation to use my recovery strategies, and the longer they take to work.
From my point of view this state of being seems unstable and unhealthy. I cannot imagine having my personal state of mind being so reliant on others.
I love having a good conversation with a friend. But I could also probably go for weeks without having such a thing. Probably the longest I’ve been alone is a week and I enjoyed it.
I can’t see from your viewpoint, but from my viewpoint you should do everything in your power to change how reliant you are on others. It seems like if you are so reliant on others that you are going to, consciously or not, change your values and beliefs merely to ensure that you have people who you can associate with.
I’m dependent on many things, and the ability to chat with people is one of the easiest to ensure among them. If I decide that I’m too dependent on external factors, I think I’ll kick the brie habit before I try to make my friends unnecessary.
I’m not sure whence your concern that I’ll change my values and beliefs to ensure that I have people I can associate with. I’d consider it really valuable evidence that something was wrong with my values and beliefs if nobody would speak to me because of them. That’s not the case—I have plenty of friends and little trouble making more when the opportunity presents itself—so I’m not sure why my beliefs and values might need to shift to ensure my supply.
Perhaps I misunderstood what your “dependency” actually is. If your dependency was that you really need people to approve of you (a classic dependency and the one I apparently wrongly assumed), then it seems like your psyche is going to be vastly molded by those around you.
If your dependency is one of human contact, then the pressure to conform would probably be much less of a thing to worry about.
I would like to address your first paragraph...”making your friends unnecessary” isn’t what I suggested. What I had in mind was making them not so necessary that you have to have contact with them every few hours.
Anyway, it’s all academic now, because if you don’t think it’s a problem, I certainly don’t think it’s a problem.
ETA: I did want to point out that I have changed over time. During my teenage years I was constantly trying to be popular and get others to like me. Now, I’m completely comfortable with being alone and others thinking I’m wrong or weird.
Well, I like approval. But for the purposes of not being lonely, a heated argument will do!
If you cannot so imagine, then perhaps your judgements about what is ‘unhealthy’ for a person who does rely so acutely on others may not be entirely reliable. If someone clearly has a different neurological makeup, it can be objectionable either to say they should act as you do or to say they should have a different neurological makeup.
It is absolutely fascinating to me to see the ‘be more like me’ come from the less extroverted to the extrovert.
Well, in fairness, my particular brand of extroversion really is more like a handicap than a skill. The fact that I need contact has made me, through sheer desperation and resulting time devoted to practice, okay at getting contact; but that’s something that was forced, not enabled, by my being an extrovert.
Definitely. It could get you killed. It had me wondering, for example, if the ~5 hours figure is highly context dependent: You are on a hike with a friend and 12 hours from civilisation. Your friend breaks a leg. He is ok, but unable to move far and in need of medical attention. You need to get help. Does the fact that every step you take is bound up in your dear friend’s very survival help at all? Or is the brain like “No! Heroic symbolic connection sucks. Gimme talking or physical intimacy now. 5 hours I say!”? (No offence meant by mentioning a quirk of your personality as a matter of speculative curiosity. I just know the context and nature of isolation does make a difference to me, even though it takes around 5 weeks for such isolation to cause noticeable degradation of my sanity.)
If it was my handicap I would be perfectly fine with an FAI capping any distress at, say, the level you have after 3 hours. Similarly, if I was someone who was unable to endure 5 consecutive hours of high stimulus social exposure without discombobulating I would want to have that weakness removed. But many people object to being told that their natural state is unhealthy or otherwise defective and in need of repair and I consider that objection a valid one.
I would certainly endure the discomfort involved in saving my friend in the scenario you describe. I’d do the same thing if saving my friend involved an uncomfortable but non-fatal period of time without, say, water, food, or sleep. That doesn’t mean my brain wouldn’t report on its displeasure with the deprivation while I did so.
water ~ a few days
food ~ a few weeks
sleep ~ a few days
social contact ~ a handful of hours
Water depends on temperature, and food on exertion both mental and physical. I wonder whether the context would influence the rate of depletion in a similar manner.
I very intentionally had qualifiers a-many in my comment to try and make it apparent that I wasn’t “judging” Alicorn. “I cannot imagine” is perhaps the wrong phrase. “I find it hard to imagine” would be better, I think.
Perhaps I’m crazy, but I don’t think pointing out the disadvantages of the way someone thinks/feels is or should be objectionable.
If someone differs from me in what kind of vegetables taste good, or if they like dry humor, or whatever, I’m not going to try to tell them they may want to rethink their position. There are no salient disadvantages to those sorts of things.
If Alicorn had said, “I really prefer human contact and I just get a little uncomfortable without it after 5 hours” I wouldn’t have even brought it up.
If someone has a trait that does have particular disadvantages, I just don’t see how discussing it with them is objectionable.
Perhaps the person to say whether it’s objectionable would be Alicorn. :)
I also think it’s extremely disproportionate to die because the old friends are gone. A post-FAI world would be a Nice Enough Place that its inhabitants would not even remotely mistreat you, and you would not remotely regret signing up.
Because the last time you woke up in a brand-new world with no friends turned out so badly?
If you’re talking about how I have no prior experience with revival, all I can say is that I have to make plans for the future based on what predictions (however poor) I can make now. If you’re talking about how I was born and that turned out okay, I have… y’know… parents.
For many people, parents are a neutral or net negative presence. But alright.
If you had to choose between being born to an orphanage and not being born—a situation which is symmetrical as far as I can see to your objection to cryonics—would you choose to not be born?
That depends on the circumstances which would have led to me being born to an orphanage. If somebody is going around creating people willy-nilly out of genetic material they found lying around, uh, no, please stop them, I’d be okay with not having been born. If I’m an accident and happened to have a pro-life mother in this hypothetical… well, the emphasis in pro-choice is “choice”, so in that case it depends whether someone would swoop in and prevent my birth against her will or whether she would change her mind. In the latter case, the abortion doctor has my blessing. In the former case, (s)he hasn’t, but only because I don’t think medically elective surgery should be performed on unwilling patients, not because I think the lives of accidental fetuses are particularly valuable. If I was conceived by a stable, loving, child-wanting couple and my hypothetical dad was hit by a bus during my gestation and my mom died in childbirth, then I’d be okay with being born as opposed to not being born.
If you don’t like being alone in the bright new future you can always off yourself.
Or try to make friends with other recently-revived cryonicists. That’s what extroverts are good at, right?
That would be a fine way to spend money, wouldn’t it, paying them to not let me die only for me to predictably undo their work?
My comment about suicide was a joke to contrast my recommendation: make friends.
I think you assign high probability to all of the following:
None of your current friends will ever sign up for cryonics.
You won’t make friends with any current cryonicists.
You won’t make friends after being revived.
Your suicidal neediness will be incurable by future medicine.
Please correct me if I’m wrong. If you think any of those are unlikely and you think cryonics will work, then you should sign up by yourself.
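One way to put the force of that list: the feared outcome needs every item to hold at once, so its probability is roughly a product. A toy calculation with loudly hypothetical numbers of my own (treating the items as independent for the sketch):

```python
# All four probabilities are made-up placeholders, not anything stated above;
# the point is only that the conjunction shrinks quickly.
p_no_friend_signs_up   = 0.5   # none of the current friends ever sign up
p_no_cryonicist_friend = 0.5   # never befriends a current cryonicist
p_no_friends_revived   = 0.2   # fails to make friends after being revived
p_incurable_neediness  = 0.1   # future medicine can't fix the suicidal neediness

p_lonely_doom = (p_no_friend_signs_up * p_no_cryonicist_friend
                 * p_no_friends_revived * p_incurable_neediness)
print(p_lonely_doom)   # 0.005 with these placeholders
```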
Yeah. Even though a couple of them have expressed interest, there is a huge leap from being interested to actually signing up.
This is my present plan. We’ll see if it works.
I’m not willing to bet on this.
I do not want my brain messed with. If I expected to arrive in a future that would mess with my brain without my permission, I would not want to go there.
I have to say, if 3 fails, I would tend to downvote that future pretty strongly. We seem to have very different ideas of what a revival-world will and should look like, conditional on revival working at all.
I was including a “promptly enough” in the “will make friends” thing. I’m sure that, if I could stay alive and sane long enough, I’d make friends. I don’t think I could stay alive and sane and lonely long enough to make close enough friends without my brain being messed with (not okay) or me being forcibly prevented from offing myself (not fond of this either).
If your life were literally at stake and I were a Friendly AI, I bet I could wake you up next to someone who could become fast friends with you within five hours. It doesn’t seem like a weak link in the chain, let alone the weakest one.
It is the most terrifying link in the chain. Most of the other links, if they break, just look like a dead Alicorn, not a dead Alicorn who killed herself in a fit of devastating, miserable starvation for personal connection.
If you thought it was reasonably likely that, given the success of cryonics, you’d be obliged to live without something you’d presently feel suicidal without (I’m inclined to bring up your past analogy of sex and a heroin fix here, but substitute whatever works for you), would you be so gung-ho?
I could sorta understand this if we were talking about one person you couldn’t live without, it’s the idea of worrying about not having any deep friends in general that’s making me blink.
Some people are convinced they’ll have to live without the strangest things after the Singularity… having encountered something possibly similar before, I do seriously wonder if you might be suffering from a general hope-in-the-future deficiency.
PS/Edit: Spider Robinson’s analogy, not mine.
If you were the friendly AI and Alicorn failed to make a fast friend as predicted and that resulted in suicidal depression, would that depression be defined as mental illness and treated as such? Would recent wake-ups have the right to commit suicide? I think that’s an incredibly hard question so please don’t answer if you don’t want to.
Have you written anything on suicide in the metaethics sequence or elsewhere?
And the relevant question extends to the assumption behind the phrase ‘and treated as such’. Do people have the right to be nuts in general?
I suppose having to rigorously prove the mathematics behind these questions is why Eliezer is so much more pessimistic about the probability of AI killing us than I am.
I have only managed to live without particular persons who’ve departed from my life for any reason by virtue of already having other persons to console me.
That said, there are a handful of people whose loss would trouble me especially terribly, but I could survive it with someone else around to grieve with.
I would think that the corporation reviving you would be either a foundation set up by your family, a general charity organization, or a fan club of yours. (Don’t laugh! There are fan clubs for superstars in India. Extend that further into the future and each LW commenter might have a fan club.) Since you will be, relatively speaking, an early adopter of cryonics, you will be, relatively speaking, a late riser. Cryonics goes LIFO, if I understand it correctly.
I’m pretty sure that now that your fears are explicitly stated in a public forum, they are on the record for almost all eternity, and they will be given sufficient consideration by those reviving you.
Eliezer has already presented one solution. A make-do best friend who can be upgraded to sentience whenever need be.
A simpler solution would be a human child, holding your palm and saying “I’m your great great grandchild”. Are you sure you’ll still not care enough? (Dirty mind hack, I understand, but terribly easy to implement.)
Probably worth backing up though, in the form of a stone tablet adjacent to your body.
Alcor do keep some of your stuff in a secret location, but given problems with data retrieval from old media it might be good if they offered an explicit service to store your data—which I’d expect them to defer to providers like Amazon, but handle the long-term problems of moving to new providers as the need arises, and of decryption only on revival.
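A minimal sketch of the encrypt-before-storage idea (my own illustration using the real `cryptography` package; the file name and archive contents are placeholders, and as far as I know Alcor offers no such service today):

```python
# Encrypt client-side, hand only ciphertext to whichever storage provider is
# current, and decrypt only on revival with a key kept elsewhere.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # held by the patient/organization, never the provider
cipher = Fernet(key)

personal_archive = b"letters, photos, preferences, revival instructions..."
blob = cipher.encrypt(personal_archive)

# The provider only ever sees `blob`; migrating providers is copying ciphertext.
with open("archive.enc", "wb") as f:
    f.write(blob)

# On revival, whoever holds the key recovers the plaintext.
with open("archive.enc", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == personal_archive
```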
I would take the “I’m your great great grandchild” solution in a heartbeat—but I do not already have children, and something could still come up to prevent me from having them (and hence great great grandchildren).
If you’d take that solution, why not a great great … great grand niece? Or distant cousin? Any human child of that time will be related to you at some remove.
My sister doesn’t have children yet either, and may or may not in the future. It does matter if they’re a relation I’d ever be disposed to see at Christmas, which has historically bottomed out with second cousins.
Then it looks like I misunderstood. Say you have a child, then get preserved (though no one else you know does). Then say you wake up, it’s 500 years in the future, and you meet your great (great … great) great grandchild, someone you would never have seen at Christmas otherwise. Would this satisfy you?
If so, then you don’t have to worry. You will have relatives alive when you’re revived. Even if they’re descendants of cousins or second cousins. And since it will be 500 years in the future, you are equally likely to see your cousin’s 2510 descendant and your 2510 descendant at Christmas (that is, not at all).
If I had a child, I’d sign up me and said child simultaneously—problem solved right there. There’s no need to postulate any additional descendants to fix my dilemma.
I can’t get enthusiastic about second cousins 30 times removed. I wouldn’t expect to have even as much in common with them as I have in common with my second cousins now (with whom I can at least swap reminiscences about prior Christmases and various relatives when the situation calls for it).
You can’t guarantee that your child will go through with it, even if you sign em up.
Then why can you get enthusiastic about a great great grandchild born after you get frozen?
I can’t guarantee it, no, but I can be reasonably sure—someone signed up from birth (with a parent) would not have the usual defense mechanisms blocking the idea.
Direct descent seems special to me.
I find this thread fascinating.
I can usually think about something enough and change my feelings about it through reason.
For example, if I thought “direct descent seems special”, I could think about all the different ideas like the questions Blueberry asks and change my actual emotions about the subject.
I suspect this comes from my guilty pleasure… I glee at biting the bullet.
Is this not the case with you?
I do not have a reliable ability to change my emotional reactions to things in a practically useful time frame.
If you want to make friends with cryonicists, sign up. For every one person I meet who is signed up, I hear excuses from ten others: It won’t work. It will work, but I could be revived and tortured by an evil AI. The freezing process could cause insanity. It’ll probably work, but I’ve been too lazy to sign up. I’m so needy I’ll kill myself without friends. Etc.
It gets old really fast.
Wow, calling me names has made me really inclined to take advice from you. I’ll get right on that, since you’re so insightful about my personal qualities and must know the best thing to do in this case, too.
Are you supposed to be the extrovert in the ‘extrovert hell’ scenario? Extroverts generally don’t have trouble finding new friends, or fear a situation where they find themselves surrounded by strangers.
I’m the extrovert, yes. In the sense of needing people, not in the sense of finding them easy to be around (I have a friend who finds it fantastically amusing to call herself a social introvert and me an antisocial extrovert, which is a fair enough description). I actually get very little value from interacting with strangers, especially in large groups. I need people who I’m reasonably close to in order to accomplish anything, and that takes some time to build up to. None of my strategies for making new friends will be present in a no-pre-revival-friends-or-family wake-up scenario.
If the choice were available, would you change any of that?
I think that would depend heavily on the mechanism by which it’d be changed. I’d try cognitive exercises or something to adjust the value I get from strangers and large groups; I don’t want to be drugged.
Hmm, ok. I’d say you’re using ‘extrovert’ in a fairly non-standard way but I think I understand what you’re saying now.
I think of an extrovert as someone who recharges by being around other people, and an introvert as someone who recharges by being alone, regardless of social proclivity or ability.
“I make new friends easily” is one of the standard agree/disagree statements used to test for extraversion which is why I find this usage a little unusual.
But it’s not the only agree/disagree statement on the test, right?
No, it seems Alicorn’s usage of extrovert is valid. It is just not what I’d previously understood by the word. The ‘makes friends easily’ part of extrovert is the salient feature of extraversion for me.
It’s all on an introvert/extrovert test, but to me the salient feature of extroversion is finding interaction with others energizing and finding being alone draining. Introverts find it tiring to interact with others and they find being alone energizing, on a continuous spectrum.
I fall in the dead center on an introvert/extrovert test; I’m not sure how uncommon that is.
(Although naturally there tends to be a correlation with the latter two.)
Maybe you could specify that you only want to be revived if some of your friends are alive.
I’ll certainly do that on signup; but if I don’t think that condition will ever obtain, it’d be a waste.
I’m pretty sure you will have friends and relatives living in 2070. Do you think it’ll be more than 60 years before cryonics patients are revived? Do you think it’ll be more than 60 years before we can reverse aging?
I think it is reasonably likely that those tasks will take longer than that, yes.