I honestly do not think I’d last two weeks. If I go five conscious hours without having a substantial conversation with somebody I care about, I feel like I got hit by a brick wall. I’m pretty sure I only survived my teens because I had a pesky sister who prevented me from spending too long in psychologically self-destructive seclusion.
This sounds like an unrealistically huge discount rate. To be precise, you anticipate:
(a) One week of being really unhappy while you go through the process of making new friends (perhaps with someone else who’s really unhappy for similar reasons). I assume here that you do not find the process of “making a new friend” to be itself enjoyable enough to compensate. I also suspect that you would start getting over the psychological shock almost immediately, but let’s suppose it actually does take until you’ve made a friend deep enough to have intimate conversations with, and let’s suppose that this does take a whole week.
(b) N years of living happily ever after.
It’s really hard to see how the former observer-moments outweigh the latter observer-moments.
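To make the weighing concrete, here is a toy calculation with made-up numbers (the bad first week rated ten times as intense as an ordinary good week is pleasant, and a constant per-week discount factor); it is a sketch of the comparison, not anyone's actual utility function:

```python
# Toy numbers, not anyone's actual utility function: the bad first week is
# rated ten times as intense as an ordinary good week is pleasant, and every
# future week is discounted by a constant weekly factor.

def discounted_total(weekly_discount, n_years, u_bad_week=-10.0, u_good_week=1.0):
    """One bad week now, then n_years of good weeks, discounted per week."""
    total = u_bad_week  # week 0, undiscounted
    for week in range(1, int(n_years * 52) + 1):
        total += u_good_week * weekly_discount ** week
    return total

for d in (0.999, 0.99, 0.95, 0.90):
    verdict = "net positive" if discounted_total(d, n_years=50) > 0 else "net negative"
    print(f"weekly discount factor {d}: {verdict}")

# Break-even is where u_good * d / (1 - d) = |u_bad|, i.e. d = 10/11, about 0.909:
# you would have to discount each successive week by more than ~9% for the one
# bad week to outweigh fifty good years, versus ordinary discount rates of a
# few percent per *year*.
```

Under those made-up numbers, only a discount rate vastly steeper than anything people actually use makes the trade come out against signing up.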
I think it’s this that commenters are probably trying to express when they wonder if you’re thinking in the mode we name “rational”: it seems more like a decision made by mentally fleeing from the sheer terror of imagining the worst possible instant of the worst possible scenario, than any choice made by weighing and balancing.
I also tend to think of cryonics as a prophylactic for freak occurrences rather than inevitable death of old age, meaning that if you sign up now and then have to get suspended in the next 10 years for some reason, I’d rate a pretty good chance that you wake up before all your friends are dead of old age. But that shouldn’t even be an issue. As soon as you weigh a week against N years, it looks pretty clear that you’re not making your decision around the most important stakes in the balance.
I know you don’t endorse consequentialism, but it seems to me that this is just exactly the sort of issue where careful verbal thinking really does help people in real life, a lot—when people make decisions by focusing on one stake that weighs huge in their thoughts but obviously isn’t the most important stake, where here the stakes are “how I (imagine) feeling in the very first instant of waking up” versus “how I feel for the rest of my entire second life”. Deontologist or not, I don’t see how you could argue that it would be a better world for everyone if we all made decisions that way. Once you point it out, it just seems like an obvious bias—for an expected utility maximizer, a formal bias; but obviously wrong even in an informal sense.
I think that the distress would itself inhibit me in my friend-making attempts. It is a skill that I have to apply, not a chemical reaction where if you put me in a room with a friendly stranger and stir, poof, friendship.
Um… would I deeply offend you if I suggested that, perhaps, your worst fears and nightmares are not 100% reflective of what would actually happen in reality? I mean, what you’re saying here is that if you wake up without friends, you’ll be so shocked and traumatized that you’ll never make any friends again ever, despite any future friend-finding or friend-making-prediction software that could potentially be brought to bear. You’re saying that your problem here is unsolvable in the long run by powers up to and including Friendly superintelligence and it just doesn’t seem like THAT LEVEL of difficulty. Or you’re saying that the short-run problem is so terrible, so agonizing, that no amount of future life and happiness can compensate for it, and once again it just doesn’t seem THAT BAD. And I’ve already talked about how pitting verbal thought against this sort of raw fear really is one of those places where rationality excels at actually improving our lives.
Are you sure this is your true rejection or is there something even worse waiting in the wings?
I’m making projections based on psychological facts about myself. Anticipating being friendless and alone makes me unhappy all by itself; but I do have some data on how I get when it actually happens. I don’t think I would be able to bring to bear these clever solutions if that happened (to the appropriate greater magnitude).
I do consider this a problem, so I am actively trying to arrange to have someone I’d find suitable signed up (either direction would work). This is probably a matter of time, since my top comment here did yield responses. I’d bet you money, if you like, that (barring financial disaster on my part) I’ll be signed up within the next two years.
I asked this elsewhere, but I’ll ask again: what if the unhappiness and distress caused by the lack of friends could suddenly just disappear? If you could voluntarily suppress it, or stop suppressing it? There will almost certainly be technology in a post-revival future to let you do that, and you could wake up with that ability already set up.
This is an interesting point to consider, and I’m one who’s offered a lot of reasons to not sign up for cryonics.
For the record, a lower bound on my “true rejection” is “I’d sign up if it was free”.
What about this: leave instructions with your body to not revive you until there is technology that would allow you to temporarily voluntarily suppress your isolation anxiety until you got adjusted to the new situation and made some friends.
If you don’t like how extraverted you are, you don’t have to put up with it after you get revived.
But the availability of such technology would not coincide with my volunteering to use it.
Would you be opposed to using it? Would you be opposed to not returning to consciousness until the technology had been set up for you (i.e. installed in your mind), so it would be immediately available?
I assign a negligible probability that there exists some way of achieving this result that I’d find acceptable. It sounds way creepy to me.
I find that surprising. (I don’t mean to pass judgment at all. Values are values.) Would you call yourself a transhumanist? I wonder how many such people have creepy feelings about mind modifications like that. I would have thought it’s pretty small, but now I’m not sure. I wonder if reading certain fiction tends to change that attitude.
I would call myself a transhumanist, yes. Humans suck, let’s be something else—but I would want such changes to myself to be very carefully understood by me first, and if at all possible, directed from the inside. I mentioned elsewhere that I’d try cognitive exercises if someone proposed them. Brain surgery or drugs or equivalents, though, I am not open to without actually learning what the heck they’d entail (which would take more than the critical time period absent other unwelcome intervention), and these are the ones that seem captured by “technology”.
Hmm. What I had in mind isn’t something I would call brain surgery. It would be closer to a drug. My idea (pretty much an “outlook” from Egan’s Diaspora) is that your mind would be running in software, in a huge neuron simulator, and that the tech would simply inhibit the output of certain, targeted networks in your brain or enhance others. This would obviously be much more targeted than inert drugs could achieve. (I guess you might be able to achieve this in a physical brain with nanotech.)
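For what it’s worth, here is a minimal sketch of the kind of targeting I mean, assuming a toy neuron-simulator interface I’m inventing purely for illustration (no such API exists):

```python
# A toy stand-in for a mind running in a neuron simulator, where specific
# labeled networks can be inhibited or enhanced without touching anything else.
# All names here are invented for illustration.

class SimulatedMind:
    def __init__(self, networks):
        self.networks = networks                        # {label: set of neuron ids}
        self.gain = {label: 1.0 for label in networks}  # 1.0 = unmodified

    def set_gain(self, label, factor):
        """Inhibit (<1.0) or enhance (>1.0) the output of one labeled network."""
        self.gain[label] = factor

    def output(self, neuron_id, raw_signal):
        """Scale a neuron's output by the gain of whatever network it belongs to."""
        for label, members in self.networks.items():
            if neuron_id in members:
                return raw_signal * self.gain[label]
        return raw_signal

# e.g. mind.set_gain("isolation_distress", 0.2) damps just that circuit,
# which is the sense in which this is more targeted than any drug.
```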
I’m not sure if this changes your intuition any. Perhaps you would still be uncomfortable with it without understanding it first. But if you trust the people who would be reviving you to not torture and enslave you, you could conceivably leave enough detailed information about your preferences for you to trust them as a first-cut proxy on the mind modification decision. (Though that could easily be infeasible.) Or perhaps you could instruct them to extrapolate from your brain whether you would eventually approve of the modification, if the extrapolation wouldn’t create a sentient copy of you. (I’m not sure if that’s possible, but it might be.)
I trust the inhabitants of the future not to torture and enslave me. I don’t trust them not to be well-intentioned evil utilitarians who think nothing of overriding my instructions and preferences if that will make me happy. So I’d like to have the resources to be happy without anybody having to be evil to me.
But that wouldn’t be making you happy. It’d be making someone very much like you happy, but someone you wouldn’t have ever matured into. (You may still care that the latter person isn’t created, or not want to pay for cryonics just for the latter person to be created; that’s not the point.) I doubt that people in the future will have so much disregard for personal identity and autonomy that they would make such modifications to you. Do you think they would prevent someone from committing suicide? If they would make unwanted modifications to you before reviving you, why wouldn’t they be willing to make modifications to unconsenting living people*? They would see your “do not revive unless...” instructions as a suicide note.
* Perhaps because they view you as a lower life form for which more paternalism is warranted than for a normal transhuman.
Of course that’s not a strong argument. If you want to be that cautious, you can.
I don’t. I wouldn’t be very surprised to wake up modified in some popular way. I’m protecting the bits of me that I especially want safe.
Maybe.
Who says they’re not? (Or: Maybe living people are easier to convince.)
How about a scenario where they gave you something equivalent to a USB port, and the option to plug in an external, trivially removable module that gave you more conscious control over your emotional state but didn’t otherwise affect your emotions? That still involves brain surgery (to install the port), but it doesn’t really seem to be in the same category as current brain surgery at all.
Hmmm. That might work. However, the ability to conceptualize one way to achieve the necessary effect doesn’t guarantee that it’s ever going to be technically feasible. I can conceptualize various means of faster-than-light travel, too; it isn’t obliged to be physically possible.
I suspect I have a more complete and reality-connected model of how such a system might work than you have of FTL. :)
I’m basically positing a combination of more advanced biofeedback and non-pleasure-center-based wireheading, for the module: You plug it in, and it starts showing you readings for various systems, like biofeedback does, so that you can pinpoint what’s causing the problem on a physical level. Actually using the device would stimulate relevant brain-regions, or possibly regulate more body-based components of emotion like heart- and breathing-rate and muscle tension (via the brain regions that normally do that), or both.
I’m also assuming that there would be considerable protection against accidentally stimulating either the pleasure center or the wanting center, to preclude abuse, if they even make those regions stimulateable in the first place.
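To be concrete about what I’m picturing (purely hypothetical names and numbers; this is a sketch of the idea, not a design):

```python
# A toy of the module's two behaviours: show readings so the wearer can
# localise what is driving the distress, and apply bounded stimulation with
# the pleasure and wanting centers hard-blocked. All names are invented.

class EmotionModule:
    BLOCKED = {"pleasure_center", "wanting_center"}

    def __init__(self, readings, safe_limits):
        self.readings = readings        # {system: current level}, biofeedback-style
        self.safe_limits = safe_limits  # {target: maximum allowed stimulation}
        self.applied = {}

    def show_readings(self):
        """Biofeedback step: report each monitored system's level."""
        return dict(self.readings)

    def adjust(self, target, level):
        """Stimulate a brain region or an autonomic control (heart rate,
        breathing, muscle tension), capped at its safe limit."""
        if target in self.BLOCKED:
            raise PermissionError(f"{target} cannot be stimulated by this module")
        self.applied[target] = min(level, self.safe_limits.get(target, 0.0))
        return self.applied[target]
```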
Of course I know how FTL works! It involves hyperspace! One gets there via hyperdrive! Then one can get from place to place hyper-fast! It’s all very hyper!
*ahem*
You have a point. But my more emotionally satisfying solution seems to be fairly promising. I’ll turn this over in my head more and it may serve as a fallback.
Wow. That isn’t an exaggeration? Is that what normal extraverts are like, or are you an outlier? So hard to imagine.
That seems like a fairly extreme outlier to me. I’m an extrovert, and for me that appears to mean simply that I prefer activities in which I interact with people to activities where I don’t interact with people.
Nope, not exaggerating. I say “five hours” because I timed it. I don’t know if I’m an outlier or not; most of my friends are introverts themselves.
Sounds like “five hours” might be something worth the pain of practicing to extend. Maybe not for you, but outlier time-brittle properties like that in me worry me.
Refraining from pushing the five hour limit harder than I have to is a very important part of my mood maintenance, which lets me not be on drugs, in danger of hurting myself, or just plain unhappy all the time. The farther I let myself get, the harder it is to muster the motivation to use my recovery strategies, and the longer they take to work.
From my point of view this state of being seems unstable and unhealthy. I cannot imagine having my personal state of mind being so reliant on others.
I love having a good conversation with a friend. But I could also probably go for weeks without having such a thing. Probably the longest I’ve been alone is a week and I enjoyed it.
I can’t see from your viewpoint, but from my viewpoint you should do everything in your power to change how reliant you are on others. It seems like, if you are so reliant on others, you are going to, consciously or not, change your values and beliefs merely to ensure that you have people you can associate with.
I’m dependent on many things, and the ability to chat with people is one of the easiest to ensure among them. If I decide that I’m too dependent on external factors, I think I’ll kick the brie habit before I try to make my friends unnecessary.
I’m not sure whence your concern that I’ll change my values and beliefs to ensure that I have people I can associate with. I’d consider it really valuable evidence that something was wrong with my values and beliefs if nobody would speak to me because of them. That’s not the case—I have plenty of friends and little trouble making more when the opportunity presents itself—so I’m not sure why my beliefs and values might need to shift to ensure my supply.
Perhaps I misunderstood what your “dependency” actually is. If your dependency were that you really needed people to approve of you (a classic dependency, and the one I apparently wrongly assumed), then it seems like your psyche would be vastly molded by those around you.
If your dependency is one of human contact, then the pressure to conform would probably be much less of a thing to worry about.
I would like to address your first paragraph...”making your friends unnecessary” isn’t what I suggested. What I had in mind was making them not so necessary that you have to have contact with them every few hours.
Anyway, it’s all academic now, because if you don’t think it’s a problem, I certainly don’t think it’s a problem.
ETA: I did want to point out that I have changed over time. During my teenage years I was constantly trying to be popular and get others to like me. Now, I’m completely comfortable with being alone and others thinking I’m wrong or weird.
Well, I like approval. But for the purposes of not being lonely, a heated argument will do!
If you cannot so imagine, then perhaps making judgements on what is ‘unhealthy’ for a person who does rely so acutely on others may not be entirely reliable. If someone clearly has a different neurological makeup, it can be objectionable either to say they should act as you do or to say they should have a different neurological makeup.
It is absolutely fascinating to me to see the ‘be more like me’ come from the less extroverted to the extrovert.
Well, in fairness, my particular brand of extroversion really is more like a handicap than a skill. The fact that I need contact has made me, through sheer desperation and resulting time devoted to practice, okay at getting contact; but that’s something that was forced, not enabled, by my being an extrovert.
Definitely. It could get you killed. It had me wondering, for example, if the ~5 hours figure is highly context dependent: You are on a hike with a friend and 12 hours from civilisation. Your friend breaks a leg. He is ok, but unable to move far and in need of medical attention. You need to get help. Does the fact that every step you take is bound up in your dear friend’s very survival help at all? Or is the brain like “No! Heroic symbolic connection sucks. Gimme talking or physical intimacy now. 5 hours I say!”? (No offence meant by mentioning a quirk of your personality as a matter of speculative curiosity. I just know the context and nature of isolation does make a difference to me, even though it takes around 5 weeks for such isolation to cause noticeable degradation of my sanity.)
If it was my handicap I would be perfectly fine with an FAI capping any distress at, say, the level you have after 3 hours. Similarly, if I was someone who was unable to endure 5 consecutive hours of high stimulus social exposure without discombobulating I would want to have that weakness removed. But many people object to being told that their natural state is unhealthy or otherwise defective and in need of repair and I consider that objection a valid one.
I would certainly endure the discomfort involved in saving my friend in the scenario you describe. I’d do the same thing if saving my friend involved an uncomfortable but non-fatal period of time without, say, water, food, or sleep. That doesn’t mean my brain wouldn’t report on its displeasure with the deprivation while I did so.
water ~ a few days
food ~ a few weeks
sleep ~ a few days
social contact ~ a handful of hours
Water depends on temperature, food on exertion, both mental and physical. I wonder whether the context influences the rate of depletion in a similar manner.
I very intentionally had qualifiers a-many in my comment to try and make it apparent that I wasn’t “judging” Alicorn. “I cannot imagine” is perhaps the wrong phrase. “I find it hard to imagine” would be better, I think.
Perhaps I’m crazy, but I don’t think pointing out the disadvantages of the way someone thinks/feels is or should be objectionable.
If someone differs from me in what kind of vegetables taste good, or if they like dry humor, or whatever, I’m not going to try and tell them they may want to rethink their position. There are no salient disadvantages to those sorts of things.
If Alicorn had said, “I really prefer human contact and I just get a little uncomfortable without it after 5 hours” I wouldn’t have even brought it up.
If someone has a trait that does have particular disadvantages, I just don’t see how discussing it with them is objectionable.
Perhaps the person to say whether it’s objectionable would be Alicorn. :)
I also think it’s extremely disproportionate to die because the old friends are gone. A post-FAI world would be a Nice Enough Place that its inhabitants will not even remotely mistreat you, and you will not remotely regret signing up.