But the availability of such technology would not coincide with my volunteering to use it.
Would you be opposed to using it? Would you be opposed to not returning to consciousness until the technology had been set up for you (i.e. installed in your mind), so it would be immediately available?
I assign a negligible probability that there exists some way I’d find acceptable of achieving this result. It sounds way creepy to me.
I find that surprising. (I don’t mean to pass judgment at all. Values are values.) Would you call yourself a transhumanist? I wonder how many such people have creepy feelings about mind modifications like that. I would have thought it’s pretty small, but now I’m not sure. I wonder if reading certain fiction tends to change that attitude.
I would call myself a transhumanist, yes. Humans suck, let’s be something else—but I would want such changes to myself to be very carefully understood by me first, and if at all possible, directed from the inside. I mentioned elsewhere that I’d try cognitive exercises if someone proposed them. Brain surgery or drugs or equivalents, though, I am not open to without actually learning what the heck they’d entail (which would take more than the critical time period absent other unwelcome intervention), and these are the ones that seem captured by “technology”.
Hmm. What I had in mind isn’t something I would call brain surgery. It would be closer to a drug. My idea (pretty much an “outlook” from Egan’s Diaspora) is that your mind would be running in software, in a huge neuron simulator, and that the tech would simply inhibit the output of certain, targeted networks in your brain or enhance others. This would obviously be much more targeted than inert drugs could achieve. (I guess you might be able to achieve this in a physical brain with nanotech.)
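A toy sketch (purely illustrative, not any real emulation API) of the kind of targeted modulation described above: a simulated mind whose individual networks can have their outputs inhibited or enhanced by a per-network gain while everything else runs untouched, and where the change is trivially reversible. The class and network names here are hypothetical.

```python
class SimulatedMind:
    """Toy stand-in for a mind running in a neuron simulator."""

    def __init__(self, networks):
        # networks: mapping of network name -> list of activation values
        self.networks = {name: list(act) for name, act in networks.items()}
        self.gains = {name: 1.0 for name in self.networks}  # 1.0 = unmodified

    def apply_outlook(self, adjustments):
        # adjustments: network name -> gain (e.g. 0.3 inhibits, 1.5 enhances)
        for name, gain in adjustments.items():
            self.gains[name] = gain

    def remove_outlook(self):
        # Trivially reversible: restore every network to its native output.
        self.gains = {name: 1.0 for name in self.networks}

    def output(self, name):
        # The only intervention is a multiplicative gain on this network's output.
        return [a * self.gains[name] for a in self.networks[name]]


mind = SimulatedMind({
    "grief_circuit": [0.9, 0.8, 0.7],   # hypothetical network labels
    "reasoning": [0.5, 0.6, 0.4],
})
mind.apply_outlook({"grief_circuit": 0.3})  # damp one targeted network
print(mind.output("grief_circuit"))          # attenuated
print(mind.output("reasoning"))              # untouched
mind.remove_outlook()                        # back to baseline
```

The point of the sketch is the selectivity: a drug bathes everything, whereas software-level modulation can, in principle, touch exactly one named network and nothing else.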
I’m not sure if this changes your intuition any. Perhaps you would still be uncomfortable with it without understanding it first. But if you trust the people who would be reviving you to not torture and enslave you, you could conceivably leave enough detailed information about your preferences for you to trust them as a first-cut proxy on the mind modification decision. (Though that could easily be infeasible.) Or perhaps you could instruct them to extrapolate from your brain whether you would eventually approve of the modification, if the extrapolation wouldn’t create a sentient copy of you. (I’m not sure if that’s possible, but it might be.)
I trust the inhabitants of the future not to torture and enslave me. I don’t trust them not to be well-intentioned evil utilitarians who think nothing of overriding my instructions and preferences if that will make me happy. So I’d like to have the resources to be happy without anybody having to be evil to me.
But that wouldn’t be making you happy. It’d be making someone very much like you happy, but someone you wouldn’t have ever matured into. (You may still care that the latter person isn’t created, or not want to pay for cryonics just for the latter person to be created; that’s not the point.) I doubt that people in the future will have so much disregard for personal identity and autonomy that they would make such modifications to you. Do you think they would prevent someone from committing suicide? If they would make unwanted modifications to you before reviving you, why wouldn’t they be willing to make modifications to unconsenting living people*? They would see your “do not revive unless...” instructions as a suicide note.
* Perhaps because they view you as a lower life form for which more paternalism is warranted than for a normal transhuman.
Of course that’s not a strong argument. If you want to be that cautious, you can.
I don’t. I wouldn’t be very surprised to wake up modified in some popular way. I’m protecting the bits of me that I especially want safe.
Maybe.
Who says they’re not? (Or: Maybe living people are easier to convince.)
How about a scenario where they gave you something equivalent to a USB port, and the option to plug in an external, trivially removable module that gave you more conscious control over your emotional state but didn’t otherwise affect your emotions? That still involves brain surgery (to install the port), but it doesn’t really seem to be in the same category as current brain surgery at all.
Hmmm. That might work. However, the ability to conceptualize one way to achieve the necessary effect doesn’t guarantee that it’s ever going to be technically feasible. I can conceptualize various means of faster-than-light travel, too; it isn’t obliged to be physically possible.
I suspect I have a more complete and reality-connected model of how such a system might work than you have of FTL. :)
I’m basically positing a combination of more advanced biofeedback and non-pleasure-center-based wireheading, for the module: You plug it in, and it starts showing you readings for various systems, like biofeedback does, so that you can pinpoint what’s causing the problem on a physical level. Actually using the device would stimulate relevant brain-regions, or possibly regulate more body-based components of emotion like heart- and breathing-rate and muscle tension (via the brain regions that normally do that), or both.
I’m also assuming that there would be considerable protection against accidentally stimulating either the pleasure center or the wanting center, to preclude abuse, if they even make those regions stimulateable in the first place.
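A minimal sketch, under heavy assumptions, of what that module's control loop might look like: biofeedback-style readouts first, then user-directed regulation restricted to a whitelist of body-regulating targets, with the pleasure and wanting centers excluded by design. The target names, sensor readings, and the stand-in stimulation call are all hypothetical.

```python
# Regions the module may influence at the user's request.
ALLOWED_TARGETS = {"heart_rate_control", "breathing_control", "muscle_tension"}

# Regions that are never stimulateable, to preclude wireheading-style abuse.
FORBIDDEN_TARGETS = {"pleasure_center", "wanting_center"}


def read_biofeedback(sensors):
    """Show readings for various systems so the user can locate the problem."""
    return {name: sensor() for name, sensor in sensors.items()}


def request_stimulation(target, level):
    """Apply user-directed stimulation only to whitelisted regulatory regions."""
    if target in FORBIDDEN_TARGETS:
        raise PermissionError(f"{target} is not stimulateable by design")
    if target not in ALLOWED_TARGETS:
        raise ValueError(f"unknown or unapproved target: {target}")
    level = max(0.0, min(level, 1.0))  # clamp to a safe range
    print(f"stimulating {target} at level {level:.2f}")  # stand-in for hardware


# Example session with fake sensors standing in for real readouts.
readings = read_biofeedback({
    "heart_rate": lambda: 92,         # beats per minute
    "muscle_tension": lambda: 0.7,    # arbitrary 0-1 scale
})
print(readings)
request_stimulation("breathing_control", 0.4)   # permitted
# request_stimulation("pleasure_center", 1.0)   # would raise PermissionError
```

The whitelist and the read-mostly design are doing the work here: the device mostly shows you what is going on, and anything it can actively change is limited to the regulatory machinery, never the reward circuitry.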
Of course I know how FTL works! It involves hyperspace! One gets there via hyperdrive! Then one can get from place to place hyper-fast! It’s all very hyper!
*ahem*
You have a point. But my more emotionally satisfying solution seems to be fairly promising. I’ll turn this over in my head more and it may serve as a fallback.