My $0.02 (or 1 utilon, if you prefer): “You can’t get out of the game.”
I agree completely that my identity would not carry forward in an unproblematic fashion across various hypothetical massive changes to my cognitive infrastructure (e.g., genuine sex changes, emotional alterations, intelligence augmentation, etc.).
I also believe my identity does not carry forward in an unproblematic fashion across various actual massive changes to my cognitive infrastructure (e.g., adolescence, maturation, senescence, brain damage due to stroke, etc.).
I also believe that, to a lesser degree, my identity does not carry forward in an unproblematic fashion across various minor changes (e.g., falling in love). To a still-lesser degree, my identity doesn’t carry forward across entirely trivial changes (reading a book, writing this comment, waking up tomorrow).
So it’s not that I think my identity is somehow so robust that I can make changes to my cognitive infrastructure with impunity. Quite the contrary: my current identity, the one I have while writing this comment, is extremely fragile. Indeed, I have no confidence that it will continue to exist once I post this comment.
Of course, the nice thing about minor and trivial changes is that some things are preserved, which provides a sense of continuity, which is pleasant. Being content with an identity whose exact parameters are in continual flux seems to be a fairly robust attribute, for example; I have spent most of my adult life in configurations that share it. (It took me quite a few years to work my way into one, though.)
But, OK… what about massive changes, you ask? What if the SuperHappyFunPeople want to make it such that I don’t mind killing babies? As you say: what criteria do I apply?
That’s an excellent question, and I don’t have a good answer, but I’m basically not a conservative: I don’t think “so let’s not mess with emotions until we know how to answer that question” is a good response.
My primary objection to it is that if we want the purported benefit of “playing it safe” in that way, it’s not enough to keep our emotions untouched. Our emotions are an integral part of the system that we are manipulating; we don’t protect anything worth protecting by leaving our emotions untouched while augmenting our intellect and altering our environments.
We’ve demonstrated this already: making large-scale changes to our environments while keeping our emotions unchanged has not been working particularly well for humanity thus far, and I have no reason to expect it to suddenly start working better as we make larger and larger environmental changes.
We can’t opt out.
So what do we do? Well, one compromise approach is experimentation with a safety switch.
That is, make the emotional change temporary, in a way that my altered self cannot override… I spend an hour, or a day, or a year, or a hundred years with a particular massive cognitive alteration, and then I switch back to the configuration I started with, with my memories as close to intact as possible. I can then ask myself “Would I, as I am now, rather remain as I am now, or go back to being like that full-time?”
(Nor need the changes be random, any more than I read random collections of letters today. There are billions of other people in the world, I could spend centuries just exploring changes that other people have recommended and seeing whether they work for me.)
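(If it helps to see the shape of that procedure, here’s a toy sketch in Python. Mind, trial_with_safety_switch, and the rest are hypothetical stand-ins invented purely for illustration, not a claim about how any of this would actually be built.)

```python
# Toy sketch of the "safety switch" trial described above; purely illustrative.
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class Mind:
    emotions: dict    # stand-in for whatever the alteration touches
    memories: tuple   # accumulated experience, carried across the revert

def trial_with_safety_switch(baseline: Mind,
                             apply_change: Callable[[Mind], Mind],
                             live_with: Callable[[Mind], Mind]) -> Mind:
    """Apply the alteration, live with it for some period, then revert to the
    baseline configuration while keeping the memories accumulated meanwhile.
    The revert is unconditional: the modified self gets no veto over it."""
    modified = apply_change(baseline)   # the massive cognitive alteration
    lived = live_with(modified)         # an hour, a day, a hundred years...
    return replace(baseline, memories=lived.memories)  # the switch fires

def decide(baseline: Mind,
           apply_change: Callable[[Mind], Mind],
           live_with: Callable[[Mind], Mind],
           prefers_change: Callable[[Mind], bool]) -> Mind:
    """Only the restored, memory-enriched baseline decides whether to go
    back to being like that full-time."""
    restored = trial_with_safety_switch(baseline, apply_change, live_with)
    return apply_change(restored) if prefers_change(restored) else restored
```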
That is, of course, no guarantee… it’s entirely possible that inconsistencies in my current configuration are such that, even given a choice, I choose a state that I consider wrong.
My own inclination at that point is to shrug my shoulders and say “OK, clearly the majority of my own society of mind wanted that state; the fact that a minority didn’t want it and expressed its preferences as moral beliefs doesn’t necessarily trump anything.”
That said, I do prefer consensus when the time to work on it is available, so I’d probably hold off on making a change I was conflicted about and instead prioritize either coming up with an alternate change all of me was in favor of, or becoming more consistent via more acceptable gradual steps. (In much the same way, I might oppose making a social change of which I approved but on which my community had not achieved consensus.)
Maybe I’d reset a few thousand times and experience different cognitive architectures for entire subjective pre-Singularity lifetimes, just to have a wide enough platform on which to base the next increment. (One could interpret the tradition of samsara as a form of this, for example.)
But my main point here is that none of this is unique to emotional changes. Lots of things can change my identity, and potentially cause me to do things that I had previously desired to not-do. (For example, right now I’m reading LW at work even though I had previously desired not to do that.) And running away from that by saying “Well, then let’s not change anything that can affect our identities” has some pretty awful costs, too.
Yes, we want to play it safe; there are a lot of ways we can permanently screw up.
But let’s not pretend we can opt out of the game.
Huh. And having now read this (with which I agree), I really don’t understand what you think is special about modifying emotions. I mean, sure, change your emotions and you might not want to change back, but the same is true of having a psychotic break.