I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity.
Like TheOtherDave (I presume), I consider my identity to be adequately described by any Turing machine that can emulate my brain, or at least its prefrontal cortex + relevant memory storage. I suspect that a faithful simulation of just my Brodmann area 10, coupled with a large chunk of my memories, would restore enough of my self-awareness to be considered “me”. This sim-me would probably lose most of my emotions without the rest of the brain, but that is still infinitely better than nothing.
You’ll need the rest of the brain because these other memories would be distributed throughout the rest of your cortex. The hippocampus only contains recent episodic memories.
If you lost your temporal lobe, for example, you’d lose all non-episodic knowledge concerning what the names of things are, how they are categorized, and what the relationships between them are.
That said, I’m not sure why I should care much about having my non-episodic knowledge replaced with an off-the-shelf encyclopedia module. I don’t identify with it much.
If you only kept the hippocampus, you’d lose your non-recent episodic memories too. But technical issues aside, let me defend the “encyclopedia”:
Episodic memory is basically a cassette reel of your life, along with a few personalized associations and maybe memories of thoughts and emotions. Everything that we associate with the word “knowledge” is non-episodic. It’s not just verbal labels—that was just a handy example that I happened to know the brain region for. I’d actually care more about the non-episodic memories than the episodic stuff.
Things like “what is your wife’s name and what does her face look like” are non-episodic memories. You don’t have to think back to a time when you specifically saw your wife to remember what her name is and what her face looks like, and that you love her—that information is treated as a fact independent of any specific memory, indelibly etched into your model of the world. Cognitively speaking, “I love my wife Stacy, and she looks like this” is as much of a fact as “grass is a green plant”, and both are non-episodic memories. Your episodic memory reel wouldn’t even make sense without that sort of information. I’d still identify someone with episodic memory loss, but retaining my non-episodic memory, as me. I’d identify someone with only my episodic memories as someone else, looking at a reel of memory that does not belong to them and means nothing to them.
(Trigger warning: the link contains diary entries that are sad, horrifying, and nonfictional.) This is what complete episodic memory loss looks like. Patients like this can still remember the names and faces of the people they love.
Ironically... area 10 might actually be replaceable. I’m not sure whether any personalized memories are kept there—I don’t know what that specific region does, but it’s in an area that mostly deals with executive function—which is important for personality, but not necessarily individuality.
What’s the difference between personality and individuality?
In my head:

Personality is a set of dichotomous variables plotted on a bell curve. “Einstein was extroverted, charismatic, nonconforming, and prone to absent-mindedness” describes his personality. We all have these traits in various amounts. You can turn some of these personality knobs really easily with drugs. I can’t specify Einstein out of every person in the world using only his personality traits—I can only specify individuals similar to him.
Individuality is stuff that’s specific to the person. “Einstein’s second marriage was to his cousin and he had at least 6 affairs. He admired Spinoza, and was a contemporary of Tagore. He was a socialist and cared about civil rights. He had always thought there was something wrong about refrigerators.” Not all of these are dichotomous variables—you either spoke to Tagore or you didn’t. And it makes no sense to put people on a “satisfaction with refrigerators” spectrum, even though I suppose you could if you wanted to. And all this information together points specifically to Einstein, and to no one else in the world. Everyone in the world has a set of unique traits like fingerprints—and it doesn’t even make sense to ask what the “average” is, since most of the variables don’t exist on the same dimension.
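To make that distinction concrete, here’s a toy sketch in code. The dimensions, numbers, and facts are all invented for illustration; this is not a claim about how brains store any of this:

```python
# Hypothetical sketch: personality vs. individuality as data.
# All dimension names and values are made up.

# Personality: everyone gets a value on the SAME shared dimensions,
# so two people are directly comparable (and an "average" exists).
PERSONALITY_DIMENSIONS = ["extraversion", "charisma", "nonconformity", "absent_mindedness"]

einstein_personality = {
    "extraversion": 0.7,
    "charisma": 0.8,
    "nonconformity": 0.95,
    "absent_mindedness": 0.9,
}

# Individuality: an open-ended set of facts specific to one person.
# These "variables" don't live on shared dimensions, so there is no
# meaningful average across people.
einstein_individuality = {
    ("second_marriage", "cousin"),
    ("admired", "Spinoza"),
    ("contemporary_of", "Tagore"),
    ("politics", "socialist"),
    ("pet_peeve", "refrigerators"),
}

def similar_personality(a, b, tolerance=0.2):
    """Personality can only narrow the search to people *like* Einstein."""
    return all(abs(a[d] - b[d]) <= tolerance for d in PERSONALITY_DIMENSIONS)

def same_individual(a, b):
    """Enough individuality facts jointly pick out exactly one person."""
    return a == b
```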
And...well, when it comes to Area 10, just intuitively, do you really want to define yourself by a few variables that influence your executive function? Personally I define myself partially by my ideas, and partially by my values...and the former is definitely in the “individuality” territory.
OK, I understand what you mean by personality vs individuality. However, I doubt that the functionality of BA10 can be described “by a few variables that influence your executive function”. Then again, no one knows anything definite about it.
I take it you’re assuming that information about my husband, and about my relationship to my husband, isn’t in the encyclopedia module along with information about mice and omelettes and your relationship to your wife.
If that’s true, then sure, I’d prefer not to lose that information.
Well...yeah, I was. I thought the whole idea of having an encyclopedia was to eliminate redundancy through standardization of the parts of the brain that were not important for individuality?
If your husband and my husband, your omelette and my omelette, are all stored in the encyclopedia, it wouldn’t be an “off-the-shelf encyclopedia module” anymore. It would be an index containing individual people’s non-episodic knowledge. At that point, it’s just an index of partial uploads. We can’t standardize that encyclopedia to everyone: if the thing that stores your omelette and your husband went around viewing my episodic reel and knowing all the personal stuff about my omelette and husband... that would be weird, and the resulting being would be very confused (let alone if the entire human race was in there—I’m not sure how that would even work).
(Also, going back to the technical stuff: there may or may not be a solid dividing line between very old episodic memory and non-episodic memory.)
Sure, if your omelette and my omelette are so distinct that there is no common data structure that can serve as a referent for both, and ditto for all the other people in the world, then the whole idea of an encyclopedia falls apart. But that doesn’t seem terribly likely to me.
Your concept of an omelette probably isn’t exactly isomorphic to mine, but there’s probably a parametrizable omelette data structure we can construct that, along with a handful of parameter settings for each individual, can capture everyone’s omelette. The parameter settings go in the representation of the individual; the omelette data structure goes in the encyclopedia.
And, in addition, there’s a bunch of individualizing episodic memory on top of that… memories of cooking particular omelettes, of learning to cook an omelette, of learning particular recipes, of that time what ought to have been an omelette turned into a black smear on the pan, etc. And each of those episodic memories refers to the shared omelette data structure, but is stored with and is unique to the uploaded agent. (Maybe. It may turn out that our individual episodic memories have a lot in common as well, such that we can store a standard lifetime’s memories in the shared encyclopedia and just store a few million bits of parameter settings in each individual profile. I suspect we overestimate how unique our personal narratives are, honestly.)
Similarly, it may be that our relationships with our husbands are so distinct that there is no common data structure that can serve as a referent for both. But that doesn’t seem terribly likely to me. Your relationship with your husband isn’t exactly isomorphic to mine, of course, but it can likely similarly be captured by a common parameterizable relationship-to-husband data structure.
As for the actual individual who happens to be my husband, well, the majority of the information about him is common to all kinds of relationships with any number of people. He is his father’s son and his stepmother’s stepson and my mom’s son-in-law and so on and so forth. And, sure, each of those people knows different things, but they know those things about the same person; there is a central core. That core goes in the encyclopedia, and pointers to what subset each person knows about him goes in their individual profiles (along with their personal experiences and whatever idiosyncratic beliefs they have about him).
So, yes, I would say that your husband and my husband and your omelette and my omelette are all stored in the encyclopedia. You can call that an index of partial uploads if you like, but it fails to incorporate whatever additional computations create first-person experience. It’s just a passive data structure.
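If it helps, here’s a minimal sketch of the architecture I’m gesturing at, assuming one shared passive encyclopedia plus per-individual profiles. Every name, key, and field below is a made-up illustration, not a proposal for how real minds would be stored:

```python
# Toy sketch: a shared, passive encyclopedia plus individual profiles.
# The encyclopedia holds parametrizable concept templates and "core"
# records for individuals; profiles hold only personal settings,
# pointers, and episodic memories that refer back to shared entries.
ENCYCLOPEDIA = {
    "concept/omelette": {
        "base": "beaten eggs cooked flat in a pan",
        "parameters": ["firmness", "fillings"],
    },
    "person/daves_husband": {
        "core_facts": ["his father's son", "his stepmother's stepson",
                       "Dave's mom's son-in-law"],
    },
}

class IndividualProfile:
    """Everything unique to one uploaded agent: parameter settings for
    shared concepts, pointers to the subset of core facts they know,
    and episodic memories that reference the encyclopedia by key
    instead of duplicating it."""

    def __init__(self, name):
        self.name = name
        self.concept_params = {}  # encyclopedia key -> personal settings
        self.known_subsets = {}   # encyclopedia key -> indices of known facts
        self.episodes = []        # (description, referenced keys)

    def personalize(self, key, **params):
        assert key in ENCYCLOPEDIA  # profiles may only point at shared entries
        self.concept_params[key] = params

    def knows_about(self, key, fact_indices):
        assert key in ENCYCLOPEDIA
        self.known_subsets[key] = fact_indices

    def remember_episode(self, description, *keys):
        self.episodes.append((description, keys))

dave = IndividualProfile("Dave")
dave.personalize("concept/omelette", firmness="runny", fillings=["cheese"])
dave.knows_about("person/daves_husband", [0, 1, 2])
dave.remember_episode("the omelette that became a black smear on the pan",
                      "concept/omelette")
```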
Incidentally and unrelatedly, I’m not nearly as committed as you sound to preserving our current ignorance of one another’s perspective in this new architecture.
I’m really skeptical that parametric functions which vary on dimensions concerning omelettes (egg species? color? ingredients? how does this even work?) are a more efficient or more accurate way of preserving what our wetware encodes than simulating the neural networks devoted to dealing with omelettes. I wouldn’t even know how to start working on the problem of mapping a conceptual representation of an omelette onto parametric functions (unless we’re just using the parametric functions to model the properties of individual neurons—that’s fine).
Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?
> Incidentally and unrelatedly, I’m not nearly as committed as you sound to preserving our current ignorance of one another’s perspective in this new architecture.
I was more worried that it might break stuff (as in, the resulting beings would need to be built quite differently in order to function) if one another’s perspectives overlapped. Also, that brings us back to the original question I was raising about living forever—what exactly is it that we value and want to preserve?
> Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?
Not really. If I were serious about implementing this, I would start collecting distinct instances of omelette-concepts and analyzing them for variation, but I’m not going to do that. My expectation is that if I did, the most useful dimensions of variability would not map to any attributes that we would ordinarily think of or have English words for.
Perhaps what I have in mind can be said more clearly this way: there’s a certain amount of information that picks out the space of all human omelette-concepts from the space of all possible concepts… call that bitstring S1. There’s a certain amount of information that picks out the space of my omelette-concept from the space of all human omelette-concepts… call that bitstring S2.
S2 is much, much shorter than S1.
It’s inefficient to have 7 billion human minds each taking up valuable bits storing its own copy of S1 alongside its individual S2—7 billion redundant copies of S1 in all. Why in the world would we do that, positing an architecture that didn’t physically require it? Run a bloody compression algorithm, store S1 somewhere, and have each human mind refer to it.
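As a back-of-envelope illustration of the savings (the bit counts below are placeholders I made up; only the ratio matters):

```python
# Back-of-envelope arithmetic for the deduplication argument.
# The sizes are invented placeholders, not estimates of real concepts.
N_MINDS = 7_000_000_000
S1_BITS = 10**9   # hypothetical: the shared human omelette-concept space
S2_BITS = 10**3   # hypothetical: one person's deviation from that space

naive = N_MINDS * (S1_BITS + S2_BITS)   # every mind stores S1 + its S2
shared = S1_BITS + N_MINDS * S2_BITS    # S1 stored once, referenced by all

print(f"naive:  {naive:.3e} bits")
print(f"shared: {shared:.3e} bits")
print(f"savings factor: {naive / shared:.0f}x")
```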
I have no idea what S1 or S2 are.
And I don’t expect that they’re expressible in words, any more than I can express which pieces of a movie are stored as indexed substrings… it’s not like MPEG compression of a movie of an auto race creates an indexed “car” data structure with parameters representing color, make, model, etc. It just identifies repeated substrings and indexes them, and takes advantage of the fact that sequential frames share many substrings in common if properly parsed.
But I’m committed enough to a computational model of human concept storage that I believe they exist. (Of course, it’s possible that our concept-space of an omelette simply can’t be picked out by a bit-string, but I can’t see why I should take that possibility seriously.)
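You can see the same effect with any off-the-shelf substring-based compressor, e.g. DEFLATE via Python’s zlib. The “concepts” below are invented stand-ins, and this is only an analogy for what a mind-compressor might do: compressing two similar concepts together costs noticeably less than compressing them separately, without the compressor ever building an explicit “omelette” record:

```python
import zlib

# Two "omelette concepts" (invented stand-ins) that share most of
# their structure but differ in a few personal details.
mine = b"omelette concept: beaten eggs, knob of butter, hot pan, fold in half, cheese filling, slightly runny center"
yours = b"omelette concept: beaten eggs, knob of butter, hot pan, fold in half, herb filling, fully firm center"

separate = len(zlib.compress(mine)) + len(zlib.compress(yours))
together = len(zlib.compress(mine + yours))

# DEFLATE never builds an indexed "omelette" structure with named
# fields; it just finds and indexes repeated substrings. The shared
# structure still gets captured: the combined stream is cheaper than
# the two compressed separately.
print(f"compressed separately: {separate} bytes")
print(f"compressed together:   {together} bytes")
```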
Oh, and agreed that we would change if we were capable of sharing one another’s perspectives. I’m not particularly interested in preserving my current cognitive isolation from other humans, though… I value it, but I value it less than I value the ability to easily share perspectives, and they seem to be opposed values.
I think I’ve got a good response for this one.

My non-episodic memory contains the “facts” that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd isn’t an interesting band. My boyfriend’s non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd’s music is transcendentally good).
Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, I need to retain the facts that are in my non-episodic memory. More than that, I would also lose my sense of self if I gained contradictory memories. I would need to have my non-episodic memories and not have the facts from my boyfriend’s memory.
That’s the reason why “off the shelf” doesn’t sound suitable in this context.
So, on one level, my response to this is similar to the one I gave a few years ago (http://lesswrong.com/lw/qx/timeless_identity/9trc)… I agree that there’s a personal relationship with BtVS, just like there’s a personal relationship with my husband, that we’d want to preserve if we wanted to perfectly preserve me.
I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and that there’s a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of your heads (and the heads of the thousands of other viewers), replacing them with pointers to a common library representation of the show, and then having your personal relationship refer to the common library representation rather than your private copy.
The personal relationship remains local and private, but it takes up way less space than your mind currently does.
That said… coming back to this conversation after three years, I’m finding I just care less and less about preserving whatever sense of self depends on these sorts of idiosyncratic judgments.
I mean, when you try to recall a BtVS episode, your memory is imperfect… if you watch it again, you’ll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS, with no distortion of your current facts about its goodness (except insofar as those facts turn out to be incompatible with an actual perception, e.g., you’d have changed your mind if you watched it again on TV, too), would you take it?
I would. I mean, ultimately, what does it matter if I replace my current vague memory of the soap opera Spike was obsessively watching with a more specific memory of its name and whatever else we learned about it? Yes, that vague memory is part of my unique identity, I guess, in that nobody else has quite exactly that vague memory… but so what? That’s not enough to make it worth preserving.
And for all I know, maybe you agree with me… maybe you don’t want to preserve your private “facts” about what kind of tie Giles was wearing when Angel tortured him, etc., but you draw the line at losing your private “facts” about how good the show was. Which is fine, you care about what you care about.
But if you told me right now that I’m actually an upload with reconstructed memories, and that there was a glitch such that my current “fact” that BtVS was a good show for its time is mis-reconstructed, and that Dave before he died thought it was mediocre… well, so what?
I mean, before my stroke, I really disliked peppers. After my stroke, peppers tasted pretty good. This was startling, but it posed no sort of challenge to my sense of self.
Apparently (Me + likes peppers) ~= (Me + dislikes peppers) as far as I’m concerned.
I suspect there’s a million other things like that.
> Like TheOtherDave (I presume), I consider my identity to be adequately described by any Turing machine that can emulate my brain, or at least its prefrontal cortex + relevant memory storage.
There’s a very wide range of possible minds I consider to preserve my identity; I’m not sure the majority of those emulate my prefrontal cortex significantly more closely than they emulate yours, and the majority of my memories are not shared by the majority of those minds.
Interesting. I wonder what you would consider a mind that preserves your identity. For example, I assume that the total of your posts online, plus whatever other information is available without some hypothetical future brain scanner, all running as a process on some simulator, is probably not enough.
At one extreme, if I assume those posts are being used to create a me-simulation by a me-simulation-creator that literally knows nothing else about humans, then I’m pretty confident that the result is nothing I would identify with. (I’m also pretty sure this scenario is internally inconsistent.)
At another extreme, if I assume the me-simulation-creator has access to a standard template for my general demographic and is just looking to customize that template sufficiently to pick out some subset of the volume of mindspace my sufficiently preserved identity defines… then maybe. I’d have to think a lot harder about what information is in my online posts and what information would plausibly be in such a template to even express a confidence interval about that.
That said, I’m certainly not comfortable treating the result of that process as preserving “me.”
Then again I’m also not comfortable treating the result of living a thousand years as preserving “me.”