I take it you’re assuming that information about my husband, and about my relationship to my husband, isn’t in the encyclopedia module along with information about mice and omelettes and your relationship to your wife.
If that’s true, then sure, I’d prefer not to lose that information.
Well...yeah, I was. I thought the whole idea of having an encyclopedia was to eliminate redundancy through standardization of the parts of the brain that were not important for individuality?
If your husband and my husband, your omelette and my omelette, are all stored in the encyclopedia, it wouldn’t be an “off-the-shelf encyclopedia module” anymore. It would be an index containing individual people’s non-episodic knowledge. At that point, it’s just an index of partial uploads. We can’t standardize that encyclopedia across everyone: if the thing that stores your omelette and your husband went around viewing my episodic reel and knowing all the personal stuff about my omelette and husband...that would be weird, and the resulting being would be very confused (let alone if the entire human race were in there—I’m not sure how that would even work).
(Also, going back to the technical stuff: there may or may not be a solid dividing line between very old episodic memory and non-episodic memory.)
Sure, if your omelette and my omelette are so distinct that there is no common data structure that can serve as a referent for both, and ditto for all the other people in the world, then the whole idea of an encyclopedia falls apart. But that doesn’t seem terribly likely to me.
Your concept of an omelette probably isn’t exactly isomorphic to mine, but there’s probably a parametrizable omelette data structure we can construct that, along with a handful of parameter settings for each individual, can capture everyone’s omelette. The parameter settings go in the representation of the individual; the omelette data structure goes in the encyclopedia.
And, in addition, there’s a bunch of individualizing episodic memory on top of that… memories of cooking particular omelettes, of learning to cook an omelette, of learning particular recipes, of that time what ought to have been an omelette turned into a black smear on the pan, etc. And each of those episodic memories refers to the shared omelette data structure, but is stored with and is unique to the uploaded agent. (Maybe. It may turn out that our individual episodic memories have a lot in common as well, such that we can store a standard lifetime’s memories in the shared encyclopedia and just store a few million bits of parameter settings in each individual profile. I suspect we overestimate how unique our personal narratives are, honestly.)
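The division of labor described above can be made concrete with a minimal sketch. Everything here is illustrative, especially the parameter names, since the discussion itself suggests the real dimensions of variation probably wouldn't map onto nameable attributes: the shared template is stored once in the encyclopedia, while each individual keeps only small parameter settings and episodic memories that point back to it.

```python
from dataclasses import dataclass, field

# Shared, de-duplicated concept stored exactly once in the encyclopedia.
# The parameter names ("p0", "p1", ...) are placeholders: the text argues
# the real dimensions of variation wouldn't have English names.
@dataclass
class ConceptTemplate:
    name: str
    parameters: list  # names of the dimensions individuals vary on

ENCYCLOPEDIA = {
    "omelette": ConceptTemplate("omelette", ["p0", "p1", "p2"]),
}

@dataclass
class Individual:
    name: str
    # Per-concept parameter settings: small and individual-specific.
    settings: dict = field(default_factory=dict)
    # Episodic memories refer to shared concepts by key, not by copy.
    episodes: list = field(default_factory=list)

    def concept(self, key):
        # "My omelette" = the shared template plus my parameter settings.
        return ENCYCLOPEDIA[key], self.settings.get(key)

alice = Individual("alice",
                   settings={"omelette": (0.2, 0.9, 0.1)},
                   episodes=[("burned an omelette into a black smear", "omelette")])
bob = Individual("bob", settings={"omelette": (0.3, 0.8, 0.4)})

# Both individuals' concepts resolve to the same shared template object;
# only the small per-individual settings differ.
assert alice.concept("omelette")[0] is bob.concept("omelette")[0]
assert alice.concept("omelette")[1] != bob.concept("omelette")[1]
```

The design point is simply that the large structure is referenced, never copied; the per-individual state is a handful of settings plus episodes that carry a key into the shared store.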
Similarly, it may be that our relationships with our husbands are so distinct that there is no common data structure that can serve as a referent for both. But that doesn’t seem terribly likely to me. Your relationship with your husband isn’t exactly isomorphic to mine, of course, but it can likely similarly be captured by a common parameterizable relationship-to-husband data structure.
As for the actual individual who happens to be my husband, well, the majority of the information about him is common to all kinds of relationships with any number of people. He is his father’s son and his stepmother’s stepson and my mom’s son-in-law and so on and so forth. And, sure, each of those people knows different things, but they know those things about the same person; there is a central core. That core goes in the encyclopedia, and pointers to what subset each person knows about him goes in their individual profiles (along with their personal experiences and whatever idiosyncratic beliefs they have about him).
So, yes, I would say that your husband and my husband and your omelette and my omelette are all stored in the encyclopedia. You can call that an index of partial uploads if you like, but it lacks whatever additional computations create first-person experience. It’s just a passive data structure.
Incidentally and unrelatedly, I’m not nearly as committed as you sound to preserving our current ignorance of one another’s perspective in this new architecture.
I’m really skeptical that parametric functions which vary on dimensions concerning omelettes (egg species? color? ingredients? how does this even work?) are a more efficient or more accurate way of preserving what our wetware encodes, compared to simulating the neural networks devoted to dealing with omelettes. I wouldn’t even know how to start on the problem of mapping a conceptual representation of an omelette onto parametric functions (unless we’re just using the parametric functions to model the properties of individual neurons—that’s fine).
Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?
Incidentally and unrelatedly, I’m not nearly as committed as you sound to preserving our current ignorance of one another’s perspective in this new architecture.
I was more worried that it might break stuff (as in, the resulting beings would need to be built quite differently in order to function) if one another’s perspectives overlapped. Also, that brings us back to the original question I was raising about living forever—what exactly is it that we value and want to preserve?
Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?
Not really. If I were serious about implementing this, I would start collecting distinct instances of omelette-concepts and analyzing them for variation, but I’m not going to do that. My expectation is that if I did, the most useful dimensions of variability would not map to any attributes that we would ordinarily think of or have English words for.
Perhaps what I have in mind can be said more clearly this way: there’s a certain amount of information that picks out the space of all human omelette-concepts from the space of all possible concepts… call that bitstring S1. There’s a certain amount of information that picks out the space of my omelette-concept from the space of all human omelette-concepts… call that bitstring S2.
S2 is much, much shorter than S1.
It’s inefficient for 7 billion human minds to each spend valuable bits storing its own copy of S1 along with its individual S2. Why in the world would we do that, positing an architecture that didn’t physically require it? Run a bloody compression algorithm, store S1 somewhere, and have each human mind refer to it.
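The storage arithmetic behind that argument can be sketched with toy numbers. The sizes below are pure assumptions for illustration, since nobody knows the actual bitlengths of S1 or S2; the point is only how the savings scale when the shared string is stored once:

```python
# Assumed sizes, purely illustrative: S1 is the bitstring picking out the
# shared human omelette concept-space; S2 is one individual's small delta.
S1_BITS = 10**6   # assumed size of the shared bitstring
S2_BITS = 10**3   # assumed size of an individual's delta
MINDS = 7 * 10**9

# Naive scheme: every mind stores its own copy of S1 plus its S2.
naive = MINDS * (S1_BITS + S2_BITS)

# Shared scheme: S1 stored once in the encyclopedia; minds store only
# their S2 plus a (negligible) reference to S1.
shared = S1_BITS + MINDS * S2_BITS

print(f"naive:  {naive:.3e} bits")
print(f"shared: {shared:.3e} bits")
print(f"savings factor: {naive / shared:.0f}x")
```

Under these made-up sizes the savings factor approaches (S1 + S2) / S2, i.e. about a thousandfold; whatever the real numbers are, the factor is driven by how much larger the shared string is than the individual delta.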
I have no idea what S1 or S2 are.
And I don’t expect that they’re expressible in words, any more than I can express which pieces of a movie are stored as indexed substrings… it’s not like MPEG compression of a movie of an auto race creates an indexed “car” data structure with parameters representing color, make, model, etc. It just identifies repeated substrings and indexes them, and takes advantage of the fact that sequential frames share many substrings in common if properly parsed.
But I’m committed enough to a computational model of human concept storage that I believe they exist. (Of course, it’s possible that our concept-space of an omelette simply can’t be picked out by a bit-string, but I can’t see why I should take that possibility seriously.)
Oh, and agreed that we would change if we were capable of sharing one another’s perspectives. I’m not particularly interested in preserving my current cognitive isolation from other humans, though… I value it, but I value it less than I value the ability to easily share perspectives, and they seem to be opposed values.
My non-episodic memory contains the “facts” that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd isn’t an interesting band. My boyfriend’s non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd’s music is transcendentally good).
Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, then I would need to retain the facts that were in my non-episodic memory. More than that, I would also lose my sense of self if I gained contradictory memories. I would need to have my non-episodic memories and not have the facts from my boyfriend’s memory.
That’s the reason why “off the shelf” doesn’t sound suitable in this context.
So, on one level, my response to this is similar to the one I gave (a few years ago) [http://lesswrong.com/lw/qx/timeless_identity/9trc]… I agree that there’s a personal relationship with BtVS, just like there’s a personal relationship with my husband, that we’d want to preserve if we wanted to perfectly preserve me.
I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and that there’s a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of your heads (and the heads of thousands of other viewers), replacing them with pointers to a common library representation of the show, and then having your personal relationship refer to that common representation rather than to a private copy.
The personal relationship remains local and private, but it takes up way less space than your mind currently does.
That said… coming back to this conversation after three years, I’m finding I just care less and less about preserving whatever sense of self depends on these sorts of idiosyncratic judgments.
I mean, when you try to recall a BtVS episode, your memory is imperfect… if you watch it again, you’ll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS—no distortion of your current facts about its goodness, except insofar as those facts turn out to be incompatible with an actual perception (e.g., you’d have changed your mind if you watched it again on TV, too)—would you take it?
I would. I mean, ultimately, what does it matter if I replace my current vague memory of the soap opera Spike was obsessively watching with a more specific memory of its name and whatever else we learned about it? Yes, that vague memory is part of my unique identity, I guess, in that nobody else has quite exactly that vague memory… but so what? That’s not enough to make it worth preserving.
And for all I know, maybe you agree with me… maybe you don’t want to preserve your private “facts” about what kind of tie Giles was wearing when Angel tortured him, etc., but you draw the line at losing your private “facts” about how good the show was. Which is fine, you care about what you care about.
But if you told me right now that I’m actually an upload with reconstructed memories, and that there was a glitch such that my current “fact” about BtVS being a good show for its time was mis-reconstructed, and Dave before he died thought it was mediocre… well, so what?
I mean, before my stroke, I really disliked peppers. After my stroke, peppers tasted pretty good. This was startling, but it posed no sort of challenge to my sense of self.
Apparently (Me + likes peppers) ~= (Me + dislikes peppers) as far as I’m concerned.
I suspect there’s a million other things like that.