I’m sort of dubious that this is even possible, and even if it were, it wouldn’t be me since it wouldn’t share continuity with me. It would likely, at best, just be an AI taught to pretend to be me.
What is ‘continuity’? Why is continuity important to you? What changes in anticipated experience would you expect if continuity were important to identity versus not, and why do you prefer one answer over the other?
Continuity of consciousness, and it’s important because without it, the me that results from the uploading isn’t this me, just an instance of me, and those are not the same thing. The fact that there is a copy of me going around does not change the fact that this instance of me is dead.
Whenever you enter deep sleep you lose continuity of consciousness. Whence your intuition that continuity is important? Are you not impressed by timeless physics, or by Tegmark’s multiverses? In a spatially infinite universe with particles being indistinguishable and whole Hubble volumes also being indistinguishable (the standard cosmological position), in what sense are different ‘you’s actually different people, even if there is no causal connection between them?
> The fact that there is a copy of me going around does not change the fact that this instance of me is dead.

But does it even matter? If it looks like a you, thinks like a you, cares about all the same things you do, then your utility function should probably just consider it a you.
If you believe in Tegmark’s multiverse, what’s the point of uploading at all? You already inhabit an infinity of universes, all perfectly optimized for your happiness.
Personally I’m very inclined toward Tegmark’s position and I have no idea how to answer the above question.
Infinity, yes, but the relative sizes of infinities matter. There’s also an infinity of universes of infinite negative utility. Uploading yourself increases the relative measure of the ‘good’ universes.
This is especially true if you think of ‘measure’ or ‘existence’ as being assigned to computations via a universal prior of some kind, as proposed by Schmidhuber and almost everyone else (and not the uniform prior Tegmark tended towards, for some reason). You want as large a swath of good utility in the ‘simple’ universes as possible, since those universes have the most measure and thus ‘count’ for more under anything like our naive utility functions.
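To put toy numbers on this (my own illustration; real description lengths are unknowable, since Kolmogorov complexity is uncomputable): a length-based universal prior gives an n-bit program weight on the order of 2^-n, so

    measure(100-bit universe) / measure(120-bit universe) = 2^-100 / 2^-120 = 2^20 ≈ 10^6

Twenty extra bits of description cost you a factor of about a million in measure, which is why the ‘simple’ universes dominate.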
Uploading in a simple universe would thus be worth significantly more utility than the infinity of universes all optimized for your happiness.
That said, it’s likely that our intuitions about this are all really confused: UDT is the current approach to reasoning about these issues, and I’m not fit to explain its intuitions or implications. Wei? Nesov? Would anyone like to point out how all of this works, and how anthropics gets dissolved along the way?
How do you do that quote thing, anyway?
In any case, you don’t lose continuity of consciousness when asleep; your brain keeps ticking away in the background as it does its maintenance routines. I’d never heard of timeless physics or Tegmark’s multiverses; looking at the links, though, I don’t really see their relevance. I do in fact believe in the many-worlds interpretation of quantum mechanics; that there are nigh-infinite copies of me does not mean that this particular instance of me is unimportant to me (and, of course, a good chunk of those other mes likely feel the same).
If someone built a teleporter that created a copy of me, said copy would be an instance of the class of persons who refer to themselves as “nick012000” on the Internet; however, creating another instance of that class does not make it the same as this instance. To use a programming metaphor, it’d be like saying that “Nick012000 nick1 = new Nick012000(); Nick012000 nick2 = (Nick012000) nick1.clone();” produces one object. It doesn’t; it produces two.
To continue the programming metaphor, I also wouldn’t join a hivemind, since that would turn this particular instance of the Nick012000 class into just a data field in an instance of the Hivemind class. But I would be okay with creating a hivemind of multiple blank bodies with my mind written onto them, since that would just be like running “Nick012000 nick1 = new Nick012000(); Nick012000 nick2 = nick1;”, where both variable names refer to the same object.
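To make the metaphor concrete, here is a minimal runnable sketch (Nick012000 is the hypothetical class from the metaphor above; the Cloneable boilerplate is my own addition, since Java requires it for clone() to work):

    // Two distinct objects versus two names for one object.
    public class Nick012000 implements Cloneable {
        @Override
        public Nick012000 clone() {
            try {
                return (Nick012000) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e); // unreachable: we implement Cloneable
            }
        }

        public static void main(String[] args) {
            Nick012000 nick1 = new Nick012000();
            Nick012000 nick2 = nick1.clone(); // the teleporter: a second instance
            Nick012000 nick3 = nick1;         // the hivemind of blanks: an alias
            System.out.println(nick1 == nick2); // false: two objects
            System.out.println(nick1 == nick3); // true: one object, two names
        }
    }

The == comparison checks reference identity, i.e. whether two names point at the very same object, which is exactly the distinction being drawn.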
And, yes, it would matter to my utility function, since my utility function gives a strong positive weight both to the continued existence of the class of persons who refer to themselves as “nick012000” on the Internet, and to the particular instance of said class that is evaluating and executing said utility function.
I refuse to tell you! Just kidding. You preface a line or block of text with the ‘>’ symbol followed by a space. You can click the little green help button on the bottom right of the comment window to see other kinds of formatting (it should really be called something else, I know).
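For example, typing

    > The fact that there is a copy of me going around

on its own line renders it as an indented quotation block, like the quoted lines elsewhere in this thread.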
> I’d never heard of timeless physics or Tegmark’s multiverses; looking at the links, though, I don’t really see their relevance.

I highly recommend reading Tegmark’s popular science paper on multiverses; it’s an excellent example of clear and concise science writing.
I think I understand your position better now, thanks for clarifying.
> You preface a line or block of text with the ‘>’ symbol followed by a space.

Thanks.
> I highly recommend reading Tegmark’s popular science paper on multiverses; it’s an excellent example of clear and concise science writing.

I’ll probably do so, once I have the time. I’m procrastinating from doing university stuff at the moment.
> I think I understand your position better now, thanks for clarifying.

No worries! I think I might have edited it after you posted, though.
I agree—the whole idea of writing oneself into the future seems extremely implausible, especially using something like email.
Much more plausible is the notion that you could modify a “standard median mind” to fit someone’s writing. But I suspect that most of the workings of such a creation would come from the standard model rather than from the writings, and also that this is not what people have in mind as far as “writing oneself into the future” goes.
I agree. I don’t see how even an FAI could reproduce a model of your brain that is significantly more accurate than a slightly modified standard median mind. Heck, even if an FAI had some parts of your brain preserved and some of your writings (e.g. email), I’m not sure it could reproduce the rest of you with accuracy.
I think this is one of those domains where structural uncertainty plays a large part. If you’re talking about a Bayesian superintelligence operating at the physical limits of computation… I’d feel rather uneasy speculating about what limits it could possibly have. In a Tegmark ensemble universe, you get possibilities like ‘hacking out of the matrix’, acausal trade, or similar AGI meta-golden-rule cooperative optimization, and that’s some seriously powerful stuff.