Let’s start by setting aside the whole mind-uploading problem, and look at something more prosaic: what makes “me” at noon today the same as “me” at 3:00 am one year ago? In fact, let’s set aside what makes this true from my perspective; what makes you think these two bags of chemicals are the “same person”? When you see your mother, and then see her again later on, why do you think of her as the same person?
This is basically the same problem as Pointing to a Flower, except we’ve dragged in a bunch of new intuitions by making it about humans instead of flowers. (It’s also the same question Yudkowsky uses in his post on cryonics in the sequences, although I can’t find a link at the moment.)
The answer from the flower post does a fine job of saying what makes me the same organism as before: draw a boundary around my body in spacetime; that region is a local minimum in the amount of summary data required to make predictions far away from it. Nontrivial, but quantifiable.
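To gesture at what “quantifiable” could look like here, a minimal sketch under toy assumptions of my own (not from the flower post or the abstraction sequence): a jointly Gaussian world where a “flower” block of variables interacts with far-away variables only through a one-dimensional latent summary, so the far-away-relevant information in the block can be read off as a mutual information or an effective rank. The helper names (`gaussian_mi`, `summary_dim`) are hypothetical.

```python
# Toy sketch (my assumptions, not the post's): quantify "summary data required
# to make predictions far away" for a block of variables in a Gaussian world
# where a 1-D latent summary s mediates all flower <-> far-away correlation.
import numpy as np

rng = np.random.default_rng(0)
n_flower, n_far, n_latent, n_samples = 8, 6, 1, 100_000

A = rng.normal(size=(n_flower, n_latent))  # how the flower loads on s
B = rng.normal(size=(n_far, n_latent))     # how far-away variables load on s
s = rng.normal(size=(n_samples, n_latent))
flower = s @ A.T + 0.3 * rng.normal(size=(n_samples, n_flower))
far    = s @ B.T + 0.3 * rng.normal(size=(n_samples, n_far))

def gaussian_mi(x, y):
    """Mutual information (nats) between two jointly Gaussian blocks."""
    cov = np.cov(np.hstack([x, y]).T)
    dx = x.shape[1]
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return 0.5 * (logdet(cov[:dx, :dx]) + logdet(cov[dx:, dx:]) - logdet(cov))

def summary_dim(x, y, tol=0.05):
    """Effective rank of the cross-correlation between x and y: roughly how
    many numbers about x you need to retain to predict y."""
    cov = np.cov(np.hstack([x, y]).T)
    dx = x.shape[1]
    corr = cov[:dx, dx:] / np.outer(np.sqrt(np.diag(cov[:dx, :dx])),
                                    np.sqrt(np.diag(cov[dx:, dx:])))
    return int(np.sum(np.linalg.svd(corr, compute_uv=False) > tol))

print("I(flower; far away) ~", round(gaussian_mi(flower, far), 2), "nats")
print("summary dimensions needed:", summary_dim(flower, far))  # ~1, not 8
```

In this toy setup the eight-variable block carries only about one dimension of far-away-relevant information, which is the flavor of the “small summary” claim; actually exhibiting a local minimum over candidate boundaries would require comparing several different regions, which this sketch does not attempt.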
But for things like mind uploading, we want to go further than that. Sure, we could simulate my entire body, but it seems like what makes “me” doesn’t require all that. After all, I’m still me if I lose all my limbs and torso and get attached to some sci-fi life support machine. “Me” apparently does not just mean my body. In practice, I think humans use “me”, “you”, etc with several different referents depending on context, and the body is one of them. The referent relevant for mind-uploading purposes is, presumably, the mind—whatever that means.
There are a few more steps before I’m ready to tackle the referent of “my mind”, but I’m pretty confident that the first step is basically the same as for the flower, and I’m also pretty confident that it is crisply quantifiable.
As to the connection with pattern matching, I’m pretty sure that the flower-approach is roughly equivalent to what a Bayesian learner would learn by looking for patterns in data. But that’s a post for another time.
You may be thinking of “Timeless Identity”. Best wishes, the Less Wrong Reference Desk
Thanks for the message; I’m looking forward to that post. My very limited knowledge of this subject tells me you’re wrong, but I’ll be reading your sequence on abstraction before I try to argue with you. I’ll probably change my mind, but I’ll let you know if at the end of it I still disagree.