1) If we are trying to upload (the context here, if you follow the thread up a bit), then we want the emulations to be alive in whatever senses it matters to us that we are presently alive.
2) If we are building a really powerful optimization process, we want it not to be alive in whatever senses make living things morally relevant, or else we have to consider its desires as well.
OK, fair enough if you’re looking for uploads. Personally I don’t care, as I take the position that an upload isn’t really me; it’s a simulated me, in the same way that a “spirit version of me” (i.e., a soul) isn’t really me either.
Please correct my logic if I’m wrong here: in order to take the position that an upload is provably you, the only feasible way to do the test is to have other people verify that it’s you. The upload saying it’s you doesn’t cut it, and neither does the upload merely acting exactly like you. In other words, the test for whether an upload is really you doesn’t require it to actually be you, only to simulate you exactly. Which means the upload doesn’t need to be sentient.
Please fill in the blanks in my understanding so I can see where you’re coming from (this is a genuine request for information, not sarcasm).
I endorse dthomas’ answer in the grandparent; we were talking about uploads.
I have no idea what to do with the word “provably” here. It’s not clear to me that I’m provably me right now, or that I’ll be provably me when I wake up tomorrow morning. I don’t know how I would go about proving that I was me, as opposed to being someone else who used my body and acted just like me. I’m not sure the question even makes sense.
To say that other people’s judgments on the matter define the issue is clearly insufficient. If you put X in a dark cave with no observers for a year, then if X is me, I’ve experienced a year of isolation; if X isn’t me, I haven’t experienced it; and if X isn’t anyone, no one has experienced it. The difference between those scenarios does not depend on external observers; if you put me in a dark cave for a year with no observers, I have spent a year in a dark cave.
Mostly, I think that identity is a conceptual node that we attach to certain kinds of complex systems, because our brains are wired that way, but we can in principle decompose identity into component parts (shared memory, continuity of experience, various sorts of physical similarity, etc.) without anything left over. If a system has all those component parts (it remembers what I remember, it remembers being me, it looks and acts like me, etc.), then our brains will attach that conceptual node to that system, and we’ll agree that that system is me, and that’s all there is to say about that.
And if a system shares some but not all of those component parts, we may not agree whether that system is me, or we may not be sure if that system is me, or we may decide that it’s mostly me.
Personal identity is similar in this sense to national identity. We all agree that a child born to Spaniards and raised in Spain is Spanish, but is the child of a Spaniard and an Italian who was born in Barcelona and raised in Venice Spanish, or Italian, or neither, or both? There’s no way to study the child to answer that question, because the child’s national identity was never an attribute of the child in the first place.
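To make the decomposition concrete, here’s a toy sketch in Python. The particular components, the scoring, and the threshold are all invented for illustration; this isn’t a claim about which components matter or how much.

```python
from dataclasses import dataclass

# Toy model of "identity as a decomposable conceptual node". The component
# list and the threshold below are hypothetical stand-ins, not a theory.

@dataclass
class IdentityComponents:
    shared_memory: float             # fraction of my memories it shares, 0..1
    continuity_of_experience: float  # 0..1
    physical_similarity: float       # 0..1

def attaches_me_node(c: IdentityComponents, threshold: float = 0.9) -> bool:
    """We attach the 'me' node when every component is present; when only
    some are, the question is ambiguous rather than hiding a deeper fact."""
    return min(c.shared_memory, c.continuity_of_experience,
               c.physical_similarity) >= threshold

print(attaches_me_node(IdentityComponents(1.0, 1.0, 1.0)))  # True: clearly me
print(attaches_me_node(IdentityComponents(1.0, 0.3, 0.9)))  # False: disputed case
```

Like the Spanish/Italian child, a partial match isn’t hiding a deeper answer; the ambiguity is all there is.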
While I do take the position that there is unlikely to be any theoretical personhood-related reason uploads would be impossible, I certainly don’t take the position that verifying an upload is a solved problem, or even that it’s necessarily ever going to be feasible.
That said, consider the following hypothetical process:
1) You are hooked up to sensors monitoring all of your sensory input.
2) We scan you thoroughly.
3) You walk around for a year, interacting with the world normally, and we log the data.
4) We scan you thoroughly again.
5) We run your first scan through our simulation software, feeding it the year’s worth of logged data, and find that everything matches up exactly (to within some ridiculously tight tolerance) with your second scan.
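For concreteness, here is a toy, self-contained sketch of that check. The vector “brain state”, the update rule, and the tolerance are invented stand-ins; this illustrates the shape of the protocol, not how an actual scan or emulator would work.

```python
import numpy as np

def step(state: np.ndarray, stimulus: np.ndarray) -> np.ndarray:
    """One deterministic update of the toy 'brain state' given one stimulus."""
    return np.tanh(state + 0.1 * stimulus)

def simulate(initial_scan: np.ndarray, sensory_log: np.ndarray) -> np.ndarray:
    """Step 5: replay the logged sensory data through the simulator."""
    state = initial_scan.copy()
    for stimulus in sensory_log:
        state = step(state, stimulus)
    return state

def verify(initial_scan, sensory_log, final_scan, tolerance=1e-9) -> bool:
    """Does the replayed simulation match the second scan within tolerance?"""
    return np.allclose(simulate(initial_scan, sensory_log), final_scan,
                       atol=tolerance)

rng = np.random.default_rng(0)
initial_scan = rng.normal(size=16)                # step 2: first scan
sensory_log = rng.normal(size=(365, 16))          # steps 1 and 3: logged input
final_scan = simulate(initial_scan, sensory_log)  # step 4: second scan
print(verify(initial_scan, sensory_log, final_scan))  # True here, because the
# toy "subject" follows the same dynamics as the simulator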
Do you expect that there is some way in which you are sentient, in which your simulation could not be, if you plugged it into (say) a robot body or a virtual environment that would feed it new sensory data?
That is a very good response, and my answer to you is:
I don’t know
AND
To me it doesn’t matter, as I’m not in favor of any kind of destructive-scanning upload, ever, though I may consider slow augmentation as parts wear out.
But I’m not saying you’re wrong. I just don’t know and I don’t think it’s knowable.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
What about being non-destructively scanned so you can converse with something that may be a fast-running simulation of yourself, or may be something using a fast-running simulation of you to determine what to say in order to manipulate you?
Nice thought experiment.
No, I probably would not consent to being non-destructively scanned so that a simulation of me could be used to manipulate me like that.
Regardless of whether or not it’s sentient, provably or otherwise.