While I do take the position that there is unlikely to be any theoretical personhood-related reason uploads would be impossible, I certainly don’t take the position that verifying an upload is a solved problem, or even that it’s necessarily ever going to be feasible.
That said, consider the following hypothetical process:
You are hooked up to sensors monitoring all of your sensory input.
We scan you thoroughly.
You walk around for a year, interacting with the world normally, and we log data.
We scan you thoroughly.
We run your first scan through our simulation software, feeding it the year’s worth of data, and find everything matches up exactly (to some ridiculous tolerance) with your second scan.
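For concreteness, here is a minimal sketch of the verification loop that process describes, in Python. Every name in it is a placeholder of my own (`simulate_step`, the array representation of a scan, the exact tolerance); none of it exists, and the stub only makes the replay-and-compare-against-the-second-scan step explicit.

```python
import numpy as np

def simulate_step(state: np.ndarray, stimulus: np.ndarray) -> np.ndarray:
    """Placeholder for the (entirely hypothetical) brain simulator.
    Building this is the hard part; here it is just a stub."""
    raise NotImplementedError("no such simulator exists")

def verify_upload(scan_start: np.ndarray,
                  scan_end: np.ndarray,
                  sensory_log: list[np.ndarray],
                  tolerance: float = 1e-12) -> bool:
    """Replay the year of logged sensory input through the first scan,
    then check the final simulated state against the second scan."""
    state = scan_start.copy()
    for stimulus in sensory_log:
        state = simulate_step(state, stimulus)
    # "matches up exactly (to some ridiculous tolerance)"
    return bool(np.max(np.abs(state - scan_end)) < tolerance)
```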
Do you expect there is some way in which you are sentient that your simulation could not be, if you plugged it into (say) a robot body or a virtual environment feeding it new sensory data?
That is a very good response, and my answer to you is:
I don’t know,
AND
to me it doesn’t matter, as I’m not for any kind of destructive-scanning upload, ever, though I may consider slow augmentation as parts wear out.
But I’m not saying you’re wrong. I just don’t know, and I don’t think it’s knowable.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
What about being non-destructively scanned so you can converse with something that may be a fast-running simulation of yourself, or may be something using a fast-running simulation of you to determine what to say in order to manipulate you?
Nice thought experiment.
No, I probably would not consent to being non-destructively scanned so that a fast-running simulation of me could be used to manipulate me, regardless of whether it’s sentient, provably or not.