It’s also not clear that one could tell whether it failed. That is, OK, it processes the scans, interpolates over the gaps, and a person pops out the other end who believes himself to remember being me. Yay? Maybe.
Then again, it’s not clear to me that I ought to care about the difference.
If the superintelligence performs the same kind of coarse-grained scan on living humans and successfully copies/recreates them from that information alone, there would be every reason to think the process would work just as well with dead humans like you, right?
Well, if you care about living, rather than about somebody similar to you who wrongly believes himself to be you, you definitely should care about the difference.
I care about living (usually), but it’s not clear to me that what I care about when I care about living is absent in the “failed” scenario.
As far as I can tell, “being me” just isn’t all that precisely defined in the first place; it describes a wide range of possible conditions. Which seems to allow for the possibility of two entities A and B existing at some future time such that A and B are different, but both A and B satisfy the condition of being me.
I agree, though, that if A is the result of my body traveling through time in the conventional manner, and B is the result of some other process, and A and B are different, it is conventional to say that A is really me and B is not. It’s just that this strikes me as a socially constructed truth more than an empirically observed one.
I also agree that the test you describe is compelling evidence that the copy/recreation process is as reliable a self-preserver as anything could be.
It should be possible to check for corruption in the process by having the AGI withhold some known information during reconstruction, then asking the reconstruct questions with known answers.
(For example, the AGI could withhold the person’s birthdate, known from records, during reconstruction; if the reconstruct afterwards doesn’t remember their correct birthdate, that would be strong evidence that the process had failed. Given a sufficiently large number of these tests, the superintelligence could verify the fidelity of the reconstruction with reasonable accuracy.)
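This is essentially a holdout test. A minimal sketch of the idea, with entirely hypothetical names (`verify_reconstruction`, the `records` facts, the `ask` callable standing in for quizzing the reconstruct):

```python
import random

def verify_reconstruction(known_facts, ask, holdout_fraction=0.2,
                          threshold=0.95, seed=0):
    """Holdout check: sample some known facts (assumed to have been withheld
    during reconstruction), quiz the reconstruct on each via ask(question),
    and report whether recall clears the threshold."""
    rng = random.Random(seed)
    questions = list(known_facts)
    k = max(1, int(len(questions) * holdout_fraction))
    holdout = rng.sample(questions, k)
    correct = sum(1 for q in holdout if ask(q) == known_facts[q])
    recall = correct / k
    return recall >= threshold, recall

# Toy run: a "reconstruct" that answers from a perfect copy of the records.
records = {"birthdate": "1970-01-01", "birthplace": "Omaha", "first pet": "Rex"}
perfect_copy = dict(records)
ok, recall = verify_reconstruction(records, perfect_copy.get, holdout_fraction=1.0)
print(ok, recall)
```

A perfect copy recalls every held-out fact; a reconstruct that answers the held-out questions wrongly drives recall toward zero, which is the "process failed" signal the test is after.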