There are choices other than obliterating or not obliterating the original. We could, say, build an artificial continuous path in the space of consciousnesses between the physical and the uploaded mind. Of course, by the time we are technically capable of establishing continuity this way, we will all realize that continuity is way overrated, and we will have much better things to do with our minds than simply uploading them.
So can I kill everyone on Earth now, since you all exist elsewhere in the multiverse?
It would be awkward if we agreed to that and our copies in the multiverse agreed to your copies killing them as well. We’d feel quite foolish.
Mitchell, how did you get from my short comment on continuity to the assumption that I believe in the most extreme version of Dust Theory? I do not. And I am not a platonist, either.
To humans, continuity is very important. I am a human, so it is very important to me. Please don’t kill me. To substrate-independent, self-modifying minds it is not important, unless we manually make them value it when we build those minds.
Really? You think continuity between t and t + 1 is important ‘in itself’, even holding the endpoints fixed, and assuming that you are anaesthetised during that interval?
To me it seems completely obvious and trivial that continuity is irrelevant.
I agree with you, but I used the word continuity in a different sense. I have just looked up the Stanford page on Personal Identity, and I think I can clear up terminology. I think you talk about physical continuity, and I agree with you about its irrelevance to everyone, including humans. I talk about psychological continuity. I think it is similarly uncontroversial that it is important to humans. The more interesting part of my statement is that psychological continuity is not important (per se) to substrate-independent self-modifying agents.
I don’t think the important thing here is continuity. After all, a person can ‘continuously die’ from dementia or ‘discontinuously survive’ after brain surgery to remove a tumour. Surely what matters is the persistence of the information that ‘makes you who you are’ in some conscious mind somewhere.
The view which many people seem to hold, but which I regard as ‘obviously wrong’, is the one that posits a thread of subjective identity, irreducible to the functional activity and information content of your mind, which might be ‘cut’ if you do something drastic like change substrate. That even if you (somehow) knew for sure that your copy would be structurally isomorphic to and “Turing-test indistinguishable” from you, there would still be an epiphenomenal ‘extra fact’ about whether the copy is ‘really you’.
That’s a good point.
I think what underlies that ‘obviously wrong’ view in many cases is a recognition of the fact that in practice, we rely on continuity to establish identity.
A great many optical illusions and magic tricks depend on this: if entity A is here at T1 and entity B at T2, and I don’t notice any T1/T2 discontinuity, I’m likely to behave as though the same entity had been here throughout.
Of course, generalizing from those intuitions to a more fundamental notion of some kind of epiphenomenal identity is unjustified, as you say.
Then again, claiming that what makes me who I am is functional activity or information content is problematic, also. It isn’t clear, for example, that amnesia or brain damage makes me somebody else. Nor is it clear that if someone else is able to emulate me well enough to pass the equivalent of a Turing test, that developing that skill makes them me.
Mostly, I think identity is a composite notion, like ‘furniture’. That is, we judge that identity is preserved by evaluating a close-enough match along many different axes, and there’s no single property or set of properties that is both necessary and sufficient to establish identity.
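For concreteness, here is one way to caricature that kind of composite judgment in code. Every axis, score, and cutoff below is invented purely for illustration; a minimal sketch, not a theory.

```python
# Toy model of identity as a composite judgment: no single axis is
# necessary or sufficient; we just require a close-enough match on
# enough axes. All axes, scores, and cutoffs are made up.

AXES = ["memories", "personality", "values", "skills", "appearance"]

def axis_match(a, b, axis):
    """Crude per-axis similarity: 1.0 on exact match, else 0.0."""
    return 1.0 if a.get(axis) == b.get(axis) else 0.0

def same_person(a, b, quorum=4):
    """Judge identity preserved if at least `quorum` axes match."""
    return sum(axis_match(a, b, ax) for ax in AXES) >= quorum

alice = {"memories": "m", "personality": "p", "values": "v",
         "skills": "s", "appearance": "tall"}
alice_amnesiac = dict(alice, memories=None)  # one axis lost

print(same_person(alice, alice_amnesiac))  # True: 4 of 5 axes still match
```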
I also don’t think it matters very much.
In case you thought I was implying that, let me clarify that the whole point is to deprecate binary oppositions such as “being someone else” vs “being the same person” and “still being oneself” vs “no longer existing”.
So of course it’s “not clear” that, say, frontal lobe damage leaves you the same person and it’s “not clear” that it leaves you as a different person.
Only if you’re talking about identity in the loose, everyday sense, which like ‘furniture’ is a mishmash of many concepts. On the other hand, if you’re talking about whether the mental state of the copy is qualitatively identical to your own (or ‘as similar as makes no difference’), then I don’t think it’s remotely problematic to say that structural and functional isomorphism (or ‘as near as makes no difference’) guarantees this. Do you?
(This just boils down to “aren’t you a functionalist?”)
I think “as near as makes no difference” is not sufficiently well defined for the Turing-test-equivalent scenario I’m quoting. The question of “makes no difference to whom?” becomes important.
This is a problem for the traditional Turing test, as well… a great deal depends on who the auditor is; some people turn out to be surprisingly undiscriminating. (Or, well, it surprises me.)
But yes, if I don’t take the Turing test bit that I quoted literally, and instead think more abstractly about a sufficiently precise and reliable functional test, then I agree with you.
Actually, I don’t consider structural isomorphism necessary in and of itself; functional isomorphism is adequate for my purposes. (That said, I do think an adequately functionally isomorphic system will tend to demonstrate a high level of structural isomorphism as well, though that’s not a well-thought-through assertion and my confidence in it is low.)
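A toy way to see why I separate the two notions: the routines below are structurally quite different, yet functionally indistinguishable on any input/output test, which is all a behavioural examiner gets to see. (A deliberately trivial sketch, nothing to do with actual minds.)

```python
# Two structurally different implementations of the same function.
# A purely functional (input/output) test cannot tell them apart.

def fib_recursive(n: int) -> int:
    """Naive recursion: deep call tree, exponential time."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Flat loop: two local variables, linear time."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Functionally isomorphic on this battery, structurally not.
assert all(fib_recursive(n) == fib_iterative(n) for n in range(20))
```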
I’m just not sure what such a test might comprise in practice. That is, if I’m in charge of QA for Upload, Inc. and it’s my job to make sure that the uploads we produce are adequately functionally isomorphic to the minds of the organic originals to avoid later lawsuits, it’s really not clear to me what tests I ought to be performing to ensure that.
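If I had to start somewhere, I would probably reach for something like a probe battery: put the same prompts to the original and the upload, compare the responses, and flag divergence above some tolerance. Everything in the sketch below (the callables, the probes, the tolerance) is invented for the sake of argument, and choosing probes that actually cover ‘who you are’ is precisely the hard part the code waves its hands at.

```python
# Hypothetical QA harness for "Upload, Inc.": purely illustrative.
# `original` and `upload` are assumed to be callables mapping a text
# prompt to a text response; no such API actually exists.

from difflib import SequenceMatcher

PROBES = [
    "Describe your earliest memory.",
    "What would you do with a free afternoon?",
    "Explain why you took your current job.",
    # ...a real battery would need vastly broader coverage.
]

def divergence(original, upload, probes=PROBES):
    """Mean textual dissimilarity of paired responses, in [0, 1]."""
    scores = [1.0 - SequenceMatcher(None, original(p), upload(p)).ratio()
              for p in probes]
    return sum(scores) / len(scores)

def passes_qa(original, upload, tolerance=0.05):
    """Certify the upload if average divergence is under tolerance."""
    return divergence(original, upload) <= tolerance
```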
I can’t think of any arguments objecting to the psychological discontinuity around uploading that don’t also apply to, say, the discontinuity of sleep. It’s trivially true that people find thinking hard about this sort of discontinuity deeply uncomfortable, but it seems less likely that uploading has unique continuity problems and more likely that it’s weird enough to expose issues that exist in everyday life but have been glossed over by familiarity.
The question of how we should value our “copies in other worlds” is independent of the question of whether we ought to value “continuity in this world (and its ‘descendants’, if it has descendants)”.
Moreover, valuing ‘copies in other worlds’ doesn’t entail being indifferent to ‘this world’, especially as ‘this world’ is apparently the only one we can control (temporarily ignoring the subtleties of ‘ambient control’).
Overrated is not the same as having no value. There’s still a bit of a burden of proof on the multiverse as well.
The question of how, if at all, we should value our “MWI brethren” (or “Tegmark brethren” for that matter) is independent of the question of whether we ought to value ‘continuity’.
In particular, one can deny the value of MWI brethren and of “continuity”, while still valuing the information content of one’s own mind.