I’m still struggling with this. I’m fine with the notion that you could, in theory, teleport a copy of me across the universe and to that copy there would be a sense of continuity. But your essay didn’t convince me that the version of me entering the teleporter would feel that continuity. To make it explicit, say you get into that teleporter and due to a software bug it doesn’t “deconstruct” you upon teleportation. Here you are on this end and the technician says “trust me, you were teleported”. He then explains that due to intergalactic law, two of you are not allowed to exist, so the version of you on this side of the teleporter must be euthanized. (a) would you be fine with this, since you know there is a copy of you on the other side? and (b) are you asserting that you have some sort of shared consciousness with the copy? To me it seems clear that while the copy would remember getting into the teleporter, the original version would have no notion of whether teleportation was successful or not.
The key to this koan (at least for me) is undoing the assumption that there can be only one of you. There’s one of you that steps in and one that steps out. And they’re the same you.
What I value about me is the pattern of beliefs, memories, and values. The other me has an identical brain state, so it has all of those. It is simply another me. I care about the second one pretty much exactly as much as I care about the same pattern continuing in a more similar location and with more similar molecules instantiating the pattern. That’s because I care far less about where I am and which molecules I’m made of than the pattern of identity in my mind/brain.
The same as you can have two of anything else that’s close-enough-for-the-purpose. I can have two rocks if I don’t care about the difference in their molecular makeup. I can have two mes.
Yes, you have some sort of shared consciousness with the copy; it’s the same shared consciousness between the you of today and the you that wakes up tomorrow. It doesn’t imply sharing events that happen simultaneously or anything mystical about “sharing consciousness”.
That’s why I’d happily step into the destructive teleporter if I was certain the copy on the other side would have exactly my mind-pattern, including memories, beliefs, and values. That’s me.
There’s one of you that steps in and one that steps out. And they’re the same you.
[...]
That’s why I’d happily step into the destructive teleporter if I was certain the copy on the other side would have exactly my mind-pattern, including memories, beliefs, and values. That’s me.
These statements make the most sense only in the standard LW-computationalist frame, which reads to me as substantively anti-physicalist and mostly unreasonable to believe in, for reasons building off of what I sketched out in a comment to Ruby. But, in any case, I can concede it for now, if only for purposes of this conversation.
What I value about me is the pattern of beliefs, memories, and values.
The attempted mind-reading of others is (justifiably) seen as rude in conversations over the Internet, but I must nonetheless express very serious skepticism about this claim, as it’s currently written.

For one, I do not believe that “beliefs” and “values” ultimately make sense as distinct, coherent concepts that carve reality at the joints. This topic has been talked about before on LW a number of times, but I still fully endorse Charlie Steiner’s distillation of it in his excellently-written Reducing Goodhart sequence:
Humans don’t have our values written in Fortran on the inside of our skulls, we’re collections of atoms that only do agent-like things within a narrow band of temperatures and pressures. It’s not that there’s some pre-theoretic set of True Values hidden inside people and we’re merely having trouble getting to them—no, extracting any values at all from humans is a theory-laden act of inference, relying on choices like “which atoms exactly count as part of the person” and “what do you do if the person says different things at different times?”
I expanded upon some of these ideas in a rather long comment I wrote to Wei Dai on the question of values and the orthogonality thesis:
Whenever I see discourse about the values or preferences of beings embedded in a physical universe that goes beyond the boundaries of the domains (namely, low-specificity conversations dominated by intuition) in which such ultimately fake frameworks function reasonably well, I get nervous and confused. I get particularly nervous if the people participating in the discussions are not themselves confused about these matters [...]. Such conversations stretch our intuitive notions past their breaking point by trying to generalize them out of distribution without the appropriate level of rigor and care.
How can we use such large sample spaces when it becomes impossible for limited beings like humans or even AGI to differentiate between those outcomes and their associated events? After all, while we might want an AI to push the world towards a desirable state instead of just misleading us into thinking it has done so, how is it possible for humans (or any other cognitively limited agents) to assign a different value, and thus a different preference ranking, to outcomes that they (even in theory) cannot differentiate (either on the basis of sense data or through thought)?
Moreover, you are explicitly claiming that your values are not indexical, which is rather unlikely in its own right, conflicts very strongly with my intuition (and, I would expect, with that of the vast majority of “regular”, non-rationalist people), and certainly seems to disvalue (or even completely ignore) the relevance of continuous subjective experience. Put more clearly, if I were to be in such a spot, and one of my “copies” were told to choose between being tortured or having the other copy be tortured instead, it would certainly choose the latter option, and I suspect this to be the case for ~ every other person as well (with apologies for a slight generalization from one example).
In any case, the rather abstract “beliefs, memories and values” you solely purport to value fit the category of professed ego-syntonic morals much more so than the category of what actually motivates and generates human behavior, as Steven Byrnes explained in an expectedly outstanding way:
An important observation here is that professed goals and values, much more than actions, tend to be disproportionately determined by whether things are ego-syntonic or -dystonic. Consider: If I say something out loud (or to myself) (e.g. “I’m gonna quit smoking” or “I care about my family”), the actual immediate thought in my head was mainly “I’m going to perform this particular speech act”. It’s the valence of that thought which determines whether we speak those words or not. And the self-reflective aspects of that thought are very salient, because speaking entails thinking about how your words will be received by the listener. By contrast, the contents of that proclamation—actually quitting smoking, or actually caring about my family—are both less salient and less immediate, taking place in some indeterminate future (see time-discounting). So the net valence of the speech act probably contains a large valence contribution from the self-reflective aspects of quitting smoking, and a small valence contribution from the more direct sensory and other consequences of quitting smoking, or caring about my family. And this is true even if we are 100% sincere in our intention to follow through with what we say. (See also Approving reinforces low-effort behaviors, a blog post making a similar point as this paragraph.)
[...]
According to this definition, “values” are likely to consist of very nice-sounding, socially-approved, and ego-syntonic things like “taking care of my family and friends”, “making the world a better place”, and so on.
Also according to this definition, “values” can potentially have precious little influence on someone’s behavior. In this (extremely common) case, I would say “I guess this person’s desires are different from his values. Oh well, no surprise there.”
Indeed, I think it’s totally normal for someone whose “values” include “being a good friend” to actually be a bad friend. So does this “value” have any implications at all? Yes!! I would expect that, in this situation, the person would either feel bad about the fact that they were a bad friend, or deny that they were a bad friend, or fail to think about the question at all, or come up with some other excuse for their behavior. If none of those things happened, then (and only then) would I say that “being a good friend” is not in fact one of their “values”, and if they stated otherwise, then they were lying or confused.
Steve also argues, in my view correctly, that “all valence ultimately flows, directly or indirectly, from innate drives”, which are entirely centered on (indexical, selfish) subjective experience such as pain, hunger, status drive, emotions etc. I see no clear causal mechanism through which something like that could ever make a human (copy) stop valuing its qualia in favor of the abstract concepts you purport to defend.
Yes, you have some sort of shared consciousness with the copy; it’s the same shared consciousness between the you of today and the you that wakes up tomorrow. It doesn’t imply sharing events that happen simultaneously or anything mystical about “sharing consciousness”.
I don’t really buy this because I am unsure how to judge or conceptualize this shared consciousness across time. To sketch out some of my thoughts further, I’ll quote another part of my response to Wei Dai:
The feedback loops implicit in the structure of the brain cause reward and punishment signals to “release chemicals that induce the brain to rearrange itself” in a manner closely analogous to and clearly reminiscent of a continuous and (until death) never-ending micro-scale brain surgery. To be sure, barring serious brain trauma, these are typically small-scale changes, but they nevertheless fundamentally modify the connections in the brain and thus the computation it would produce in something like an emulated state (as a straightforward corollary, how would an em that does not “update” its brain chemistry the same way that a biological being does be “human” in any decision-relevant way?). We can think about a continuous personal identity through the lens of mutual information about memories, personalities etc, but our current understanding of these topics is vastly incomplete and inadequate, and in any case the naive (yet very widespread, even on LW) interpretation of “the utility function is not up for grabs” as meaning that terminal values cannot be changed (or even make sense as a coherent concept) seems totally wrong.
I don’t have time to respond to all of this. I don’t disagree with any particular claim you’ve made there. I value the continuity of experience as much as you; the experience of a pattern continuing down to the most minute detail is more continuous than when we fall asleep, have some half-conscious and fully unconscious states, and wake up as an approximate but less precise continuation of the mental pattern we were when we went to sleep.
The fine distinctions in beliefs and values don’t matter. I agree with all of your statements about the vagaries and confusions about beliefs and values, but they’re not relevant here. That perfectly duplicated pattern carries all of them, stated and unstated, complex and simple. Every memory. There’s nothing else to value, except for continuity in space and time. I’d rather be me waking up in Des Moines in a month than stay where I am and get brain damage (and loss of self) in one minute. I confess that I don’t love going to sleep, but I assume that you also don’t consider it similar to death.
You’ve got a lot of questions to raise, but no apparent alternative. Your mind is a pattern. That pattern is instantiated in matter. Reproduce the matter, you’ve reproduced the mind. That’s not anti-physicalist, it’s just how physics of information processing works. The only alternative is positing a mind-pattern that’s not tightly connected to matter—but that helps explain nothing. The physical world works just fine for instantiating the information processing you need to create a mind that is self-aware and simulates its environment like humans seem to do.
I don’t disagree with anything you’ve said; it’s just not an alternative view. You’re fighting against the counterintuitive conclusion. Sure, I’d rather have a different version of me be tortured; it’s slightly different. But I won’t be happy about it. And my intuition is still drawn toward continuity being important, even though my whole rational mind disagrees. I’ve been back and forth over this extensively, and the conclusion is always the same, ever since I got over the counterintuitive nature of the plural I.
There are two conflicting strong intuitions. One has to give. Which one has to give seems inarguable. Continuity of matter doesn’t matter; continuity of pattern does.
You’ve got a lot of questions to raise, but no apparent alternative.
Non-computationalist physicalism is an alternative to either or both of the computationalist theories. (That performing a certain class of computations is sufficient to be conscious in general, or that performing a specific one is sufficient to be a particular conscious individual. Computation as a theory of consciousness qua awareness isn’t known to be true, and even if it is assumed, it doesn’t directly give you a theory of personal identity.)
The non-existence, or incoherence, of personal identity is another. There doesn’t have to be an answer to “when is a mind me”.
Note that no one except andeslodes is arguing against copying. The issue is when a mind is me, the person typing this, not a copy-of-me.
Reproduce the matter, you’ve reproduced the mind.
Well, that’s only copying.
Consciousness qua awareness and personal identity are easily confused, not least because both are often called “consciousness”.
A computational theory of consciousness is sometimes called on to solve the second problem, the problem of personal identity. But there is no strong reason to think a computational duplicate of you, actually is you, since there is no strong reason to think any other kind of duplicate is.
Qualitative identity is a relationship between two or more things that are identical in all their properties. Numerical identity is the relationship a thing has only to itself. The Olsen twins enjoy qualitative identity; Stefani Germanotta and Lady Gaga have numerical identity. The trick is to jump from qualitative identity to numerical identity, because the claim is that a computational duplicate of you, is you, the very same person.
Suppose you found out you had an identical twin. You would not consider them to be you yourself. Likewise for a biological clone. A computational duplicate would be lower resolution still, so why would it be you? The major problem is that you and your duplicate exist simultaneously in different places, which goes against the intuition that you are a unique individual.
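To make the qualitative/numerical distinction above concrete, here is a minimal Python sketch (an illustrative analogy, not anything from the thread): value equality stands in for qualitative identity, object identity for numerical identity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mind:
    memories: tuple
    beliefs: tuple
    values: tuple

a = Mind(("stepped into the teleporter",), ("patternism",), ("honesty",))
b = Mind(("stepped into the teleporter",), ("patternism",), ("honesty",))

print(a == b)   # True  -- qualitative identity: identical in all their properties
print(a is b)   # False -- no numerical identity: two distinct objects
print(a is a)   # True  -- numerical identity: the relation a thing has only to itself
```

On this analogy, the contested claim is precisely the jump from `a == b` to `a is b`.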
You’re fighting against the counterintuitive conclusion. Sure I’d rather have a different version of me be tortured; it’s slightly different. But I won’t be happy about it. And my intuition is still drawn toward continuity being important, even though my whole rational mind disagrees. I’ve been back and forth over this extensively, and the conclusion is always the same- ever since I got over the counter-intuitive nature of the plural I
You don’t really believe in the plural I theory, or you would have given a different answer to the torture question.
Non-computationalist physicalism doesn’t have to be the claim that material continuity matters and pattern doesn’t: it can be the claim that both do. So you cease to be you if you are destructively cloned, and also if your mind is badly scrambled. No bullet-biting about plural Is is required.
If you’re not arguing against a perfect copy being you, then I don’t understand your position, so much of what follows will probably miss the mark. I had written more but have to cut myself off, since this discussion is taking time without much chance of improving anyone’s epistemics noticeably.
The Olsen twins do not at all have qualitative identity. They have different minds: sets of memories, beliefs, and values. So I just don’t know what your position is. You claim that there doesn’t need to be an answer; that seems false, as you could have to make decisions informed by your belief. You currently value your future self more than other people, so you act like you believe that’s you in a functional sense.
Are you the same person tomorrow? It’s not an identical pattern, but a continuation. I’m saying it’s pretty-much you because the elements you wouldn’t want changed about yourself are there.
If you value your body or your continuity over the continuity of your memories, beliefs, values, and the rest of your mind, that’s fine, but the vast majority will disagree with you on consideration. Those things are what we mean by “me”.
I certainly do believe in the plural I (under the special circumstance I discussed); we must be understanding something differently in the torture question. I don’t have a preference pre-copy for who gets tortured; both identical future copies are me from my perspective before copying. Maybe you’re agreeing with that?
After copying, we’re immediately starting to diverge into two variants of me, and future experiences will not be shared between them.
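As a rough analogy for this divergence (a sketch of the point, not anything the thread depends on), copying a mutable structure in Python yields two objects that are equal at the instant of copying and then come apart as each accumulates its own history:

```python
import copy

me = {"memories": ["stepped into the teleporter"], "values": ["honesty"]}
other_me = copy.deepcopy(me)

# At the moment of copying, the two are indistinguishable in content...
assert me == other_me and me is not other_me

# ...but each instance accumulates its own experiences from then on.
me["memories"].append("stayed on this side")
other_me["memories"].append("stepped out on the far side")

assert me != other_me  # the two variants have diverged
```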
I was addressing a perfect computational copy.
An imperfect but good computational copy is higher resolution, not lower, than a biological twin. It is orders of magnitude more similar to the pattern that makes your mind, even though it is less similar to the pattern that makes your body. What is writing your words is your mind, not your body, so when it says “I” it means the mind.
Noncomputational physicalism sounds like it’s just confused. Physics performs computations and can’t be separated from doing that.
Dual aspect theory is incoherent because you can’t have our physics without doing computation that can create a being that claims and experiences consciousness like we do. Noncomputational physicalism sounds like the same thing.
I concede it’s possible that consciousness includes some magic nonphysical component (that’s not computation or pattern instantiated by physics as a pure result of how physics works). That could change my answer to when a mind is me. I don’t think that’s what you’re arguing for though.
I’ve got to park this here to get other things done. I’ll read any response but it might be a better use of time to restart the discussion more carefully—if you care.
I agree that this conversation, as currently started, is unlikely to lead to anything more productive. As such, I’ll keep my response here brief [1], in case you want to use it as a starting point if you ever intend for us to talk about it again.
Noncomputational physicalism sounds like it’s just confused. Physics performs computations and can’t be separated from doing that.
Dual aspect theory is incoherent because you can’t have our physics without doing computation that can create a being that claims and experiences consciousness like we do.
As I read these statements, they fail to contend with a rather basic map-territory distinction that lies at the core of “physics” and “computation.”
The basic concept of computation at issue here is a feature of the map you could use to approximate reality (i.e., the territory). It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate [2]. This is because, in this restricted and epistemically hobbled ontology, what is given inordinate attention is the abstract classical computation performed by a particular subset of the brain’s electronic circuit. This is what makes it anti-physicalist, as I have explained:
As a general matter, accepting physicalism as correct would naturally lead one to the conclusion that what runs on top of the physical substrate works on the basis of… what is physically there (which, to the best of our current understanding, can be represented through Quantum Mechanical probability amplitudes), not what conclusions you draw from a mathematical model that abstracts away quantum randomness in favor of a classical picture, the entire brain structure in favor of (a slightly augmented version of) its connectome, and the entire chemical make-up of it in favor of its electrical connections.
To make it even more explicit, this interpretation of the computationalist perspective (that the quantum stuff doesn’t matter etc) was confirmed as accurate by its proponents.
So when you talk about a “pattern instantiated by physics as a pure result of how physics works”, you’re not pointing to anything meaningful in the territory, rather only something that makes sense in the particular ontology you have chosen to use to view it through, a frame that I have explained my skepticism of already.
Put differently, “computation” is not an ontologically primitive concept in reality-as-it-is, but only in mathematical approximations of it that make specific assumptions about what does and doesn’t exist. Those assumptions can sometimes be justified in terms of intuitive appeal, expediency of calculation etc, but reifying them as unchallengeable axioms of the universe rather than of your model of it is wrong.

[1] This will be my final comment in this thread, regardless of what happens.
The Olsen twins do not at all have qualitative identity.
Not 100%, but enough to illustrate the concept.
So I just don’t know what your position is.
I don’t have to have a solution to point out the flaws in other solutions. My main point is that a no to soul-theory isn’t a yes to computationalism. Computationalism isn’t the only alternative, or the best.
You claim that there doesn’t need to be an answer;
Some problems are insoluble.
that seems false, as you could have to make decisions informed by your belief.
My belief isn’t necessarily the actual right answer... is it? That’s basic rationality. You need beliefs to act... but beliefs aren’t necessarily true.
And I have no practical need for a theory that can answer puzzles about destructive teleportation and the like.
You currently value your future self more than other people, so you act like you believe that’s you in a functional sense.
Yes. That’s not an argument in favour of the contentious points, like computationalism and Plural Is. If I try to reverse the logic, and treat everything I value as me, I get bizarre results... I am my dog, my country, etc.
Are you the same person tomorrow? It’s not an identical pattern, but a continuation.
Tomorrow-me is a physical continuation, too.
I’m saying it’s pretty-much you because the elements you wouldn’t want changed about yourself are there.
If I accept that pattern is all that matters, I have to face counterintuitive consequences like Plural I’s.
If I accept that material continuity is all that matters, then I face other counterintuitive consequences, like remaining me after having my connectome rewired.
It’s an open philosophical problem. If there were a simple answer, it would have been answered long ago.
“Yer an algorithm, Arry” is a simple answer. Just not a good one.
If you value your body or your continuity over the continuity of your memories, beliefs, values, and the rest of your mind that’s fine,
Fortunately, it’s not an either-or choice.
I certainly do believe in the plural I (under the special circumstance I discussed); we must be understanding something differently in the torture question. I don’t have a preference pre-copy for who gets tortured; both identical future copies are me from my perspective before copying. Maybe you’re agreeing with that?
...and post-copy I have a preference for the copy who isn’t me to be tortured. Which is to say that both copies say the same thing, which is to say that they are only copies. If they regarded themselves as numerically identical, the response “the other one!” would make no sense, and nor would the question. The question presumes a lack of numerical identity, so how can it prove it?
I was addressing a perfect computational copy. An imperfect but good computational copy is higher resolution, not lower, than a biological twin. It is orders of magnitude more similar to the pattern that makes your mind, even though it is less similar to the pattern that makes your body.
You’re assuming pattern continuity matters more than material continuity. There’s no proof of that, and no proof that you have to make an either-or choice.
What is writing your words is your mind, not your body, so when it says “I” it means the mind.
The abstract pattern can’t cause anything without the brain/body.
Noncomputational physicalism sounds like it’s just confused. Physics performs computations and can’t be separated from doing that.
Noncomputational physicalism isn’t the claim that computation never occurs. It’s the claim that the computational abstraction doesn’t capture everything that’s relevant to consciousness/mind. It’s not physically necessary that the computational abstraction captures all the causally relevant information, so it isn’t logically necessary, a fortiori.
Dual aspect theory is incoherent because you can’t have our physics without doing computation that can create a being that claims and experiences consciousness like we do.
Computation is a lossy, high-level abstraction of what a physical system does. It doesn’t fundamentally cause anything in itself.
Now, you can argue that a physical duplicate would make the same claims to be conscious without actually having consciousness, and that’s literally a p-zombie argument.
But we do have consciousness. The insight of DAT is that “reports of consciousness have a physical/computational basis” isn’t exclusive of “reports of consciousness are caused by consciousness”. You can have your cake and eat it!
Of course, the above is all about consciousness qua awareness, not consciousness qua personal identity.
I concede it’s possible that consciousness includes some magic nonphysical component (that’s not computation or pattern instantiated by physics as a pure result of how physics works).
If it’s physical, why call it magical?
It’s completely standard that all computations run on a substrate. If you want to say that all physics is computation, OK, but then all computation is physics. You then no longer have plural I’s, because physics doesn’t allow the selfsame object to have multiple instances.
Do you think a successful upload would say things like “I’m still me!” and think thoughts like “I’m so glad I paid extra to give myself cool virtual environment options”? That seems like an inevitability if the causal patterns of your mind were captured. And it would be tough to disagree with a thing claiming up and down it’s you, citing your most personal memories as evidence.
It’s easy to disagree if there is another explanation, which there is: a functional duplicate will behave the same, because it’s a functional duplicate, whether it’s conscious or not, whether it’s you or not.
I disagree that your mind is “a pattern instantiated in matter.” Your mind is the matter. It’s precisely the assumption that the mind is separable from the matter that I would characterize as non-physicalist.
Terminology aside, I think if you examine this carefully it’s incoherent.
Do you think a successful upload would say things like “I’m still me!” and think thoughts like “I’m so glad I paid extra to give myself cool virtual environment options”? That seems like an inevitability if the causal patterns of your mind were captured. And it would be tough to disagree with a thing claiming up and down it’s you, citing your most personal memories as evidence.
A successful upload (assuming this is physically possible, which is not a settled question) would remember my same memories and have my same personality traits; however, that would not mean my mind had been unwound from the matter and transferred to it, but rather that my mind had been duplicated in silico.
Yes, it’s a duplicate which will also be you from your current perspective. If you duplicated your car tomorrow you’d have two cars; if you duplicate your mind tomorrow you need to plan on there being two yous tomorrow.
No; it will remember my life but I will not go on to experience its experiences. (Similarly, if I duplicate my car and then destroy the original, its engine does not continue on to fire in the duplicate; the duplicate has an engine of its own, which may be physically identical but is certainly not the same object).
Okay, so would you say that the you of today goes on to experience the you-of-tomorrow’s experiences? I think the relationship is the same to a perfect duplicate. The duplicate is no less you than the you of tomorrow is. They are separate people from their perspective after duplication, but almost-the-same-person to a much greater degree than twins.
You (pre-duplication) will go on to have two separate sets of experiences. Both are you from your current perspective before duplication; you should give them equal consideration in your decisions, since the causal relationship between you and the duplicate is the same as the causal relationship between you and your self of tomorrow.
Consider the case where the duplicate is teleported to your location and vice versa during duplication. Then consider your locations simply being swapped while you’re asleep. And consider that you wouldn’t care a whit if every molecule of your body was Theseus-swapped one by one for identical molecules in identical locations and roles while you slept.
No; I, pre-duplication, exist in a single body, and will not post-duplication have my consciousness transferred over to run in two. There will just be an identical copy. If the original body dies, one of me will also die.
The causal relationship between me and myself tomorrow is not the same as the causal relationship between me and my duplicate tomorrow, because one of those is a physical object which has continuity over time and one of those is a similar physical object which was instantiated de novo in a different location when the teleporter was run.
The mind is not a program which runs on the meat computer of the brain and could in principle be transferred to a thumb drive if we worked out the format conversions; the mind is the meat of the brain.
Realistically I doubt you’d even need to be sure it works, just reasonably confident. Folks step on planes all the time and those do on rare occasion fail to deliver them intact at the other terminal.
Within this framework, whether or not you “feel that continuity” would mostly be a fact about the ontology your mindstate uses when thinking about teleportation. Everything in this post could be accurate and none of it would be incompatible with you having an existential crisis upon being teleported, freaking out upon meeting yourself, etc.
Nor does anything here seem to make a value judgement about what the copy of you should do if told they’re not allowed to exist. Attempting revolution seems like a perfectly valid response; self defense is held as a fairly basic human right after all. (I’m shocked that isn’t already the plot of a sci-fi story.)
It would also be entirely possible for both of your copies to hold the conviction that they’re the one true you, their experiences from where they sit being entirely compatible with that belief. (Definitely the plot of at least one Star Trek episode.)
There’s not really any pressure currently to have thinking about mind copying that’s consistent with every piece of technology that could ever conceivably be built. There’s nothing that forces minds to have accurate beliefs about anything that won’t kill them or wouldn’t have killed their ancestors in fairly short order. Which is to say mostly that we shouldn’t expect to get accurate beliefs about weird hypotheticals often without having changed our minds at least once.