I find myself strongly disagreeing with what is being said in your post. Let me preface by saying that I’m mostly agnostic with respect to the possible “explanations” of consciousness etc, but I think I fall squarely within camp 2. I say mostly because I lean moderately towards physicalism.
First, an attempt to describe my model of your ontology:
You implicitly assume that consciousness / subjective experience can be reduced to a physical description of the brain, which presumably you model as a classical (as opposed to quantum) biological electronic circuit. Physically, to specify some “brain-state” (which I assume is essentially the equivalent of a “software snapshot” in a classical computer) you just need to specify a circuit connectivity for the brain, along with the currents and voltages between the various parts of the circuit (between the neurons let’s say). This would track with your mentions of reductionism and physicalism and the general “vibe” of your arguments. In this case I assume you treat conscious experience roughly as “what it feels like” to be software that is self-referential on top of taking in external stimuli from sensors. This software is instantiated on a biological classical computer instead of a silicon-based one.
With this in mind, we can revisit the teleporter scenario. Actually, let’s consider a copier instead of a teleporter, in the sense that you don’t destroy the original after finishing the procedure. Then, once a copy is made, you have two physical brains that have the same connectivity, the same currents and the same voltages between all appropriate positions. Therefore, based on the above ontology, the brains are physically the same in all the ways that matter, and thus the software / the experience is also the same. (Since software is just an abstract “grouping” which we use to refer to the current physical state of the hardware.)
Assuming this captures your view, let me move on to my disagreements:
My first issue with your post is that this initial ontological assumption is neither mentioned explicitly nor motivated. Nothing in your post can be used as proof of this initial assumption. On the contrary, the teleporter argument, for example, becomes simply a tautology if you start from your premise—it cannot be used to convince someone that doesn’t already subscribe to your views on the topic. Even worse, it seems to me that your initial assumption forces you to contort (potential) empirical observation to your ontology, instead of doing the opposite.
To illustrate, let’s assume we have the copier—say it’s a room you walk into, you get scanned, and then a copy is reconstructed in some other room far away. Since you make no mention of quantum mechanics, I guess this can be a classical copy, in the sense that it can copy essentially all of the high-level structure, but it cannot literally copy the positions of specific electrons, as this is physically impossible anyway. Nevertheless, this copier can be considered “powerful” enough to copy the connectivity of the brain and the associated currents and voltages. Now, what would be the experience of getting copied, seen from a first-person, “internal”, perspective? I am pretty sure it would be something like: you walk into the room, you sit there, you hear, say, the scanner working for some time, it stops, you walk out. From my agnostic perspective, if I were the one to be scanned, it seems like nothing special would have happened to me in this procedure. I didn’t feel anything weird, I didn’t feel my “consciousness split into two” or anything. Namely, if I consider this procedure as an empirical experiment, from my first-person perspective I don’t get any new / unexpected observation compared to, say, just sitting in an ordinary room. Even if I were to go and find my copy, my experience would again be like meeting a different person who just happens to look like me and who claims to have similar memories up to the point when I entered the copying room. There would be no way to verify or to view things from their first-person perspective.
At this point, we can declare by fiat that my copy and I are the same person / have the same consciousness because our brains, seen as classical computers, have the same structure, but this experiment will not have provided any more evidence to me that this should be true. On the contrary, I would be wary to, say, kill myself or to be destroyed after the copying procedure, since no change will have occurred to my first-person perspective, and it would thus seem less likely that my “experience” would somehow survive because of my copy.
Now you can insist that philosophically it is preferable to assume that brains are classical computers etc., in order to retain physicalism, which is preferable to souls and Cartesian dualism and other such things. Personally, I prefer to remain undecided, especially since making the assumptions brain = classical hardware, consciousness = experience as software leads to weird results. It would force me to conclude that the copy is me even though I cannot access their first-person perspective (which defeats the purpose), and it would also force me to accept that even a copy where the “circuit” is made of water pipes and pumps, or gears and levers, also has an actual, first-person experience as “me”, as long as the appropriate computations are being carried out.
One curious case where physicalism could be saved and all these weird conclusions could be avoided would be if somehow there is some part of the brain which does something quantum, and this quantum part is the essential ingredient for having a first person experience. The essence would be that, because of the no-cloning theorem, a quantum-based consciousness would be physically impossible to copy, even in theory. This would get around all the problems which come with the copyability implicit in classical structures. The brain would then be a hybrid of classical and quantum parts, with the classical parts doing most of the work (since neural networks which can already replicate a large part of human abilities are classical) with some quantum computation mixed in, presumably offering some yet unspecified fitness advantage. Still, the consensus is that it is improbable that quantum computation is taking place in the brain, since quantum states are extremely “fragile” and would decohere extremely rapidly in the environment of the brain...
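For readers who want the actual content of the no-cloning claim being leaned on here, the standard proof is short. This is textbook quantum information, nothing brain-specific; a sketch:

```latex
% Sketch of the no-cloning theorem. Suppose a single unitary U could copy an
% arbitrary unknown state onto a blank register:
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle .
% Apply U to two states |\psi\rangle and |\phi\rangle and compare inner
% products of inputs and outputs. Unitaries preserve inner products, so
\langle \psi | \phi \rangle
  = \langle \psi | \phi \rangle \, \langle \psi | \phi \rangle
  = \langle \psi | \phi \rangle^{2},
% which forces \langle\psi|\phi\rangle \in \{0, 1\}: any two clonable states
% must be either identical or orthogonal. Hence no device can clone
% *arbitrary* unknown quantum states, although copying a known set of
% mutually orthogonal (i.e. effectively classical) states is fine.
```

The last remark is why the tension is specifically about quantum state: classical information, being encodable in orthogonal states, copies freely.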
My first issue with your post is that this initial ontological assumption is neither mentioned explicitly nor motivated. Nothing in your post can be used as proof of this initial assumption.
There are always going to be many different ways someone could object to a view. If you were a Christian, you’d perhaps be objecting that the existence of incorporeal God-given Souls is the real crux of the matter, and if I were intellectually honest I’d be devoting the first half of the post to arguing against the Christian Soul.
Rather than trying to anticipate these objections, I’d rather just hear them stated out loud by their proponents and then hash them out in the comments. This also makes the post less boring for the sorts of people who are most likely to be on LW: physicalists and their ilk.
Now, what would be the experience of getting copied, seen from a first-person, “internal”, perspective? I am pretty sure it would be something like: you walk into the room, you sit there, you hear, say, the scanner working for some time, it stops, you walk out. From my agnostic perspective, if I were the one to be scanned, it seems like nothing special would have happened to me in this procedure. I didn’t feel anything weird, I didn’t feel my “consciousness split into two” or anything.
Why do you assume that you wouldn’t experience the copy’s version of events?
The un-copied version of you experiences walking into the room, sitting there, hearing the scanner working, and hearing it stop; then that version of you experiences walking out. It seems like nothing special happened in this procedure; this version of you doesn’t feel anything weird, and doesn’t feel like their “consciousness split into two” or anything.
The copied version of you experiences walking into the room, sitting there, hearing the scanner working, and then an instantaneous experience of (let’s say) feeling like you’ve been teleported into another room—you’re now inside the simulation. Assuming the simulation feels like a normal room, it could well seem like nothing special happened in this procedure—it may feel like blinking and seeing the room suddenly change during the blink, while you yourself remain unchanged. This version of you doesn’t necessarily feel anything weird either, and they don’t feel like their “consciousness split into two” or anything.
It’s a bit weird that there are two futures, here, but only one past—that the first part of the story is the same for both versions of you. But so it goes; that just comes with the territory of copying people.
If you disagree with anything I’ve said above, what do you disagree with? And, again, what do you mean by saying you’re “pretty sure” that you would experience the future of the non-copied version?
Namely, if I consider this procedure as an empirical experiment, from my first-person perspective I don’t get any new / unexpected observation compared to, say, just sitting in an ordinary room. Even if I were to go and find my copy, my experience would again be like meeting a different person who just happens to look like me and who claims to have similar memories up to the point when I entered the copying room. There would be no way to verify or to view things from their first-person perspective.
Sure. But is any of this Bayesian evidence against the view I’ve outlined above? What would it feel like, if the copy were another version of yourself? Would you expect that you could telepathically communicate with your copy and see things from both perspectives at once, if your copies were equally “you”? If so, why?
On the contrary, I would be wary to, say, kill myself or to be destroyed after the copying procedure, since no change will have occurred to my first-person perspective, and it would thus seem less likely that my “experience” would somehow survive because of my copy.
Shall we make a million copies and then take a vote? :)
I agree that “I made a non-destructive software copy of myself and then experienced the future of my physical self rather than the future of my digital copy” is nonzero Bayesian evidence that physical brains have a Cartesian Soul that is responsible for the brain’s phenomenal consciousness; the Cartesian Soul hypothesis does predict that data. But the prior probability of Cartesian Souls is low enough that I don’t think it should matter.
You need some prior reason to believe in this Soul in the first place; the same as if you flipped a coin, it came up heads, and you said “aha, this is perfectly predicted by the existence of an invisible leprechaun who wanted that coin to come up heads!”. Losing a coinflip isn’t a surprising enough outcome to overcome the prior against invisible leprechauns.
and it would also force me to accept that even a copy where the “circuit” is made of water pipes and pumps, or gears and levers, also has an actual, first-person experience as “me”, as long as the appropriate computations are being carried out.
Why wouldn’t it? What do you have against water pipes?
First off, would you agree with my model of your beliefs? Would you consider it an accurate description?
Also, let me make clear that I don’t believe in Cartesian souls. I, like you, lean towards physicalism; I just don’t commit to the explanation of consciousness based on the idea of the brain as a **classical** electronic circuit. I don’t fully dismiss it either, but I think it is worse on philosophical grounds than assuming that there is some (potentially minor) quantum effect going on inside the brain that is an integral part of the explanation for our conscious experience. However, even this doesn’t feel fully satisfying to me, and this is why I say that I am agnostic. When responding to my points, you can assume that I am a physicalist, in the sense that I believe consciousness can probably be described using physical laws, with the added belief that these laws **may** not be fully understandable by humans. I mean this in the same way that a cat, for example, would not be able to understand the mechanism giving rise to consciousness, even if that mechanism turned out to be based on the laws of classical physics (for example, if you can just explain consciousness as software running on classical hardware).
To expand upon my model of your beliefs, it seems to me that what you do is that you first reject Cartesian souls and other such things on philosophical grounds, and you thus favour physicalism. I agree on this. However, I don’t see why you immediately assume that physicalism means that your consciousness must be a result of classical computation. It could be the result of quantum computation. It could be something even subtler in some deeper theory of physics. At this point you may say that a quantum explanation may be more “unlikely” than a classical one, but I think we can both agree that the “absurdity distance” between the two is much smaller than, say, that between a classical explanation and a soul-based one, and thus we now have to weigh the two options much more carefully, since we cannot dismiss one in favour of the other as easily. What I would like to argue is that a quantum-based consciousness is philosophically “nicer” than a classical one. Such an explanation does not violate physicalism, while at the same time rendering a lot of points of your post invalid.
Let’s start by examining the copier argument again, but now with the assumption that conscious experience is the result of quantum effects in the brain, and see where it takes us. In this case, to fully copy a consciousness from one place to another you would have to copy an unknown quantum state. This is physically impossible even in theory, based on the no-cloning theorem. Thus the “best” copier that you can have is the copier from my previous comment, which just copies the classical connectivity of the brain and all the currents and voltages etc., but which now fails to copy the part that is integral to **your** first-person experience. So what would be your first-person experience if you were to enter the room? You would just go in, hear the scanner work, get out. You can do this again and again and again and always find yourself experiencing getting out of the same initial room. At the same time, the copier does create copies of you, but they are new “entities” that share the same appearance as you and which would approximate to some (probably high) degree your external behaviour. These copies may or may not have their own first-person experience (and we can debate this further), but this does not matter for our argument. Even if they have a first-person experience, it would be essentially the same as the copier just creating entirely new people while leaving your first-person experience unchanged. In this way, you can step into the room with zero expectation that you may walk out of a room on the other side of the copier, in the same way that you don’t expect to suddenly find yourself in some random stranger’s body while going about your daily routine. Even better, this belief is nicely consistent with physicalism, while still not violating our intuitions that we have private and uncopiable subjective experiences.
It also doesn’t force us to believe that a bunch of water pipes or gears functioning as a classical computer can ever have our own first person experience. Going even further, unknown quantum states may not be copyable but they are transferable (see quantum teleportation etc), meaning that while you cannot make a copier you can make a transporter, but you always have to be at only one place at each instant.
Let me emphasize again that I am not arguing **for** quantum consciousness as a solution. I am using it as an example that a “philosophically nicer” physicalist option exists compared to what I assume you are arguing for. From this perspective, I don’t see why you are so certain about the things you write in your post. In particular, you make a lot of arguments based on the properties of “physics”, which in reality are properties of classical physics together with your assumption that consciousness must be classical. When I said that I take issue with the fact that you start from an unstated assumption, I didn’t expect you to argue against Cartesian dualism. I expected you to start from physicalism and then motivate why you chose to only consider classical physics. Otherwise, the argumentation in your post seems lacking, even if I start from the physicalist position. To give one example of this:
You say that “there isn’t an XML tag in the brain saying `this is a new brain, not the original`”. By this I assume you mean that the physical state of the brain is fungible, it is copyable, there is nothing to serve as a label. But this is not a feature of physics in general. An unknown quantum state cannot be copied; it is not fungible. My model of what you mean: “(I assume that) first-person experience can be fully attributed to some structure of the brain as a classical computer. It can be fully described by specifying the connectivity of the neurons and the magnitudes of the currents and voltages between each point. Since (I assume) consciousness physically manifests as a classical pattern, and since classical patterns can be copied, then by definition there can be many copies of ‘the same’ consciousness.” Thus, what you write about XML tags is not an argument for your position: physics does not impose a fungible substrate for consciousness on you; it is just a manifestation of your assumption. It’s circular. A lot of your arguments which invoke “physics” are like that.
Why would the laws of physics conspire to vindicate a random human intuition that arose for unrelated reasons?
We do agree that the intuition arose for unrelated reasons, right? There’s nothing in our evolutionary history, and no empirical observation, that causally connects the mechanism you’re positing and the widespread human hunch “you can’t copy me”.
If the intuition is right, we agree that it’s only right by coincidence. So why are we desperately searching for ways to try to make the intuition right?
It also doesn’t force us to believe that a bunch of water pipes or gears functioning as a classical computer can ever have our own first person experience.
Why is this an advantage of a theory? Are you under the misapprehension that “hypothesis H allows humans to hold on to assumption A” is a Bayesian update in favor of H even when we already know that humans had no reason to believe A? This is another case where your theory seems to require that we only be coincidentally correct about A (“sufficiently complex arrangements of water pipes can’t ever be conscious”), if we’re correct about A at all.
One way to rescue this argument is by adding in an anthropic claim, like: “If water pipes could be conscious, then nearly all conscious minds would be instantiated in random dust clouds and the like, not in biological brains. So given that we’re not Boltzmann brains briefly coalescing from space dust, we should update that giant clouds of space dust can’t be conscious.”
But is this argument actually correct? There’s an awful lot of complex machinery in a human brain. (And the same anthropic argument seems to suggest that some of the human-specific machinery is essential, else we’d expect to be some far-more-numerous observer, like an insect.) Is it actually that common for a random brew of space dust to coalesce into exactly the right shape, even briefly?
You’re missing the bigger picture and pattern-matching in the wrong direction. I am not saying the above because I have a need to preserve my “soul” due to misguided intuitions. On the contrary, the reason for my disagreement is that I believe you are not staring into the abyss of physicalism hard enough. When I said I’m agnostic in my previous comment, I said it because physics and empiricism lead me to consider reality as more “unfamiliar” than you do (assuming that my model of your beliefs is accurate). From my perspective, your post and your conclusions are written with an unwarranted degree of certainty, because imo your conception of physics and physicalism is too limited. Your post makes it seem like your conclusions are obvious because “physics” makes them the only option, but they are actually a product of implicit and unacknowledged philosophical assumptions, which (imo) you inherited from intuitions based on classical physics. By this I mean the following:
It seems to me that when you think about physics, you are modeling reality (I intentionally avoid the word “universe” because it evokes specific mental imagery) as a “scene” with “things” in it. You mentally take the vantage point of a disembodied “observer/narrator/third person” observing the “things” (atoms, radiation etc.) moving, interacting according to specific rules and coming together to create forms. However, you have to keep in mind that this conception of reality as a classical “scene” that is “out there” is first and foremost a model, one that is formed from your experiences obtained by interacting specifically with classical objects (billiard balls, chairs, water waves etc.). You can extrapolate from this model and say that reality truly is like that, but the map is not the territory, so you at least have to keep track of this philosophical assumption. And it is an assumption, because “physics” doesn’t force you to conclude such a thing. Seen through a cautious, empirical lens, physics is a set of rules that allows you to predict experiences. This set of rules is produced exclusively by distilling and extrapolating from first-person experiences. It could be (and it probably is) the case that reality is ontologically far weirder than we can conceive, but that it still leads to the observed first-person experiences. In this case, physics works fine to predict said experiences, and it also works as an approximation of reality, but this doesn’t automatically mean that our (merely human) conceptual models are reality. So, if we want to be epistemically careful, we shouldn’t think “An apple is falling” but instead “I am having the experience of seeing an apple fall”, and we can add extra philosophical assumptions afterwards.
This may seem like I am philosophizing too much and being too strict, but it is extremely important to properly acknowledge subjective experience as the basis for our mental models, including that of the observer-independent world of classical physics. This is why the hard problem of consciousness is called “hard”. And if you think that it should “obviously” be the other way around, meaning that this “scene” mental model is more fundamental than your subjective experiences, maybe you should reflect on why you developed this intuition in the first place. (It may be through extrapolating too much from your (first-person, subjective) experiences with objects that seemingly possess intrinsic, observer-independent properties, like the classical objects of everyday life.)
At this point it should be clearer why I am disagreeing with your post. Consciousness may be classical, it may be quantum, it may be something else. I have no issue with not having a soul, and I don’t object to the idea of a bunch of gears and levers instantiating my consciousness merely because I find it a priori “preposterous” or “absurd” (though it is not a strong point of your theory). My issue is not with your conclusion, it’s precisely with your absolute certainty, which imo you support with circular argumentation based on weak premises. And I find it confusing that your post is receiving so much positive attention on a forum where epistemic hygiene is supposedly of paramount importance.
So in reading your comments on this post, I feel like I am reading comments made by a clone of my own mind, though you articulate my views better than I can. This particular comment of yours doesn’t get the attention it deserves, I think. It was pretty revolutionary for me when I learned to think of almost every worldview as a model of reality. It’s most revolutionary when one realizes that even the (arguably outdated) Newtonian view falls into this category of model. It really highlights that actual reality is, at the least, very hard to get at. This is an especially severe issue with regard to consciousness.
It may be through extrapolating too much from your (first-person, subjective) experiences with objects that seemingly possess intrinsic, observer-independent properties, like the classical objects of everyday life.
Are you trying to say that quantum physics provides evidence that physical reality is subjective, with conscious observers having a fundamental role? Rob implicitly assumes the position advocated by The Quantum Physics Sequence, which argues that reality exists independently of observers and that quantum phenomena don’t suggest otherwise. It’s just one of the many presuppositions he makes that are commonly shared on here. If that’s your main objection, you should make that clear.
I would say that it is irrelevant for the points the post/Rob is trying to make whether consciousness is classical or quantum, given that conscious experience has, AFAIK, never been reported to be ‘quantum’ (i.e. that we don’t seem to experience superpositions or entanglement) and that we already have straightforward classical examples of lack of conscious continuity (namely: sleeping).
In the case of sleeping and waking up it is already clear that the currently awake consciousness is modeling its relation to past consciousnesses in that body through memories alone. Even without teleporters, copiers, or other universes coming into play, this connection is very fragile. How sure can a consciousness be that it is the same as the day before or as one during lucid parts of dreams? If you add brain malfunctions such as psychoses or dissociative drugs such as ketamine to the mix, the illusion of conscious continuity can disappear completely quite easily.
I like to word it like this: A consciousness only ever experiences what the brain that produces it can physically sense or synthesize.
With that as a starting point, modeling what will happen in the various thought experiments and analyses of conscious experience becomes something like this: “Given that there is a brain there, it will produce a consciousness, which will remember what is encoded in the structure of that brain and which will experience what that brain senses and synthesizes in that moment.”
There is no assumption that consciousness is classical in that, I believe. There is also no assumption of continuity in that, which I think is important as in my opinion that assumption is quite shaky and misdirects many discussions on consciousness. I’d say that the value in the post is in challenging that assumption.
In the case of sleeping and waking up it is already clear that the currently awake consciousness is modeling its relation to past consciousnesses in that body through memories alone.
The currently awake consciousness is located in the brain, which has physical continuity with its previous states. You don’t wake up as a different person because “you” are the brain (possibly also the rest of the body, depending on how it affects cognition, but IDK) and the brain does not cease to function when you fall asleep.
I agree on the physical continuity of the brain, but I don’t think this transfers to continuity of the consciousness or its experience. It is defining “you” as that physical brain, rather than the conscious experience itself. It’s like saying that two waves are the same because they are produced by the same body of water.
Imagine significant modifications to your brain while you are asleep in such a way that your memories are vastly different, so much as to represent another person. Would the consciousness that is created on waking up experience a connection to the consciousness that that brain produced the day(s) before or to the manufactured identity?
Even you, now, without modifications, can’t say with certainty that your ‘yesterday self’ was experienced by the same consciousness as you are now (in the sense of identity of the conscious experience). It feels that way as you have memories of those experiences, but it may have been experienced by ‘someone else’ entirely. You have no way of discerning that difference (nor does anyone else).
The conscious experience is not extricable from the physical brain; it has your personality because the personality that you are is the sum total of everything in your brain. The identity comes from the brain; if it were somehow possible to separate consciousness from the rest of the mind, that consciousness wouldn’t still be you, because you’re the entire mind.
I would… not consider the sort of brain modification you’re describing to preserve physical continuity in the relevant sense? It sounds like it would, to create the described effects, involve significant alterations in portions of the brain wherein (so to speak) your identity is stored, which is not what normally happens when people sleep.
I think we are in agreement that the consciousness is tied to the brain. Claiming equivalency is not warranted, though: The brain of a dead person (very probably, I’m sure you’d agree) contains no consciousness. Let’s not dwell on this, though: I am definitely not claiming that consciousness exists outside of the brain, just that asserting physical continuity of the brain is not enough by itself to show continuity of conscious experience.
With regard to the modifications: Your line of reasoning runs into the classic issues of philosophical identity, as shown by the Ship of Theseus thought experiment or simpler yet, the Sorites paradox. We can hypothesize every amount of alterations from just modifying one atom to replacing the entire brain. Given your position, you’d be forced to choose an arbitrary amount of modifications that breaks the continuity and somehow changes consciousness A-modified-somewhat into consciousness B (or stated otherwise: from ‘you waking up a somewhat changed person’ to ‘someone else waking up in your body’).
Approaching conscious experience without the assumption of continuity but from the moment it exists in does not run into this problem.
(Assuming a frame of materialism, physicalism, empiricism throughout even if not explicitly stated)
Some of the scenarios you describe as objectionable would reasonably be described as emulation in an environment that you would probably find disagreeable even within the framework of this post. Being emulated by a contraption of pipes and valves that’s worse in every way than my current wetware is, yeah, disagreeable even if it’s kinda me. Making my hardware less reliable is bad. Making me think slower is bad. Making it easier for others to tamper with my sensors is bad. All of these things are bad even if the computation faithfully represents me otherwise.
I’m mostly in the same camp as Rob here, but there’s plenty left to worry about in these scenarios even if you don’t think brain-quantum-special-sauce (or even weirder new physics) is going to make people-copying fundamentally impossible. Being an upload of you that now needs to worry about being paused at any time or having false sensory input supplied is objectively a worse position to be in.
The evidence does seem to lean in the direction that non-classical effects in the brain are unlikely: neurons are just too big for quantum effects between neurons, and even if there were quantum effects within neurons, it’s hard to imagine them being stable for even as long as a single train of thought. The copy losing their train of thought and having momentary confusion doesn’t seem to reach the bar where they don’t count as the same person? And yet-weirder new physics mostly requires experiments we haven’t thought to do yet, or experiments in regimes we’ve not yet been able to test. Whereas the behavior of things at STP in water is about as central to things-Science-has-pinned-down as you’re going to get.
You seem to hold that the universe maybe still has a lot of important surprises in store, even within the central subject matter of century old fields? Do you have any kind of intuition pump for that feeling there’s still that many earth-shattering surprises left (while simultaneously holding empiricism and science mostly work)? My sense of where there’s likely to be surprises left is not quite so expansive and this sounds like a crux for a lot of people. Even as much of a shock as qm was to physics, it didn’t invalidate much if any theory except in directly adjacent fields like chemistry and optics. And working out the finer points had progressively more narrower and shorter reaching impact. I can’t think of examples of surprises with a larger blast radius within the history of vaguely modern science. Findings of odd as yet unexplained effects pretty consistently precedes attempts at theory. Empirically determined rules don’t start working any worse when we realize the explanation given with them was wrong.
Keep in mind that society holds that you’re still you even after a non-trivial amount of head trauma. So whatever imperfections in copying your unknown unknowns cause, they’d have to be both something we’ve never noticed before in a highly studied area and something more disruptive than getting clocked in the jaw, which seems a tall order.
Keep in mind also that the description(s) of computation that computer science has worked out is extremely broad and far from limited to just electronic circuits. Electronics are pervasive because we have as a society sunk the world GDP (possibly several times over) into figuring out how to make them cheaply at scale. Capital investment is the only thing special about computers realized in silicon; computer science makes no such distinction. The notion of computation is so broad that there’s little if any room to conceive of an agent that’s doing something that can’t be described as computation. Likewise the equivalence proofs are quite broad: it can be arbitrarily expensive to translate across architectures, but within each class of computers, computation is computation, and that emulation is possible has been proven.
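To make the substrate-independence point concrete, here is a toy sketch (my own illustration, not from the comment above): the same abstract program, a tiny counter machine invented for this example, executed by two structurally different interpreters. The "hardware" differs completely; the computation, and hence the result, does not.

```python
# A toy counter-machine program: multiply r0 by 3 using only
# inc / dec / jump-if-zero. The instruction set and program are
# invented for illustration.
PROGRAM = [
    ("jz", 0, 6),   # 0: while r0 != 0:
    ("dec", 0),     # 1:   r0 -= 1
    ("inc", 1),     # 2:   r1 += 3
    ("inc", 1),     # 3:
    ("inc", 1),     # 4:
    ("jz", 2, 0),   # 5:   unconditional jump back (r2 is always 0)
    ("halt",),      # 6: done; answer is in r1
]

def run_dict_machine(program, r0):
    """Substrate A: registers stored as integers in a dict."""
    regs, pc = {0: r0, 1: 0, 2: 0}, 0
    while True:
        op = program[pc]
        if op[0] == "halt":
            return regs[1]
        if op[0] == "inc":
            regs[op[1]] += 1
        elif op[0] == "dec":
            regs[op[1]] -= 1
        elif op[0] == "jz" and regs[op[1]] == 0:
            pc = op[2]
            continue
        pc += 1

def run_tape_machine(program, r0):
    """Substrate B: registers stored as unary tallies on lists."""
    tape, pc = [["#"] * r0, [], []], 0
    while True:
        op = program[pc]
        if op[0] == "halt":
            return len(tape[1])
        if op[0] == "inc":
            tape[op[1]].append("#")
        elif op[0] == "dec":
            tape[op[1]].pop()
        elif op[0] == "jz" and not tape[op[1]]:
            pc = op[2]
            continue
        pc += 1

# Same program, different "hardware", identical results.
assert all(run_dict_machine(PROGRAM, n) == run_tape_machine(PROGRAM, n) == 3 * n
           for n in range(20))
```

This is of course nothing like a proof, just a picture of what the equivalence theorems are saying: either interpreter could in turn be emulated by the other (at some cost in speed), which is the sense in which computation doesn't care whether it runs on silicon, wetware, or pipes and valves.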
All of your examples are doing that thing where you have a privileged observer position separate and apart from anything that could be seeing or thinking within the experiment. You-the-thinker can’t simply step into the thought experiment. You-the-thinker can of course decide where to attach the camera by fiat, but that doesn’t tell us anything about the experiment, just about you and what you find intuitive.
Suppose for sake of argument your unknown unknowns mean your copy wakes up with a splitting headache and amnesia for the previous ~12 hours as if waking up from surgery. They otherwise remember everything else you remember and share your personality such that no one could notice a difference (we are positing a copy machine that more or less works). If they’re not you they have no idea who else they could be, considering they only remember being you.
The above doesn’t change much for me, and I don’t think I’d concede much more without saying you’re positing a machine that just doesn’t work very well. It’s easy for me to imagine it never being practical to copy or upload a mind, or having modest imperfections or minor differences in experience, especially at any kind of scale. Or simply being something society at large is never comfortable pursuing. It’s a lot harder to imagine it being impossible even in principle with what we already know, or can already rule out with fairly high likelihood. I don’t think most of the philosophy changes all that much if you consider merely very good copying (your friends and family can’t tell the difference; knows everything you know) vs perfect copying.
The most bullish folks on LLMs seem to think we’re going to be able to make copies good enough to be useful to businesses just off all your communications. I’m not nearly so impressed with the capabilities I’ve seen to date and it’s probably just hype. But we are already getting into an uncanny valley with the (very) low fidelity copies current AI tech can spit out—which is to say they’re already treading on the outer edge of peoples’ sense of self.
To illustrate, let’s assume we have the copier—say it’s a room you walk into, you get scanned and then a copy is reconstructed in some other room far away. Since you make no mention of quantum, I guess this can be a classical copy, in the sense that it can copy essentially all of the high-level structure, but it cannot literally copy the positions of specific electrons, as this is physically impossible anyway. Nevertheless, this copier can be considered “powerful” enough to copy the connectivity of the brain and the associated currents and voltages. Now, what would be the experience of getting copied, seen from a first-person, “internal” perspective? I am pretty sure it would be something like: you walk into the room, you sit there, you hear, say, the scanner working for some time, it stops, you walk out. From my agnostic perspective, if I were the one to be scanned, it seems like nothing special would have happened to me in this procedure. I didn’t feel anything weird, I didn’t feel my “consciousness split into two” or anything. Namely, if I consider this procedure as an empirical experiment, from my first-person perspective I don’t get any new / unexpected observation compared to, say, just sitting in an ordinary room. Even if I were to go and find my copy, my experience would again be like meeting a different person who just happens to look like me and who claims to have memories similar to mine up to the point when I entered the copying room. There would be no way to verify or to view things from their first-person perspective.
At this point, we can declare by fiat that my copy and I are the same person / have the same consciousness because our brains, seen as classical computers, have the same structure, but this experiment will not have provided any more evidence to me that this should be true. On the contrary, I would be wary to, say, kill myself or be destroyed after the copying procedure, since no change will have occurred to my first-person perspective, and it would thus seem less likely that my “experience” would somehow survive because of my copy.
Now you can insist that philosophically it is preferable to assume that brains are classical computers etc., in order to retain physicalism, which is preferable to souls and Cartesian dualism and other such things. Personally, I prefer to remain undecided, especially since making the assumption brain = classical hardware, consciousness = experience as software leads to weird results. It would force me to conclude that the copy is me even though I cannot access their first-person perspective (which defeats the purpose), and it would also force me to accept that even a copy where the “circuit” is made of water pipes and pumps, or gears and levers, would also have an actual, first-person experience as “me”, as long as the appropriate computations are being carried out.
One curious case where physicalism could be saved and all these weird conclusions could be avoided would be if somehow there were some part of the brain which does something quantum, and this quantum part were the essential ingredient for having a first-person experience. The essence would be that, because of the no-cloning theorem, a quantum-based consciousness would be physically impossible to copy, even in theory. This would get around all the problems which come with the copyability implicit in classical structures. The brain would then be a hybrid of classical and quantum parts, with the classical parts doing most of the work (since neural networks, which can already replicate a large part of human abilities, are classical) and some quantum computation mixed in, presumably offering some yet unspecified fitness advantage. Still, the consensus is that it is improbable that quantum computation is taking place in the brain, since quantum states are extremely “fragile” and would decohere extremely rapidly in the environment of the brain...
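For reference, the no-cloning theorem invoked here has a short standard proof by linearity (textbook material, not part of the original comment). Suppose some unitary $U$ could clone arbitrary states:

$$U\big(|\psi\rangle \otimes |0\rangle\big) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle.$$

In particular $U|0\rangle|0\rangle = |00\rangle$ and $U|1\rangle|0\rangle = |11\rangle$, so by linearity

$$U\left(\tfrac{|0\rangle + |1\rangle}{\sqrt{2}}\,|0\rangle\right) = \tfrac{|00\rangle + |11\rangle}{\sqrt{2}} \;\neq\; \tfrac{(|0\rangle + |1\rangle)(|0\rangle + |1\rangle)}{2},$$

where the right-hand side is what cloning the superposition would require. So no single unitary can copy arbitrary unknown quantum states, which is the sense in which a perfect copier of a quantum-based consciousness would be impossible in principle.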
There are always going to be many different ways someone could object to a view. If you were a Christian, you’d perhaps be objecting that the existence of incorporeal God-given Souls is the real crux of the matter, and if I were intellectually honest I’d be devoting the first half of the post to arguing against the Christian Soul.
Rather than trying to anticipate these objections, I’d rather just hear them stated out loud by their proponents and then hash them out in the comments. This also makes the post less boring for the sorts of people who are most likely to be on LW: physicalists and their ilk.
Why do you assume that you wouldn’t experience the copy’s version of events?
The un-copied version of you experiences walking into the room, sitting there, hearing the scanner working, and hearing it stop; then that version of you experiences walking out. It seems like nothing special happened in this procedure; this version of you doesn’t feel anything weird, and doesn’t feel like their “consciousness split into two” or anything.
The copied version of you experiences walking into the room, sitting there, hearing the scanner working, and then an instantaneous experience of (let’s say) feeling like you’ve been teleported into another room—you’re now inside the simulation. Assuming the simulation feels like a normal room, it could well seem like nothing special happened in this procedure—it may feel like blinking and seeing the room suddenly change during the blink, while you yourself remain unchanged. This version of you doesn’t necessarily feel anything weird either, and they don’t feel like their “consciousness split into two” or anything.
It’s a bit weird that there are two futures, here, but only one past—that the first part of the story is the same for both versions of you. But so it goes; that just comes with the territory of copying people.
If you disagree with anything I’ve said above, what do you disagree with? And, again, what do you mean by saying you’re “pretty sure” that you would experience the future of the non-copied version?
Sure. But is any of this Bayesian evidence against the view I’ve outlined above? What would it feel like, if the copy were another version of yourself? Would you expect that you could telepathically communicate with your copy and see things from both perspectives at once, if your copies were equally “you”? If so, why?
Shall we make a million copies and then take a vote? :)
I agree that “I made a non-destructive software copy of myself and then experienced the future of my physical self rather than the future of my digital copy” is nonzero Bayesian evidence that physical brains have a Cartesian Soul that is responsible for the brain’s phenomenal consciousness; the Cartesian Soul hypothesis does predict that data. But the prior probability of Cartesian Souls is low enough that I don’t think it should matter.
You need some prior reason to believe in this Soul in the first place; the same as if you flipped a coin, it came up heads, and you said “aha, this is perfectly predicted by the existence of an invisible leprechaun who wanted that coin to come up heads!”. Losing a coinflip isn’t a surprising enough outcome to overcome the prior against invisible leprechauns.
Why wouldn’t it? What do you have against water pipes?
First off, would you agree with my model of your beliefs? Would you consider it an accurate description?
Also, let me make clear that I don’t believe in Cartesian souls. I, like you, lean towards physicalism; I just don’t commit to the explanation of consciousness based on the idea of the brain as a **classical** electronic circuit. I don’t fully dismiss it either, but I think it is worse on philosophical grounds than assuming that there is some (potentially minor) quantum effect going on inside the brain that is an integral part of the explanation for our conscious experience. However, even this doesn’t feel fully satisfying to me, and this is why I say that I am agnostic. When responding to my points, you can assume that I am a physicalist, in the sense that I believe consciousness can probably be described using physical laws, with the added belief that these laws **may** not be fully understandable by humans. I mean this in the same way that a cat, for example, would not be able to understand the mechanism giving rise to consciousness, even if that mechanism turned out to be based on the laws of classical physics (for example, if you can just explain consciousness as software running on classical hardware).
To expand upon my model of your beliefs, it seems to me that you first reject Cartesian souls and other such things on philosophical grounds and thus favour physicalism. I agree on this. However, I don’t see why you immediately assume that physicalism means your consciousness must be the result of classical computation. It could be the result of quantum computation. It could be something even subtler in some deeper theory of physics. At this point you may say that a quantum explanation is more “unlikely” than a classical one, but I think we can both agree that the “absurdity distance” between the two is much smaller than, say, between a classical explanation and a soul-based one, and thus we now have to weigh the two options much more carefully, since we cannot dismiss one in favour of the other as easily. What I would like to argue is that a quantum-based consciousness is philosophically “nicer” than a classical one. Such an explanation does not violate physicalism, while at the same time rendering a lot of points of your post invalid.
Let’s start by examining the copier argument again, but now with the assumption that conscious experience is the result of quantum effects in the brain, and see where it takes us. In this case, to fully copy a consciousness from one place to another you would have to copy an unknown quantum state. This is physically impossible even in theory, based on the no-cloning theorem. Thus the “best” copier that you can have is the copier from my previous comment, which just copies the classical connectivity of the brain and all the currents and voltages etc., but which now fails to copy the part that is integral to **your** first-person experience. So what would be your first-person experience if you were to enter the room? You would just go in, hear the scanner work, get out. You can do this again and again and again and always find yourself experiencing getting out of the same initial room. At the same time the copier does create copies of you, but they are new “entities” that share the same appearance as you and which would approximate to some (probably high) degree your external behaviour. These copies may or may not have their own first-person experience (and we can debate this further), but this does not matter for our argument. Even if they have a first-person experience, it would be essentially the same as the copier just creating entirely new people while leaving your first-person experience unchanged. In this way, you can step into the room with zero expectation that you may walk out of a room on the other side of the copier, in the same way that you don’t expect to suddenly find yourself in some random stranger’s body while going about your daily routine. Even better, this belief is nicely consistent with physicalism, while still not violating our intuitions that we have private and uncopiable subjective experiences.
It also doesn’t force us to believe that a bunch of water pipes or gears functioning as a classical computer can ever have our own first-person experience. Going even further, unknown quantum states may not be copyable, but they are transferable (see quantum teleportation etc.), meaning that while you cannot build a copier, you can build a transporter; you just always have to be in only one place at each instant.
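The transfer-without-copying point can be made precise with the standard teleportation identity (again textbook material, not from the comment itself): rewriting an unknown qubit $|\psi\rangle$ together with one half of an entangled pair in the Bell basis gives

$$|\psi\rangle_1\,|\Phi^+\rangle_{23} = \tfrac{1}{2}\Big[\,|\Phi^+\rangle_{12}\,|\psi\rangle_3 + |\Phi^-\rangle_{12}\,Z|\psi\rangle_3 + |\Psi^+\rangle_{12}\,X|\psi\rangle_3 + |\Psi^-\rangle_{12}\,XZ|\psi\rangle_3\,\Big],$$

so a Bell measurement on qubits 1 and 2 leaves qubit 3 in $|\psi\rangle$ up to a known Pauli correction, while destroying the state of qubit 1. The state moves, but at no point do two copies of it exist, consistent with no-cloning.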
Let me emphasize again that I am not arguing **for** quantum consciousness as a solution. I am using it as an example that a “philosophically nicer” physicalist option exists compared to what I assume you are arguing for. From this perspective, I don’t see why you are so certain about the things you write in your post. In particular, you make a lot of arguments based on the properties of “physics”, which in reality are properties of classical physics together with your assumption that consciousness must be classical. When I said that I find issue with the fact that you start from an unstated assumption, I didn’t expect you to argue against Cartesian dualism. I expected you to start from physicalism and then motivate why you chose to only consider classical physics. Otherwise, the argumentation in your post seems lacking, even if I start from the physicalist position. To give one example of this:
You say that “there isn’t an XML tag in the brain saying `this is a new brain, not the original`”. By this I assume you mean that the physical state of the brain is fungible, it is copyable, there is nothing to serve as a label. But this is not a feature of physics in general. An unknown quantum state cannot be copied; it is not fungible. My model of what you mean: “(I assume that) first-person experience can be fully attributed to some structure of the brain as a classical computer. It can be fully described by specifying the connectivity of the neurons and the magnitudes of the currents and voltages between each point. Since (I assume) consciousness physically manifests as a classical pattern, and since classical patterns can be copied, then by definition there can be many copies of ‘the same’ consciousness.” Thus, what you write about XML tags is not an argument for your position: physics does not force you to consider a fungible substrate for consciousness; it is just a manifestation of your assumption. It’s circular. A lot of your arguments which invoke “physics” are like that.
Why would the laws of physics conspire to vindicate a random human intuition that arose for unrelated reasons?
We do agree that the intuition arose for unrelated reasons, right? There’s nothing in our evolutionary history, and no empirical observation, that causally connects the mechanism you’re positing and the widespread human hunch “you can’t copy me”.
If the intuition is right, we agree that it’s only right by coincidence. So why are we desperately searching for ways to try to make the intuition right?
Why is this an advantage of a theory? Are you under the misapprehension that “hypothesis H allows humans to hold on to assumption A” is a Bayesian update in favor of H even when we already know that humans had no reason to believe A? This is another case where your theory seems to require that we only be coincidentally correct about A (“sufficiently complex arrangements of water pipes can’t ever be conscious”), if we’re correct about A at all.
One way to rescue this argument is by adding in an anthropic claim, like: “If water pipes could be conscious, then nearly all conscious minds would be instantiated in random dust clouds and the like, not in biological brains. So given that we’re not Boltzmann brains briefly coalescing from space dust, we should update that giant clouds of space dust can’t be conscious.”
But is this argument actually correct? There’s an awful lot of complex machinery in a human brain. (And the same anthropic argument seems to suggest that some of the human-specific machinery is essential, else we’d expect to be some far-more-numerous observer, like an insect.) Is it actually that common for a random brew of space dust to coalesce into exactly the right shape, even briefly?
You’re missing the bigger picture and pattern-matching in the wrong direction. I am not saying the above because I have a need to preserve my “soul” due to misguided intuitions. On the contrary, the reason for my disagreement is that I believe you are not staring into the abyss of physicalism hard enough. When I said I’m agnostic in my previous comment, I said it because physics and empiricism lead me to consider reality as more “unfamiliar” than you do (assuming that my model of your beliefs is accurate). From my perspective, your post and your conclusions are written with an unwarranted degree of certainty, because imo your conception of physics and physicalism is too limited. Your post makes it seem like your conclusions are obvious because “physics” makes them the only option, but they are actually a product of implicit and unacknowledged philosophical assumptions, which (imo) you inherited from intuitions based on classical physics. By this I mean the following:
It seems to me that when you think about physics, you are modeling reality (I intentionally avoid the word “universe” because it evokes specific mental imagery) as a “scene” with “things” in it. You mentally take the vantage point of a disembodied “observer/narrator/third person” observing the “things” (atoms, radiation etc) moving, interacting according to specific rules and coming together to create forms. However, you have to keep in mind that this conception of reality as a classical “scene” that is “out there” is first and foremost a model, one that is formed from your experiences obtained by interacting specifically with classical objects (billiard balls, chairs, water waves etc). You can extrapolate from this model and say that reality truly is like that, but the map is not the territory, so you at least have to keep track of this philosophical assumption. And it is an assumption, because “physics” doesn’t force you to conclude such a thing. Seen through a cautious, empirical lens, physics is a set of rules that allows you to predict experiences. This set of rules is produced exclusively by distilling and extrapolating from first-person experiences. It could be (and it probably is) the case that reality is ontologically far weirder than we can conceive, but that it still leads to the observed first-person experiences. In this case, physics works fine to predict said experiences, and it also works as an approximation of reality, but this doesn’t automatically mean that our (merely human) conceptual models are reality. So, if we want to be epistemically careful, we shouldn’t think “An apple is falling” but instead “I am having the experience of seeing an apple fall”, and we can add extra philosophical assumptions afterwards.
This may seem like I am philosophizing too much and being too strict, but it is extremely important to properly acknowledge subjective experience as the basis for our mental models, including that of the observer-independent world of classical physics. This is why the hard problem of consciousness is called “hard”. And if you think that it should “obviously” be the other way around, meaning that this “scene” mental model is more fundamental than your subjective experiences, maybe you should reflect on why you developed this intuition in the first place. (It may be through extrapolating too much from your (first-person, subjective) experiences with objects that seemingly possess intrinsic, observer-independent properties, like the classical objects of everyday life.)
At this point it should be clearer why I am disagreeing with your post. Consciousness may be classical, it may be quantum, it may be something else. I have no issue with not having a soul, and I don’t object to the idea of a bunch of gears and levers instantiating my consciousness merely because I find it a priori “preposterous” or “absurd” (though it is not a strong point of your theory). My issue is not with your conclusion; it’s precisely with your absolute certainty, which imo you support with circular argumentation based on weak premises. And I find it confusing that your post is receiving so much positive attention on a forum where epistemic hygiene is supposedly of paramount importance.
So in reading your comments on this post, I feel like I am reading comments made by a clone of my own mind, though you articulate my views better than I can. This particular comment you make, I don’t think it gets the attention it deserves. It was pretty revolutionary for me when I learned to think of almost every worldview as a model of reality. It’s most revolutionary when one realizes that even the arguably outdated Newtonian view falls into this category of model. It really highlights that actual reality is, at the least, very hard to get at. This is a severe issue with regard to consciousness.
Are you trying to say that quantum physics provides evidence that physical reality is subjective, with conscious observers having a fundamental role? Rob implicitly assumes the position advocated by The Quantum Physics Sequence, which argues that reality exists independently of observers and that quantum stuff doesn’t suggest otherwise. It’s just one of the many presuppositions he makes that’s commonly shared on here. If that’s your main objection, you should make that clear.
I would say that it is irrelevant for the points the post/Rob is trying to make whether consciousness is classical or quantum, given that conscious experience has, AFAIK, never been reported to be ‘quantum’ (i.e. that we don’t seem to experience superpositions or entanglement) and that we already have straightforward classical examples of lack of conscious continuity (namely: sleeping).
In the case of sleeping and waking up it is already clear that the currently awake consciousness is modeling its relation to past consciousnesses in that body through memories alone. Even without teleporters, copiers, or other universes coming into play, this connection is very fragile. How sure can a consciousness be that it is the same as the day before or as one during lucid parts of dreams? If you add brain malfunctions such as psychoses or dissociative drugs such as ketamine to the mix, the illusion of conscious continuity can disappear completely quite easily.
I like to word it like this: A consciousness only ever experiences what the brain that produces it can physically sense or synthesize.
With that as a starting point, modeling what will happen in the various thought experiments and analyses of conscious experience becomes something like this: “Given that there is a brain there, it will produce a consciousness, which will remember what is encoded in the structure of that brain and which will experience what that brain senses and synthesizes in that moment.”
There is no assumption that consciousness is classical in that, I believe. There is also no assumption of continuity in that, which I think is important as in my opinion that assumption is quite shaky and misdirects many discussions on consciousness. I’d say that the value in the post is in challenging that assumption.
The currently awake consciousness is located in the brain, which has physical continuity with its previous states. You don’t wake up as a different person because “you” are the brain (possibly also the rest of the body, depending on how it affects cognition, but IDK) and the brain does not cease to function when you fall asleep.
I agree on the physical continuity of the brain, but I don’t think this transfers to continuity of the consciousness or its experience. It is defining “you” as that physical brain, rather than the conscious experience itself. It’s like saying that two waves are the same because they are produced by the same body of water.
Imagine significant modifications to your brain while you are asleep in such a way that your memories are vastly different, so much as to represent another person. Would the consciousness that is created on waking up experience a connection to the consciousness that that brain produced the day(s) before or to the manufactured identity?
Even you, now, without modifications, can’t say with certainty that your ‘yesterday self’ was experienced by the same consciousness as you are now (in the sense of identity of the conscious experience). It feels that way as you have memories of those experiences, but it may have been experienced by ‘someone else’ entirely. You have no way of discerning that difference (nor does anyone else).
The conscious experience is not extricable from the physical brain; it has your personality because the personality that you are is the sum total of everything in your brain. The identity comes from the brain; if it were somehow possible to separate consciousness from the rest of the mind, that consciousness wouldn’t still be you, because you’re the entire mind.
I would… not consider the sort of brain modification you’re describing to preserve physical continuity in the relevant sense? It sounds like it would, to create the described effects, involve significant alterations in portions of the brain wherein (so to speak) your identity is stored, which is not what normally happens when people sleep.
I think we are in agreement that the consciousness is tied to the brain. Claiming equivalency is not warranted, though: The brain of a dead person (very probably, I’m sure you’d agree) contains no consciousness. Let’s not dwell on this, though: I am definitely not claiming that consciousness exists outside of the brain, just that asserting physical continuity of the brain is not enough by itself to show continuity of conscious experience.
With regard to the modifications: Your line of reasoning runs into the classic issues of philosophical identity, as shown by the Ship of Theseus thought experiment or simpler yet, the Sorites paradox. We can hypothesize every amount of alterations from just modifying one atom to replacing the entire brain. Given your position, you’d be forced to choose an arbitrary amount of modifications that breaks the continuity and somehow changes consciousness A-modified-somewhat into consciousness B (or stated otherwise: from ‘you waking up a somewhat changed person’ to ‘someone else waking up in your body’).
Approaching conscious experience without the assumption of continuity, but instead from the moment in which it exists, does not run into this problem.
(Assuming a frame of materialism, physicalism, empiricism throughout even if not explicitly stated)
Some of the scenarios you describe as objectionable would reasonably count as emulation in an environment you’d find disagreeable even within the framework of this post. Being emulated by a contraption of pipes and valves that’s worse in every way than my current wetware is, yeah, disagreeable even if it’s kinda me. Making my hardware less reliable is bad. Making me think slower is bad. Making it easier for others to tamper with my sensors is bad. All of these things are bad even if the computation faithfully represents me otherwise.
I’m mostly in the same camp as Rob here, but there’s plenty left to worry about in these scenarios even if you don’t think brain-quantum-special-sauce (or even weirder new physics) is going to make people-copying fundamentally impossible. Being an upload of you that now needs to worry about being paused at any time or having false sensory input supplied is objectively a worse position to be in.
The evidence does seem to lean in the direction that non-classical effects in the brain are unlikely: neurons are just too big for quantum effects between neurons, and even if there were quantum effects within neurons, it’s hard to imagine them being stable for even as long as a single train of thought. The copy losing their train of thought and having momentary confusion doesn’t seem to reach the bar where they don’t count as the same person? Weirder new physics, meanwhile, mostly requires experiments we haven’t thought to do yet, or experiments in regimes we’ve not yet been able to test. Whereas the behavior of things at STP in water is about as central to things-Science-has-pinned-down as you’re going to get.
You seem to hold that the universe may still have a lot of important surprises in store, even within the central subject matter of century-old fields? Do you have any kind of intuition pump for the feeling that there are still that many earth-shattering surprises left (while simultaneously holding that empiricism and science mostly work)? My sense of where surprises are likely left is not nearly so expansive, and this sounds like a crux for a lot of people. Even as much of a shock as QM was to physics, it didn’t invalidate much if any theory except in directly adjacent fields like chemistry and optics. And working out the finer points had progressively narrower and shorter-reaching impact. I can’t think of examples of surprises with a larger blast radius within the history of vaguely modern science. Findings of odd, as-yet-unexplained effects pretty consistently precede attempts at theory. Empirically determined rules don’t start working any worse when we realize the explanation given alongside them was wrong.
Keep in mind that society holds that you’re still you even after a non-trivial amount of head trauma. So whatever amount of imperfection in copying your unknown-unknowns cause, it’d have to both be something we’ve never noticed before in a highly studied area, and something more disruptive than getting clocked in the jaw, which seems a tall order.
Keep in mind also that the descriptions of computation that computer science has worked out are extremely broad and far from limited to electronic circuits. Electronics are pervasive because we have, as a society, sunk the world’s GDP (possibly several times over) into figuring out how to make them cheaply at scale. Capital investment is the only thing special about computers realized in silicon; computer science makes no such distinction. The notion of computation is so broad that there’s little if any room to conceive of an agent that’s doing something that can’t be described as computation. Likewise the equivalence proofs are quite broad: it can be arbitrarily expensive to translate across architectures, but within each class of computers, computation is computation, and emulation is provably possible.
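To make the substrate-independence point concrete, here is a toy sketch of my own (not from the post, and deliberately crude): the same abstract computation, an iterative Fibonacci, realized once as native Python and once as a program for a tiny made-up register-machine emulator. The emulator is slower and clumsier, but it computes the exact same function, which is all the equivalence claim requires.

```python
# Toy illustration of substrate independence: the same abstract
# computation (iterative Fibonacci) realized two ways -- once as
# native Python, once as a program run on a tiny register-machine
# emulator. Same function, different "hardware".

def fib_native(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# A minimal register machine. Instructions operate on a dict of registers:
#   ("set", r, v)      -> r = v (literal)
#   ("add", r, a, b)   -> r = a + b
#   ("mov", r, a)      -> r = a
#   ("djnz", r, addr)  -> decrement r; jump to addr if r is nonzero
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":
            registers[args[0]] = args[1]
        elif op == "add":
            registers[args[0]] = registers[args[1]] + registers[args[2]]
        elif op == "mov":
            registers[args[0]] = registers[args[1]]
        elif op == "djnz":
            registers[args[0]] -= 1
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

def fib_emulated(n):
    if n == 0:
        return 0
    program = [
        ("set", "a", 0),
        ("set", "b", 1),
        ("add", "t", "a", "b"),   # loop body starts here (pc = 2)
        ("mov", "a", "b"),
        ("mov", "b", "t"),
        ("djnz", "n", 2),         # run the body n times in total
    ]
    return run(program, {"n": n})["a"]

# The two realizations agree on every input tested.
assert all(fib_native(n) == fib_emulated(n) for n in range(20))
```

The emulator's instruction set is invented for the example; the point is only that equivalence is checked behaviorally, which is also why the translation can be arbitrarily expensive without affecting what gets computed.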
All of your examples are doing that thing where you have a privileged observer position separate and apart from anything that could be seeing or thinking within the experiment. You-the-thinker can’t simply step into the thought experiment. You-the-thinker can of course decide where to attach the camera by fiat, but that doesn’t tell us anything about the experiment, just about you and what you find intuitive.
Suppose for sake of argument your unknown unknowns mean your copy wakes up with a splitting headache and amnesia for the previous ~12 hours as if waking up from surgery. They otherwise remember everything else you remember and share your personality such that no one could notice a difference (we are positing a copy machine that more or less works). If they’re not you they have no idea who else they could be, considering they only remember being you.
The above doesn’t change much for me, and I don’t think I’d concede much more without saying you’re positing a machine that just doesn’t work very well. It’s easy for me to imagine it never being practical to copy or upload a mind, or having modest imperfections or minor differences in experience, especially at any kind of scale. Or simply being something society at large is never comfortable pursuing. It’s a lot harder to imagine it being impossible even in principle with what we already know, or can already rule out with fairly high likelihood. I don’t think most of the philosophy changes all that much if you consider merely very good copying (your friends and family can’t tell the difference; knows everything you know) vs perfect copying.
The most bullish folks on LLMs seem to think we’re going to be able to make copies good enough to be useful to businesses just off all your communications. I’m not nearly so impressed with the capabilities I’ve seen to date and it’s probably just hype. But we are already getting into an uncanny valley with the (very) low fidelity copies current AI tech can spit out—which is to say they’re already treading on the outer edge of peoples’ sense of self.