How to actually switch to an artificial body – Gradual remapping
Note: Please feel free to message me about inconsistencies, references you think should be added (or claims you think the current references don't back, or where there's evidence to the contrary). But please note, this is *obviously* not meant to be a definitive essay on the concept. In other words: "epistemic status: low-to-mid confidence, but I doubt you can go much beyond mid confidence on this type of subject".
One of the oldest transhumanist tropes is the idea of transferring your brain into a computer, thus achieving immortality and beyond-human abilities… etc, etc.
I am one of the people who believe this is actually possible, as in, not in the "Oh, in the year 3000 people will be able to do this" sort of way but in the "Barring tremendous progress in accurate delivery of Okazaki factors, thymus transplants and various mTOR inhibitors and drugs that are marketed as something other than an mTOR inhibitor but are actually probably just a confounded mTOR inhibitor, mind-uploading will become my passion project once I hit my 30s" way.
I Why popular portrayals of the idea are impractical
Considering this, it sort of distresses me how popular culture portrays the idea. When I say popular culture here I really mean "every single thing I've ever read or seen on the subject"… though chances are I might not be looking in the correct place?
To give a few examples:
a) Wikipedia’s page on the subject: https://en.wikipedia.org/wiki/Mind_uploading
b) Wikipedia’s page on transhumanism in general: https://en.wikipedia.org/wiki/Transhumanism
c) One of the most popular doomed ventures/scams around this area, the 2045 Initiative:
d) One of the more realistic doomed ventures/scams around this area, Nectome: https://nectome.com/the-case-for-glutaraldehyde-structural-encoding-and-preservation-of-long-term-memories/
e) That one episode of Black Mirror that reminded everyone The Go-Go's were a thing and might have just been an inside joke about making a bunch of neckbeards watch a half-hour lesbian soap opera: https://www.youtube.com/watch?v=8dnn31TBoM4
Basically, the ideas presented are as follows:
Using {science fiction / magic} we'll scan the brain (destroying it in the process) with very high accuracy and upload its function to a computer that uses {science fiction} to function exactly like a brain.
Using {magic} we'll scan the brain (leaving it intact in the process) with very high accuracy and upload its function to a computer that uses {science fiction} to function exactly like a brain.
Using {science fiction} we'll build neurons or higher-level brain components and, using {science fiction}, we'll surgically replace the biological brain with these components.
Using {science fiction} we'll keep the brain alive and well forever and make it control a robot body (also one of the stupidest ideas around, because keeping the brain alive and well forever "outside" of a human body is fundamentally harder than keeping it alive and well inside one).
When I say "science fiction" I mean "Yes, this is theoretically not impossible as far as we understand the world, but it's in the realm of Dyson Spheres, gray goo and digital picobots; we are a few hundred generations of tooling away from even figuring out if we can do this and what the implications would be".
When I say “magic” I mean “This pretty literally breaks the laws of physics/reality as agreed upon via strong experimental evidence”.
II The actual problem
I believe the reason most of the ideas here sound so far-fetched, at least to me, is not that the problem is fundamentally that hard, but that they refuse to tackle the actual question being begged here:
"What exactly are we trying to transfer and/or preserve?"
The short TL;DR is: Consciousness.
The longer TL;DR might be something like:
The feeling of attention and awareness plus the partial control “I” seem to have over them.
My mental map of the world and the ability to gradually update it.
Experiencing said mental map as "the world" and having it be shared by other creatures that I see as "similar" to myself.
The feeling of remembering things and the ability to somewhat accurately remember stuff.
The “feeling” that “I” exists, the feeling of “self”.
The full picture is something like:
I'm not exactly sure what I want to transfer and/or preserve, but ideally, I'd like it to be similar to whatever is happening right now. Much the same way US judges can't define "porn" but know it when they see it, we can't define "consciousness" but know it when we are experiencing it (or… being it, rather?).
III A brief detour into non-dualism
Now, in order to even accept the idea of being able to transfer "our brain" or "our consciousness" or "our feeling of self" or "our contiguous feeling of being a self-aware agentic entity", we must be pretty sold on the idea of non-dualism. Basically, accept the fact that our feeling of "being us" is simply the interaction of a bunch of processes going on in the brain and in the body.
I think there's an "infantile" way to think about non-dualism, as perfectly illustrated in the Sam Harris book Free Will.
The problem with this infantile way is that it basically boils down to "the self is nothing" or "the self is everything". I don't think this view really holds sway, at least from a transhumanist perspective, partially because it seems unable to answer the fundamental question "Why do you then even care if you live or die?" or even "Assuming nobody would be saddened by it, what's the argument against just killing yourself right now?".
I would claim some backing for this position by looking at Buddhism, namely the fact that once it ended up defining "enlightenment" as "kinda like being dead" it had to cram in the idea of "Oh, but you can't just kill yourself, there's reincarnation you see; achieving enlightenment is the only way to truly die".
A much better non-dualistic way of thinking about this is, in my opinion, closer to the kind of things Daniel Dennett talks about in his books: From Bacteria to Bach and Back, The Mind's I and Consciousness Explained. I truly recommend all of them, especially if you have a "typical" sciency mentality… Daniel Dennett is basically unique among modern philosophers of mind in that he seems to have a deep understanding of computers, machine learning, neuroscience, biology and the scientific method, so you don't end up reading him and feeling like his main hypothesis is based on him misunderstanding a simple phenomenon or lacking information that's "common knowledge" in a specific research niche.
I can’t really do justice to his views in a few paragraphs, hence why I’m really trying to push people to read him, but it boils down to something like:
Yes, consciousness and the feeling of "self" are real, but they are not unique to the human brain, nor to biological machines in general.
No, you don't need a ghost-in-the-machine to have a "self" or "consciousness".
Yes, the universe is (or at least could be) deterministic; that should change literally nothing about how we go about living life.
No, just because there isn't a ghost-in-the-machine and/or the universe is deterministic, that doesn't mean "there is no self"; the idea of "self" is as real as the idea of "tree" or "car" or "human".
Basically, the non-naive view of non-dualism accepts the fact that there's nothing "special" or "magical" about human brains, but also that the way we think of "self" is relatively OK, and the feeling of "being ourselves" or "being conscious" is about as real as anything else in the universe.
IV What we want to preserve might not be so large
So, what we want to preserve could basically be boiled down to "the feeling of being an agentic self that experiences the world, has memory of its past activity, is able to create new memories, feels kinda human, and feels (and to some extent is) similar to other humans".
But, what I'm sure most people would agree we don't really need/want to preserve are things like:
The ability to control an exact replica of our body (kinda pointless if you want a new one)
The unconscious processes that make us randomly wake up at night when we hear a random sound (apologies to any reader still living in a primeval hunter-gatherer tribe, where hearing a loud noise at night might actually mean a predator or another human is trying to sneak up on you and kill you)
Indeed, I think once you boil it down to some thought experiments, people are able to give up even more ground.
Let me lay out some ideas/examples to back this up:
We can probably be "us" without various parts of the brain. The brainstem seems to be in large part associated with transmitting and receiving information from the peripheral nervous system. This includes what we'd call the autonomic nervous system (generally speaking, we have little control over and awareness of it; most people can't consciously control their liver enzyme production) and the somatic nervous system, which is critical for that feeling of "agency" mentioned above, since it basically gives us the ability to feel/control our body… that being said, paraplegics can lose control and awareness of most of the somatic nervous system and they still seem to be "selves"; they are as conscious as us. Now, you can't really "remove" this in a normal human, because their body would die, but it is arguably not a critical part of consciousness.
Many areas of the cerebral cortex aren't essential to life; damaging/removing them will only destroy the function associated with that area, e.g. the visual cortex in the occipital lobe, the auditory cortex in the temporal lobe, etc. These experiments are mainly done in animals and rarely done nowadays, and the only human examples are stroke patients where the removal is usually not very "clean" (though they still seem to be conscious).
Some references discussing the removal, damage and role of the auditory and visual cortices, which I find interesting and largely supportive of this view:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689229/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3476047/
A corpus callosotomy mostly separates the two hemispheres of the brain. Experimental evidence seems to indicate that these hemispheres are then not able to communicate (see example). Both are able to hear and perform actions with their respective side of the body, but subjects seem to only be "aware" of one of the hemispheres. I think there's an argument to be made, based on the feelings that these patients describe and their actions, that they are "conscious" in only a single hemisphere (it could be argued that the other hemisphere is also "conscious" but unable to communicate). So, assuming that you are awake during the surgery (people are awakened during it but are kept under for the "splitting the skull open and closing it shut" bits), would this change of conscious experience be analogous to "death", or so undesirable as to prefer death to it? Based on the fact that these people lead overall happy lives… I'd assume the answer is "no". You can go even further with this and look at a total hemispherectomy (read: one hemisphere is removed), but due to the sheer amount of damage done to the brain, recovery is usually less than ideal; still, patients who do recover seem pretty conscious.
Your brain can be pretty tiny and you can still be conscious. Granted, there are few examples to go by here, but that may be partly because we don't really notice "brain shrinkage" in many cases if it's gradual or a defect from birth. Assuming the neurosurgeons reporting this aren't lying to get a publication, though, you can get to pretty severe levels of shrinkage: https://sci-hub.tw/https://doi.org/10.1016/S0140-6736(07)61127-1 and still maintain an IQ of 75 and not consider yourself a philosophical zombie.
So, now ask yourself:
Would a hypothetical paraplegic (1) patient with more than half of his brain mass removed (4), another half of his brain mass "removed from consciousness" (3), and potentially even more brain mass removed from the cerebral cortices (2)… arguably still be "conscious", still be a "self"? A non-ideal way to live… but still a life, and not so bad if the advances of science could bring you back to full speed in a few dozen years.
Now, this is obviously a pretty big “if”, but under this optimistic model a conscious human-like brain could simply be:
A very simplified version of the brainstem to gather and integrate sensations
~1/4 of the frontal lobe to do most of what we regard as "conscious thinking" (note: the frontal lobe includes most of the areas we associate with things like "language")
Some analog of the mammillary bodies, hippocampus, amygdala and corpus striatum to store and bring back memories
Some version of at least some of the cerebral cortices that receive, interpret and "think about" sensations.
A few extra bits and pieces that aren’t significant in terms of computations as far as the brain goes but might in some way be critical to the feeling of self.
Please assume this is an over-simplified idea of the human brain, an overly optimistic example and is not fully factually correct… that’s what I’m assuming.
My main point here isn't to describe creating a human brain; my main point is to say: it seems intuitive that we don't need all of it to be "conscious", to "experience being a self".
V Gradual transfer
So, assuming that:
Consciousness is not magic and we can basically “tell” if we are conscious and “how conscious” we are (think the state between sleeping and awake for a picture of how you can be conscious but not fully so)
The brain is not magic and can be partially distilled into components with separate roles
The brain doesn't fulfill all its roles "optimally", and certain roles (e.g. autonomic control) are not at all required for having a conscious experience.
Our advancements in hardware, software and understanding of neuroscience and neurosurgery won't experience a singularity; they'll advance slowly but steadily as they have done for the last 50 years.
How could we realistically try to transfer "I" into something that is not our body?
I think the answer boils down to "gradually". Gradual transfer would essentially be the step-by-step transfer of brain functionality onto peripheral devices, giving said devices more and more computing power and responsibility, allowing them to communicate with each other, and hoping that over time you can basically transfer the sensation of "I" to said devices.
Let me give a few examples of where we are currently at in terms of such peripheral devices. Not designs that exist on paper, but devices that exist today, that are being used by humans, and that can be bought by you (in some cases only with a doctor's approval, though):
a) Using the somatic nerves in part of your tongue to see. Now, I'm not sure if any "sighted" person has tried a device like BrainPort, but judging by the testimonies of users who were born with sight (and later lost it), it seems to work pretty much like seeing, just with a very low resolution.
b) BCI-controlled robotic arms… this requires pretty "safe" direct implants on the surface of the motor cortex. And in case you were wondering, it's also recently been done to some extent with noninvasive sensors (sitting on top of your skin). Note, this includes "feeling" the actual arm.
c) Using computers for language processing and mathematics. This is something we all do, to some extent. Ever used a speed-reader? Congrats, you just out-sourced/modified a big part of how your brain reads language. Ever had some complex arithmetic to compute and pulled out a Python shell or a phone? Yep, outsourcing your ability to do math.
Well, I’d argue that this is basically the area we should be focused on if we care about consciousness/self transfer.
For example, think about robotic arms controlled via RL training, where you can give them even more complex tasks like "pick up that object". Given the previously shown robotic arm, I see no reason we couldn't control one where we just have to "think" about the movement and then have the arm do it (e.g. via electrodes connected to the prefrontal cortex).
If you look at various DIY hacks using noninvasive BCIs, I think this enters the realm of the relatively trivial, if you spend enough time training your brain and can afford a few hundred thousand for a robotic arm to play around with. Here, are, four, examples.
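For a flavor of what those DIY projects actually do on the signal side, here is a minimal sketch of the classic motor-imagery pipeline: band-power features extracted from EEG epochs, fed to a linear classifier. The sampling rate, array shapes and the synthetic data are all assumptions for illustration, not a description of any specific project:

```python
# Minimal motor-imagery BCI sketch: mu/beta band power + linear classifier.
# The data here is random noise standing in for recorded EEG epochs.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed sampling rate in Hz

def band_power(epochs: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Mean power in [lo, hi] Hz per channel; epochs is (n, channels, samples)."""
    freqs, psd = welch(epochs, fs=FS, axis=-1)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[..., band].mean(axis=-1)

# Fake recordings: 200 trials, 8 channels, 2 seconds each.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 8, 2 * FS))
labels = rng.integers(0, 2, size=200)  # 0 = rest, 1 = imagined movement

# Mu (8-12 Hz) and beta (18-26 Hz) band power are the classic features.
features = np.concatenate(
    [band_power(epochs, 8, 12), band_power(epochs, 18, 26)], axis=1)

clf = LinearDiscriminantAnalysis().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```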
I think a similar thing could arguably be feasible for arithmetic, where we basically attach the calculator to the prefrontal cortex, or to our temporal lobe. Or attaching a page-scanning + summary-making device (say, using a simple camera and a fancy attention network) close to one of our language processing centers and teaching ourselves to "read a book diagonally" using it, then deciding if we want to go more in-depth and continue reading it normally (or using a second speed-reader peripheral).
Ok, in saying "attach some electrodes to the frontal lobe" I might seem to be doing a lot of hand-waving, basically invoking the good old "once science progresses enough this will be trivial". But I will point you back to examples a) and b), the electrode-controlled arm and the tongue-perceived camera. These seem basically just as "hand-wavy"; the locations they were placed in were not selected by optimizing for specific neural pathways, they were selected for ease of access. The brain literally "learns" how to interpret the data coming to and from these devices and maps them to the correct locations.
The science behind how that happens is basically hand-wavy, since we don't really know how it works on a deep level, but that's not bad, that's amazing. We can skip one of the hardest steps of making a brain, understanding how it works on a deep level, because the brain itself is able to adapt; it's able to map functionality onto different parts of itself, the nervous system, and external devices. This has been known for quite a while, in that the brain can map and re-map different functions onto different parts of itself. The real question is how much we can learn to map onto more complex peripheral devices.
What seems to be lacking in current devices is something that we closely associate with consciousness, namely introspection and memory access and storage. Memory accessing/storage hasn't been tested in humans partially due to physical limitations (as in, the parts related to memory are literally deep inside our brain, and reaching them with electrodes is very dangerous). But, in principle, we can reconstruct images from a brain fMRI.
fMRI resolution is a topic I'm not qualified to speak on; after digging into the subject for a previous project, it seems like an area with plenty of *s.
Suffice to say, the best resolutions you will find outside of labs might be something like 500μm at a frequency of 10Hz, to be generous. Based on what I've found (reference 1, reference 2), it seems like this could improve, but there's a "hard limit" at the hundreds or a few dozen microns (no frequency information; let's assume ~100Hz?).
The image reconstruction study used 2000μm (that is, (2000μm)^3 voxels), and doesn't specify the frequency (maybe not that relevant?).
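As a back-of-envelope check on what those resolutions imply, assuming a whole-brain volume of roughly 1.3 liters (my figure, not from the references):

```python
# Back-of-envelope: how many voxels does a whole-brain scan yield at a
# given isotropic resolution, and how many samples/s does that imply?
# Order-of-magnitude estimates only; 1.3 L brain volume is an assumption.

BRAIN_VOLUME_MM3 = 1.3e6  # ~1.3 liters

def voxel_count(resolution_um: float) -> float:
    """Number of isotropic voxels with the given side length in microns."""
    side_mm = resolution_um / 1000.0
    return BRAIN_VOLUME_MM3 / side_mm ** 3

for res_um, freq_hz in [(2000, 1), (500, 10), (100, 100)]:
    n = voxel_count(res_um)
    print(f"{res_um:>5} um voxels: {n:.1e} voxels, "
          f"{n * freq_hz:.1e} samples/s at {freq_hz} Hz")
```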
With electrode arrays (the technology I'm mainly harping on about here), it seems that practical sizes go down to 10μm per electrode; after that, filtering noise is too hard for now.
But, suffice to say, I think if we can accurately reconstruct an image from an fMRI, it's within the realm of the possible right now to reconstruct or read a memory with sufficient electrodes. This argument is kinda far-fetched… however, considering people are studying this very thing in-vivo and getting results, I'd say that it's not in the realm of science fiction, closer to 10 years give or take 5.
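To make the decoding idea concrete, here is a toy version of the linear-decoder approach: ridge regression from (synthetic) voxel activity back to pixel intensities. Real reconstruction studies decode into learned feature spaces rather than raw pixels; this is purely a shape-level illustration under made-up assumptions:

```python
# Toy fMRI decoder: learn a linear map voxels -> pixels with ridge regression.
# Synthetic data; the "encoding" matrix plays the role of the true
# stimulus-to-voxel response, which a real study would never know.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, n_pixels = 600, 2000, 16 * 16

images = rng.normal(size=(n_trials, n_pixels))
encoding = rng.normal(size=(n_pixels, n_voxels)) / np.sqrt(n_pixels)
voxels = images @ encoding + 0.5 * rng.normal(size=(n_trials, n_voxels))

decoder = Ridge(alpha=10.0).fit(voxels[:500], images[:500])
reconstructed = decoder.predict(voxels[500:])
corr = np.corrcoef(reconstructed.ravel(), images[500:].ravel())[0, 1]
print(f"pixel-wise correlation on held-out trials: {corr:.2f}")
```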
To recapitulate:
We can connect peripheral devices via electrodes to: the peripheral nervous system, the scalp, and surface portions of the brain. Said devices can be controlled, and in the first and last case "felt", using our brain.
In principle, I think there's an argument this could be done with more of the brain's functionality, but right now there's not much of an argument for doing so. Invasive surgery that requires drilling a small hole through the skull is not warranted for "doing math better" or "reading faster".
I also believe there's an argument that nothing stops this sort of electrode-mapping from accessing anatomically harder-to-reach areas, other than, again, high risk and low reward.
Peripheral devices that fulfill some functions we normally achieve in our brain exist (e.g. eye-tracking cameras, RL-trained robotic arms that can follow NL instructions, reading devices that can give a summary of a text at various resolutions or extract keywords). But to my knowledge only simplistic versions (e.g., in the case of robotic arms, extrapolating complex movements from simple signals) have been attached to human brains directly, partially because the objective there was to safely and quickly treat an existing ailment, not gradual transfer.
We don't have to be very exact when mapping brain function to a peripheral device connected to the nervous system; the brain can learn to do the remapping.
VI Ok, here comes the hand-waving bit
So, assuming you can out-source a bunch of brain functionality to peripheral devices, you'd still have the issues of:
a) Those things not being “conscious” on their own, without the brain.
b) The brain being speckled with electrode arrays (we can have a few implanted relatively safely; once you get into the hundreds, simple things like infections might become an issue, since there are too many points of failure. Implanting through a single hole is doable, but then wire size and getting the electrodes in the correct place become an issue).
So how do you fix this ?
Well, you don't map many peripherals to the brain at once; you map a few peripherals via electrode arrays implanted in specific places that allow for generic communication (e.g. the thalamus, various areas of the prefrontal cortex and various areas in the visual and motor cortex). Then:
You let the brain “transfer” functionality to the peripherals. Impose rules on yourself such as “I will never do mental math again outside of my math-specific circuit” or “I will always try to interact with objects based on what my image interpretation circuit tells me, only resorting to actual vision in critical situations”. As more functionality is being transferred, you don’t increase the bandwidth, you instead increase the complexity of the peripherals.
Take the simple arithmetic peripheral, for example. Instead of adding different discrete functions to it, you could try to map all the "math" functionality of your brain onto it. So suddenly, you're not giving it numbers anymore, you're giving it an image of an urn with 5 balls extracted from it (3 blue, 2 black), and it tells you "probability of 2/5 that the next ball is black". A toy sketch of this interface escalation follows below.
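To make "increase the complexity of the peripheral, not the bandwidth" concrete, here is that sketch. Every name here (MathPeripheral, UrnObservation) is invented for illustration; the point is only that the same narrow channel first carries raw operands and later carries whole structured observations:

```python
# Hypothetical math peripheral: the interface grows in complexity over
# time while the brain-to-peripheral channel stays narrow.
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class UrnObservation:
    drawn_blue: int
    drawn_black: int

class MathPeripheral:
    def add(self, a: float, b: float) -> float:
        # Stage one: the brain ships raw operands over the channel.
        return a + b

    def next_ball_black(self, obs: UrnObservation) -> Fraction:
        # Stage two: the brain ships a structured observation and the
        # peripheral does the whole inference (here, the naive empirical
        # frequency matching the 2/5 example in the text).
        total = obs.drawn_blue + obs.drawn_black
        return Fraction(obs.drawn_black, total)

p = MathPeripheral()
print(p.add(2, 3))                              # 5
print(p.next_ball_black(UrnObservation(3, 2)))  # 2/5
```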
Or, take the object recognition peripheral: instead of having it pipe labels or formed images to the brain, you make it classify them and pipe them to the appropriate circuit (e.g. "this is an image of an equation, I shall send it to the math peripheral", or "this is an image of something I could stumble over, I shall send it to the motion-controlling peripheral").
Peripheral communication can be increased with little cost. Building on point one, the throughput to and from the brain might be limited, but throughput between peripherals doesn't need to be. You can have the "vision" and "math" peripherals interact, such that one sees an equation and the other solves it or extracts insights, and when someone asks you "What's the solution of that equation?" or "How would you simplify that?" you can simply query the math peripheral directly (and, provided no answer, you can "tell" the vision peripheral to send the equation to it, and, provided no answer, you can start doing higher-level tasks like looking at it with your own eyes).
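A toy sketch of that peripheral-to-peripheral bus, again with entirely hypothetical names: queries between peripherals are cheap, and the brain only falls back to "its own eyes" when the chain fails:

```python
# Hypothetical peripheral bus: fast links between peripherals, with the
# brain as the last-resort fallback in the escalation chain.
from typing import Callable, Optional

class Bus:
    def __init__(self):
        self.handlers: dict[str, Callable[[str], Optional[str]]] = {}

    def register(self, name: str, handler: Callable[[str], Optional[str]]):
        self.handlers[name] = handler

    def query(self, name: str, payload: str) -> Optional[str]:
        return self.handlers[name](payload)

bus = Bus()
bus.register("vision", lambda _: "2*x + 4 = 10")  # the last equation it saw
bus.register("math", lambda eq: "x = 3" if eq == "2*x + 4 = 10" else None)

def brain_asks(question: str) -> str:
    # Try the math peripheral on whatever the vision peripheral last saw;
    # only if the chain yields nothing does the brain use its own eyes.
    equation = bus.query("vision", question)
    answer = bus.query("math", equation)
    return answer if answer else "look at it with my own eyes"

print(brain_asks("What's the solution of that equation?"))  # x = 3
```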
Introspection. The big thing that is missing here is introspection, which seems to be pretty relevant to the overall concept of "consciousness". So how do you go about doing that? Well, in two ways:
a) You build peripherals specifically dedicated to understanding the other peripherals. Whenever you have a question like "how does this or that work?" or "why is this or that giving an error?", you try to phrase it as a mental question for the evaluation peripheral, which in turn analyzes the others and sends you back an answer.
b) You build a "debugging interface" into the peripherals, which would likely be more computationally complex than the peripherals themselves, but doable. Think of a complex network for tracking, classifying and creating bounding boxes for objects; this is partially "split" into functions, such that you can try to add a secondary debugging algorithm on top, an algorithm that you can query to ask things like "roughly why did you classify this as {Y}?" or "around what point did you decide area {X} did not contain a relevant object?" and get answers like "Because this, this and this rough shape that come up when I apply the convolution operation strongly remind me of a {Y}" or "Because the color in area {X} is pretty homogeneous and I see no shadows".
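For a sense of what the simplest version of (b) could look like: occlusion sensitivity is a well-known trick for asking a classifier "roughly why did you call this a {Y}?", by graying out patches of the input and watching the score drop. The classifier below is a stub standing in for any image model; this is a minimal sketch, not a proposed implementation:

```python
# Occlusion-sensitivity sketch: slide a blank patch over the image and
# record how much the class score drops; large drops mark the regions
# that drove the decision. stub_classifier stands in for a real model.
import numpy as np

def stub_classifier(image: np.ndarray) -> float:
    """Pretend score for class Y; peaks when the top-left region is bright."""
    return float(image[:8, :8].mean())

def occlusion_map(image: np.ndarray, patch: int = 8) -> np.ndarray:
    base = stub_classifier(image)
    h, w = image.shape
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # blank out one patch
            drops[i // patch, j // patch] = base - stub_classifier(occluded)
    return drops  # big values = "this region mattered"

image = np.random.default_rng(0).random((32, 32))
print(occlusion_map(image).round(2))
```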
If these seem like pretty complex questions to ask of a neural network, that's because they are. There's some research in the area of network analysis (e.g. reference 1, reference 2), but it's mostly focused on trying to understand a network's internals in order to optimize it, or to spot the values and times during training which lead to errors.
There's no field of analyzing neural networks to interpret them in a human-brain-compatible way, or of building neural networks that are more human-brain-compatible in the way that they operate. But conceptually, such interpretations are possible.
See, for example, deep-dream style projects, which simply extract information from the first few layers of a network that's fed noise… you can recognize "dog-like" and "cat-like" things in the outputs (because the networks were trained on such images), and from those sorts of patterns it's pretty easy to generalize to answers like "because it reminded me of this specific shape which I associate with dogs".
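The core of those deep-dream style visualizations is just gradient ascent on the input. A minimal sketch in PyTorch, using a tiny random-weight network as a stand-in for a trained one (with a trained network, the same loop is what surfaces those "dog-like" patterns):

```python
# Activation maximization: start from noise, do gradient ascent on the
# input so one conv channel fires strongly. The two-layer random net is
# a stand-in for a trained model.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(
    nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(),
)

image = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = net(image)[0, 7]  # channel 7, an arbitrary choice
    loss = -activation.mean()      # ascend on its mean activation
    loss.backward()
    optimizer.step()

print("mean activation after ascent:", float(net(image)[0, 7].mean()))
```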
You hope that your consciousness sort of starts mapping onto these peripherals.
Again, this assumes that senses, memory, the feeling of location in space, agentic decision making, the ability to read things, the ability to map things into words and pronounce them, and other such functions basically accumulate into the conscious experience that "I" call "I".
What we'd pretty much be doing here, by out-sourcing functionality to peripherals and letting them handle more and more complex tasks, introspect about themselves, store things in long-term memory, and dictate, record and "think about" the behavior of other peripherals… is more or less gradually letting them do the things that we associate with "I".
So, provided enough space for computation (and again, these are standard computers you can connect to a power source or put in a car, so we can have plenty of computing power here), the feeling of "I" might transfer to "these complex circuits external to my body that do part of my thinking, memorization and sensory perception for me". And if this is true, we could progress by transferring more and more of the things that we hold to be "important" about "ourselves" onto the peripherals, essentially moving "I" onto the peripherals and leaving the brain as: redundant storage for some of the information, a double-checker for the decisions of the peripherals, and storage for information that we considered irrelevant (but might decide we care about in the future).
VII In Conclusion
The more I write about this, the more I realize it's hard to paint a good picture of this view of consciousness transfer. I am not very certain of my knowledge around the subject (and thus of how possible all of this is), double-checking that knowledge is time-consuming, and trying to make it "make sense" in under 5,000 words is even more difficult.
I think it's very reasonable to say that this view of gradually giving up brain functions to peripherals, and tweaking said peripherals until we "understand them" and they "become part of us", is a much more realistic view, given current and near-future technology, than the other proposed ideas for consciousness transfer.
It sounds a bit more strange and convoluted, but that is exactly because it's achievable. A Dyson Sphere is much easier to explain than a plane, but that's not because building a Dyson Sphere is easier than building planes; it's exactly because building a plane is something we can do, so we can't just skim over "irrelevant details", because we realize they are important. We can just wave and say "Ahm, super-heat-resistant carbon nanotubes and super-fast-absorption solar panels and {insert SF technology}" and we have a Dyson Sphere. But get 0.x% of the alloy composition for the wings of a Boeing 700 wrong and suddenly it's not a plane anymore.
Scanning a human brain and uploading it to a computer might seem a bit more "intuitive" than gradually releasing control to external devices and hoping that they "gain consciousness which we consider to be part of ourselves", but that's because in the former scenario we use the hand-wavy idea of "scan a human brain".
But scanning a brain is not currently possible; it's so far away that you can't even name the chain of technologies required to make it possible.
Furthermore, scanning a brain through {magic}, transferring it to a computer through {magic}, and having said computer then feel like "I" because {magic} is an improbable stack of assumptions. With the gradual transfer method we actually have the amazing option of stopping: if it doesn't seem to work, if the thing we are transferring to doesn't feel like "I" or like "consciousness", then we can tweak it, we can change our approach, or we can give up and not be dead.
Is the idea that I presented here possible with current technology (or technology that will be available in 5-10 years, e.g. 4nm transistors)? I don't know. I do know that it doesn't seem that far-fetched and most of the components needed are already there; there's no need for exponential improvement, linear improvement will do. Plus, the most complex part of the idea, the remapping of function, is left to the most intelligent and well-suited entity in the system: the brain itself.
It is also more plausible in that the remapping doesn't have to be instant (like a scan + transfer); you could well take 30-40 years to remap your brain, adding little bits at a time, so that by the time your body is failing you can just "decouple" from it. You can be "you", as a consciousness that feels contiguous, just in a different artificial body, running on different circuitry. We are basically switching from reading a few exabytes of data in seconds to allowing said exabytes of data to move themselves over dozens of years, while allowing pruning of irrelevant information (e.g. the ability to control the old body).
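As a back-of-envelope on that framing (taking "a few exabytes" at face value, which is the text's loose figure rather than a measurement):

```python
# Bandwidth contrast between scan-and-upload and gradual remapping.
# The brain-size estimate is the essay's loose "a few exabytes" figure.
SECONDS_PER_YEAR = 3.15e7
brain_estimate_bytes = 3e18  # "a few exabytes", assumed
years = 35

instant_rate = brain_estimate_bytes / 10  # a scan read out in ~10 seconds
gradual_rate = brain_estimate_bytes / (years * SECONDS_PER_YEAR)

print(f"scan-and-upload: ~{instant_rate:.1e} bytes/s")  # ~3e17 B/s
print(f"gradual remap:   ~{gradual_rate:.1e} bytes/s")  # ~2.7e9 B/s
```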
Finally, my goal here is not to claim “this is the definitive strategy and the perfect description of it”, but rather, to say that this seems like a better direction to focus attention towards than the quack brain-scan ideas.
It’s still well in the realm of “transhumanist quackery”, but you could actually design experiments and devices to accomplish this, at least to get part of the way there.
Enhancing yourself is great. I would gladly plug in extra memory and better indexing algorithms to my brain. Throw in rocket boosters and indestructible bones and I'll empty my wallet. This I get and support. But what I have never understood is when people talk about uploading or transferring their consciousness. I wouldn't mind creating copies of myself, virtual or otherwise, but it wouldn't be me me. For some reason I have a very strong fear of continuity errors. Maybe you could fool me by replacing my fleshy brain part by part with mechanical hardware and then slowly outsourcing different functionalities part by part to a cloud-based solution until I no longer have any physical presence. But I fear this will just lead to a day when I will have a sudden realisation that I am not actually me me, and the following existential crisis will lead to unexpected outcomes.
This fear of continuity breaks is also why I would probably stay clear of any teleporters and the like in the future.
In case you haven’t read it: https://existentialcomics.com/comic/1
But overall I agree, this "feeling" is partially the reason why I'm a fan of the insert-slightly-invasive-mechanical-components + outsource-to-external-devices strategy. As in, I do believe it's the most practical, since it seems to be roughly doable with non-singularity levels of technology, but it's also the one where no continuity errors can easily happen.
Minor nitpick, but in section IV I think it's unlikely that hemispherectomy and brain shrinkage can stack linearly while preserving a human-like consciousness, because successful hemispherectomy requires the preserved half to rewire some structures to take over the functions normally carried out by the removed half (source), whereas halving the amount of tissue greatly reduces the resources available for that. The situation reminds me of neural net compression. We can prune or use quantization, compressing the net by some factor, but the techniques don't stack perfectly because they eliminate some of the same redundancies.
Slightly more relevant is the evolutionary argument that any easy change to the brain that decreases its power consumption or volume must give up something very evolutionarily valuable, since brains use a huge amount of energy and increase deaths from childbirth. That is, the architecture of meat brains isn’t too inefficient. While this doesn’t refute the idea of transferring consciousness gradually, it makes me skeptical that we can do so with general-purpose hardware economically.