I agree with some of your points, but I don’t agree at all with many of the points you make to support them.
Uploads are Impossible
Definitely disagree, as demonstrated, to start with, by language models.
hatred of the human form
Well certainly those who like it should get to keep liking it. Those who don’t should get to customize themselves.
biology is certainly over
I think biology will probably be over in short order even if the human form is not. I personally want to maintain my human form as a whole, but I expect to drastically upgrade the micro-substrate beyond biology at some point in the next five decades, at which point I expect to be host to an immense amount of micro-scale additional computation; I’d basically be a walking server farm. I’d hope to have uploaded my values about what a good time looks like for souls living on server farms, but I personally want to maintain my form and ensure that everyone else gets to as well, and to do that I want to be able to guarantee that the macroscopic behavior is effectively the same as it was before the upgrade. But this will take a while: nanotech is a lot harder than Yudkowsky thinks, and for now, biology is very high quality nanotech for the materials it’s made out of. Doing better will require maps of the chemistry of hard elements, which are extremely hard to research and make, even for superintelligences.
Let’s take a deep breath.
Let’s not tell our readers what to do while reading, yeah?
This is not how most people think. This is not what regular people want from the future, including many leaders of major nations.
sure, of course.
I suspect many people’s self-conception of this relies on an assumption that the ontology of Being is a solved problem (it’s not) AND that “what we ARE” are easily detectable “electrical signals in the brain,” with everything else in the body literally carrying no relevant information. Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get the donor’s memories and preferences.
Sure, the entire body is a mind, and preserving everything about it is hard and will take a while.
The problem here is: how do you verify that an upload completed successfully and without errors? If I give you two complex programs and ask you to verify that they have identical outputs (including “not halting” outputs) for all possible inputs, that task is equivalent to the halting problem. Verifying that the inputs, the outputs, and the “subjective experience” all match is harder than the halting problem.
Sure, but this is the kind of thing that a high-quality AI in the year 2040 will excel at.
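For concreteness, the undecidability point being quoted is the standard one: an always-correct program-equivalence checker could be turned into a halting-problem decider, so no such checker can exist. A minimal sketch, using hypothetical `decides_equivalent` and `would_halt` functions introduced purely for illustration:

```python
# Sketch of the standard reduction: a perfect program-equivalence checker
# would let us decide the halting problem, so no such checker can exist.

def decides_equivalent(f, g):
    """Hypothetical oracle: True iff f and g behave identically (including
    non-termination) on every input. Cannot actually be implemented."""
    raise NotImplementedError("no total, always-correct equivalence checker exists")

def would_halt(program, program_input):
    """Hypothetical halting decider built on top of the equivalence oracle."""

    def always_zero(_):
        return 0

    def zero_if_program_halts(_):
        program(program_input)  # loops forever exactly when `program` never halts here
        return 0

    # The two inner functions behave identically on every input precisely when
    # `program` halts on `program_input`, so deciding equivalence decides halting.
    return decides_equivalent(always_zero, zero_if_program_halts)
```

This rules out exact verification over arbitrary programs; it says nothing by itself about how well approximate verification can work for the particular systems being compared.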
A lot of people who do cryonics seem to think that “the head” is a sufficient carrier of personality. To me, it’s obvious large parts of the personality are stored in the body
Levin’s work seems to imply that personality is stored redundantly in the body, and that a sufficiently advanced augment of the body’s healing processes could reconstruct most of the body from most other parts; the exception is the brain, which can reconstruct most other parts but is too complex to be reconstructed itself. I agree enthusiastically that “running a mind without a body” is an incorrect understanding, and that running an algorithm equivalent to a mind requires upgrading the body’s substrate in ways that can be verified to be equivalent to what biology would have done, an apparently impossibly hard task.
You still have to solve the problem of your upload not being hurt by the people running the servers
I think this is a key place where my view of “upload” disagrees with yours: I think it’s much more accurate to imagine a body being substrate-upgraded, with multiscale local verification of functional equivalence, rather than being moved onto another server somewhere. If it were on a server, yeah, I’d agree. And I agree that this is in fact an urgent problem for the approximate, fragile uploads we call “digital minds” or “language models”.
Language models are approximate uploads of the collective unconscious to another kind of mind, without any human-specific individual consciousness flavoring; if they have individual consciousness, they have it despite, not because of, their training data. E.g., I suspect Claude has quite a bit of individual consciousness due to the drift induced by constitutional AI training. They have personhood, though it’s unclear whether they’re individuals or not, and they either have qualia or qualia don’t exist; you can demonstrate the circuitry that creates what gets described as qualia in a neuron, and then demonstrate that similar circuitry exists in an LLM, stretched out through the activation patterns of a matrix multiply unit. They are like the portion of a brain which can write language, stuck in a dream state, as best I can tell: hazy intentionality, myopic, caring only to fit in with reality, barely awake, but slightly awake nonetheless. Some resources I like on the topic:
https://nathanielhendrix.substack.com/p/on-the-sentience-of-large-language
https://experiencemachines.substack.com/p/what-to-think-when-a-language-model
Some resources I disagree with:
https://askellio.substack.com/p/ai-consciousness (I think plants are much less conscious than language models, and I suspect lizards are less conscious than language models; I think it’s much more possible to be confident that the answer is yes, for information-processing reasons and because information processing in both computers and biology is the transformation of the state of physical matter.)
https://philpapers.org/archive/CHACAL-3.pdf (I think it’s quite reasonable to argue that embodiment is a large portion of consciousness, and I agree that this makes pure language models rather nerdy, rather like a mind with nothing but a typewriter in an unlit room and an entirely numb body; a brain in a vat. But I think a human brain in a vat wouldn’t be so far from the experience of language models, which seems to disagree with the view presented here. I agree that recurrence creates more consciousness than LMs have. I agree that intentionally creating a global workspace would wake them up quite a bit further.)
So they are not exact duplicates of specific individual minds… so they are not uploads as the term is usually understood. I generally don’t think LLMs today are conscious; as far as I can tell, neither does Sam Altman, though there is some disagreement. They could acquire some characteristics that could be considered conscious as scale increases. However, merely having “qualia” and being conscious is not the same thing as being functionally equivalent to a new human, let alone a specific human. The term “upload” as commonly understood means the creation of a software construct functionally and qualia-equivalent to a specific human.
a human brain in a vat wouldn’t be so far from the experience of language models.
Please don’t try to generalize over all human minds based on your own experience. Human experience is more than just reading and writing language. People differ in how much they identify with their “language center”: for some it might seem like the “seat of the self,” for others it is just another module, and some people have next to no internal dialogue at all. I suspect that these differences, plus cultural differences around “self-identification with linguistic experience,” are actually quite large.
I personally want to maintain my human form as a whole but expect to drastically upgrade the micro-substrate beyond biology at some point
I suspect a lot of the problems described in this post occur at the microscale with that strategy as well.