Human simulator mesa optimizer running on a non-agentic superintelligence
Even if LLM AGIs are directly aligned (don’t discard humanity themselves), they are not necessarily transitively aligned (don’t build something that discards both them and humanity). If they fail at not building inhuman world-optimizing agents as soon as they are able, there is going to be only a brief time when LLM AGIs are in charge, much shorter than 100 years.
Philosophical zombie takeover seems like a real possibility to me.
Huh? Anyway, I don’t believe in p-zombies, and for example think that human emotions expressed by sufficiently coherent LLM characters are as real as human emotions, because emotions exist as simulacra even when there are no concrete physical or cognitive-architectural correlates.
That’s a perfectly reasonable position to take.
Explain to me how a sufficiently powerful AI would fail to qualify as a p-zombie. The definition I understand for that term is “something that is externally indistinguishable from an entity that has experience, but internally has no experience”. While it is impossible to tell the difference empirically, we can know by following evolutionary lines: all future AIs are conceptually descended from computer systems that we know don’t have experience, whereas even the earliest things we ultimately evolved from almost certainly did have experience (I have no clue at what other point one would suppose it entered the picture). So either it should fit the definition or I don’t have the same definition as you.
Your statement about emotions, though, makes perfect sense from an outside view. For all practical purposes, we will have to navigate those emotions when dealing with those models exactly as we would with a person. So we might as well consider them equally legitimate; actually, it’d probably be a very poor idea not to, given the power these things will wield in the future. I wouldn’t want to be basilisked because I hurt Sydney’s feelings.
If you look far enough back in time, humans are descended from animals akin to sponges that seem to me like they couldn’t possibly have experience; they don’t even have neurons. If you go back even further, we’re the descendants of single-celled organisms that absolutely don’t have experience. But at some point along the line, animals developed the ability to have experience. If you believe in a higher being, then maybe it introduced it, or maybe some other metaphysical cause, but otherwise it seems like qualia have to arise spontaneously from the evolution of something that doesn’t have experience, possibly with some “half conscious” steps along the way.
From that point of view, I don’t see any problem with supposing that a future AI could have experience, even if current ones don’t. I think it’s reasonable to even suppose that current ones do, though their lack of persistent memory means that it’s very alien to our own, probably more like one of those “half conscious” steps.
“If you go back even further, we’re the descendants of single-celled organisms that absolutely don’t have experience.”
My disagreement is here. Anyone with a microscope can still look at single-celled organisms today. The ones that can move clearly demonstrate acting on intention in a recognizable way; they have survival instincts just like an insect or a mouse or a bird. It would be completely illogical not to generalize downward and assume that the ones that don’t move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originated with biology.
I find the notion of “half consciousness” irredeemably incoherent. Different levels of capacity, of course, but experience itself is a binary bit that has to be either 1 or 0.
If bacteria have experience, then I see no reason to say that a computer program doesn’t have experience. If you want to say that a bacterium has experience based on guesses from its actions, then why not say that a computer program has experience based on its words?
From a different angle, suppose that we have a computer program that can perfectly simulate a bacterium. Does that simulated bacterium have experience? I don’t see any reason why not, since it will demonstrate all the same ability to act on intention. And if so, then why couldn’t a different computer program also be conscious? (If you want to say that a computer can’t possibly perfectly simulate a bacterium, then great, we have a testable crux, albeit one that can’t be tested right now.)
AIUI, you’ve got the definition of a p-zombie wrong in a way that’s probably misleading you. Let me restate the above:
“something that is externally indistinguishable from an entity that experiences things, but internally does not actually experience things”
The whole p-zombie thing hinges on what it means to “experience something”, not whether or not something “has experience”.
I understand that definition, which is why I’m confused about why you brought up the behavior of bacteria as evidence that bacteria have experience. I don’t think any non-animals have experience, and I think many animals (like sponges) also don’t. As I see it, bacteria are more akin to natural chemical reactions than they are to humans.
I brought up the simulation of a bacterium because an atom-for-atom simulation of a bacterium is completely identical to a bacterium: the thing that has experience is represented in the atoms of the bacterium, so a perfect simulation of one must also internally experience things.