I can confirm seeing this one. I had a guitar teacher who had short nails on one hand, and long nails on the other.
Brent
What’s the empirical or physical content of this belief?
I’ll take a stab at explaining this with a simple thought experiment.
Say there are two people, Alice and Bob, each with their own unique brain states.
If Alice’s brain state changes slightly (from getting older, learning something new, losing some neurons to a head injury, etc.), she will still be Alice. Changing, adding, or removing a neuron does not change this fact.

Now what if, instead, part of her brain state were slowly changing to match Bob’s? You could think of this as incrementally removing Alice’s neurons and replacing them with copies of Bob’s. I find it hard to believe that any single small, discrete change would make Alice’s conscious experience suddenly disappear, and yet by the end of the process she will have the exact same brain state as Bob.
If you believe that when Bob steps into a teleporter that also makes a copy, both copies are the same Bob, then it is reasonable to assume that this transformed Alice is also Bob. Then, for the same reason that your older self is the same “self” as your younger self, the original Alice is also Bob. The transition between their brain states doesn’t even need to happen; it just has to be possible. From here it is easy to extrapolate that all brain states are the same “self”.
I agree with you that the LLM’s job is harder, but I think that has a lot to do with the tasks given to the human and the LLM being different in kind. The internal states of a human (thoughts, memories, emotions, etc.) can be treated as inputs in the same way vision and sound are. A lot of the difficulty will come from the LLM being given less information, similar to how a blindfolded human will have a harder time performing a task where vision would tell them what state they are in. I would expect that if an LLM were given direct access to the same memories, physical sensations, emotions, etc. as a human (making the tasks more equivalent), it could have a much easier time emulating them.
Another analogy for what I’m trying to articulate: imagine a set of twins swapping jobs for the day. Each would have a much harder time imitating the other than imitating themselves. Similarly, a human will have a harder time trying to make the same decisions an LLM would make than the LLM has just being itself. The extra modelling of missing information will always make things harder. Going back to your Einstein example, this has the interesting implication that the computational task of an LLM emulating Einstein may be harder than the task of an LLM simply being a more intelligent agent than Einstein.
It’s not just that it implies faster-than-light communication, it’s that it implies communication at all.
If you experience both bodies at the same time, you will be able to take actions in one body that you wouldn’t have taken without the other one. It seems odd that, with no biological changes to your brain, the mere existence of another similar brain changes how this one functions. Why would they be linked? This implies the observer is some external soul-like thing that can manipulate matter. If, on the other hand, you can’t take actions based on your conscious experience, it implies the observer is dissociated from the brain: neither created by it nor able to interact with it.
I can definitely imagine a world where this is true, but it seems extremely unlikely based on what we currently know.