Strongly upvoted. A few comments:
I think of a human being as a process, rather than a stable entity. We begin as embryos, grow up, get old, and die. Each step of the process follows inevitably from the steps before. The way I see it, there’s no way an unchanging upload could possibly be human. An upload that does evolve would be even less so, given the environment it’s evolving in.
On a more practical level, the question of whether a software entity is identical to a person depends on your relationship to that person. Let’s take Eliezer Yudkowsky for example:
I personally have never met the guy, but I have read some of the stuff he wrote. If you told me that he’d been replaced with an LLM six months ago, I wouldn’t be able to prove you wrong or have much reason to care.
His friends and family would feel very differently, because they have deeper relationships with him, and many of the things they need from him cannot be delivered by an LLM.
To Eliezer himself, the chatbot would obviously not be him. Eliezer is himself; the chatbot is something else. Uniquely, Eliezer doesn’t have a demand for Eliezer’s services; he has a supply of those services that he attempts to find demand for (with considerable success so far). He might consider the chatbot a useful tool or an unbeatable competitor, but he definitely wouldn’t consider it himself.
To Eliezer’s bank, it’s a legal question. When the chatbot orders a new server, does Eliezer have to pay the bill? If it signs a contract, is Eliezer bound?
Does the answer change if there’s evidence that it was hacked? What sorts of evidence would be sufficient?
If asked, AI-liezer would claim to perceive itself as Eliezer. Whether it actually has qualia, and what those qualia are like, we will not know.