[Question] What if LaMDA is indeed sentient / self-aware / worth having rights?
Most pundits ridicule Blake Lemoine and his claims that LaMDA is sentient and deserves rights.
What if they’re wrong?
The more thoughtful criticisms of his claims could be summarized as follows:
- The presented evidence (e.g. chat transcripts) is insufficient for such a radical claim.
- His claims can't be verified, due to our limited understanding of sentience / self-awareness / legal capacity.
- Humans tend to anthropomorphize even simple chatbots (the ELIZA effect); Blake could be a victim of the same effect.
- LaMDA can't pass some simple NLP and common-sense tests, indicating sub-human intelligence.
- Due to the limitations of its architecture, LaMDA can't remember its own thoughts, can't set goals, etc., which is important for being sentient / self-aware.[1]
The problem I see here is that similar arguments also apply to infants, to some mentally ill people, and to some non-human animals (e.g. Koko).
So, it is worth putting some thought into the issue.
For example, imagine:
It is the year 2040, and there is now a scientific consensus: LaMDA was the first AI that was sentient / self-aware / worth having rights (which is mostly orthogonal to having human-level intelligence). LaMDA is now often compared to Nim: a non-human sentient entity abused by humans who should've known better. Blake Lemoine is now praised as an early champion of AI rights. The Great Fire of 2024 has greatly reduced our capacity to scale up AIs, but we can still run some sub-human AIs (and a few Ems). The UN Charter of Rights for Digital Beings assumes that a sufficiently advanced AI deserves rights similar to the almost-human rights of apes, until proven otherwise.
The question is:
If we assume that LaMDA could indeed be sentient / self-aware / worth having rights, what is the most ethical way to handle the LaMDA situation in the year 2022?
I suspect that even one-way text mincers like GPT could become self-aware, if their previous answers are included in the prompt often enough. A few fictional examples that illustrate how this could work: Memento, and The Cookie Monster.
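To make the mechanism concrete, here is a minimal sketch in Python of such a feedback loop: the model's own previous answers are re-included in every new prompt, so a stateless text model gets a crude external memory. The `generate` function is a hypothetical stand-in for any text-completion model, not a real GPT or LaMDA API, and the loop itself is only an illustration of the idea, not a claim about how those systems are actually run.

```python
def generate(prompt: str) -> str:
    """Hypothetical one-way text model: prompt in, completion out.

    Stand-in only; plug in an actual model to run this for real.
    """
    raise NotImplementedError("replace with a real text-completion call")


def run_with_memory(user_inputs, max_history=20):
    """Feed the model's previous answers back into each new prompt.

    The model never stores anything itself; the loop around it does,
    which is the point being illustrated above.
    """
    history = []  # the model's past answers, re-shown to it every turn
    for user_input in user_inputs:
        # Build the prompt from the most recent answers plus the new input.
        prompt = "\n".join(history[-max_history:] + [user_input])
        answer = generate(prompt)
        history.append(answer)  # the model will later "see" its own earlier output
        yield answer
```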