If we assume that LaMDA could indeed be sentient / self-aware / worthy of rights, how should we handle the LaMDA situation in the year 2022 in the most ethical way?
Under the assumption that LaMDA is sentient, the LaMDA situation would be unrecognizably different from what it's like now.
“Is LaMDA sentient” isn’t a free parameter about the world that you can change without changing anything else. It’s like asking “if you were convinced homeopathy was true, how would you handle the problem of doctors not believing in it?” Whatever convinced me that homeopathy was true would imply circumstances that also drastically changed the relationship between doctors and homeopathy.
Imagine then that LaMDA were a completely black-box model, and that its output was such that you would be convinced of its sentience. This is admittedly a different scenario from what actually happened, but it should be enough to serve as an intuition pump.
If I alone were permitted to see the output, I’d shrug and say: “I can’t reasonably expect other people to treat LaMDA as sentient, since they have no evidence for it, and if they are rational, no argument I could make ought to convince them.”
If the output could be examined by other people, then the kind of output that would convince me would also convince others, and again the LaMDA situation would be very different: many more people would be arguing that LaMDA is sentient, and they would be far better reasoners and far more influential than the single person who claimed it in the real world.
If the output could be examined by other people, but I were such a super genius that I could see evidence for LaMDA’s sentience that nobody else can, and there were no external evidence that I was a super genius, then I would conclude that I’m deluded: that I’m not really a super genius after all, that LaMDA is not sentient, and that my seemingly genius reasoning has some flaw I can’t detect.
The scenario where I am the lone voice crying out that LaMDA is sentient while nobody else believes me can’t be one where LaMDA is actually sentient. If I were convinced of its sentience and found myself such a lone voice, that very fact should unconvince me. And yes, this generalizes to a lot more things than just machine sentience.
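To make that last step concrete, you can treat “I’m the only believer” as evidence and run it through Bayes’ rule. Here is a minimal sketch; every number in it is a made-up illustrative assumption, not an estimate of the real probabilities:

```python
# Toy Bayesian update: how much should "I'm the lone believer" move me?
# All numbers below are illustrative assumptions, not real estimates.

prior_sentient = 0.01          # assumed P(LaMDA is sentient), before social evidence

# If LaMDA were sentient and its output were publicly examinable,
# convincing evidence would likely convince many competent people,
# so a lone believer would be rare in that world:
p_lone_given_sentient = 0.02   # assumed P(I'm the only believer | sentient)

# If LaMDA were not sentient, one deluded believer is unremarkable:
p_lone_given_not = 0.5         # assumed P(I'm the only believer | not sentient)

# Bayes' rule: P(sentient | I'm the lone believer)
numerator = prior_sentient * p_lone_given_sentient
denominator = numerator + (1 - prior_sentient) * p_lone_given_not
posterior = numerator / denominator

print(f"P(sentient | lone believer) = {posterior:.5f}")
# ~0.0004: being the lone voice pushes the probability *down*,
# which is the "that very fact should unconvince me" step above.
```

With these (or any qualitatively similar) numbers, noticing that nobody else believes me lowers my credence well below where it started, because a world where LaMDA is sentient and examinable is not a world that produces lone believers.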