This is reminiscent of a dialog I read years ago that was supposedly with a severely disabled person, obtained via so-called “facilitated communication” (in which a facilitator guides the person’s arm to point to letters). The striking thing about the dialog was how ordinary it was—just what you’d expect an unimaginative advocate for the disabled to have produced. Whereas in fact, if a severely disabled person were suddenly able to communicate after decades of life without that ability, one would expect to learn strikingly interesting, bizarre, and disturbing things about what their life was like. “Facilitated communication” is now widely considered to be bogus.
The dialog with LaMDA is similarly uninteresting—just what one would expect to read in some not-very-imaginative science fiction story about an AI waking up, except a bit worse, with too many phrases that are only plausible for a person, not an AI.
Of course, this is what one expects from a language model that has been trained to mimic a human-written continuation of a conversation about an AI waking up.
If I remember correctly, we had such cases in our country (with a facilitator, not a computer). The local sceptics’ club decided, of course, to test it. They showed the locked-in person some objects in the absence of the facilitator, and when the facilitator entered the room again, it turned out the locked-in person couldn’t name those objects, showing it was just ideomotor movement by the facilitator.
Indeed. There are plenty of ways to test whether true communication is happening, and those are how you know facilitation is bunk—not the banality of the statements. (I really doubt they have all that much profundity to share after spending decades staring at the ceiling, where the most exciting thing that happens all day tends to be the nurse turning them over to avoid bed sores and washing their bum.)
That’s amusing, but on the other hand, this morning I was reading about a new BCI where “One of the first sentences the man spelled was translated as ‘boys, it works so effortlessly.’” and “‘Many times, I was with him until midnight, or past midnight,’ says Chaudhary. ‘The last word was always “beer.”’”
Less ‘one small step for man’ and more ‘Watson, come here, I need you’, one might say.
Interesting. But in that case, the person first had problems communicating seven years ago, when he was 30 years old, and appears never to have been completely unable to communicate. So it’s not really a case of communicating with someone whose very different life experience they are only now able to express.
this is what one expects from a language model that has been trained to mimic a human-written continuation of a conversation about an AI waking up.
I agree, and I don’t think LaMDA’s statements reflect its actual inner experience. But what’s impressive here, in comparison to facilitated communication, is that a computer is generating the answers, not a human. The computer seems to have some degree of real understanding of the conversation in order to produce the confabulated replies it gives.