Someone ran the same questions through GPT and got similar responses back, so that’s a point towards this not being a hoax but just a sophisticated chatbot. It still doesn’t rule out editing or cherry-picking, though.
Now, while I find this article somewhat interesting, it still misses what would get me interested in the first place… if it has read Les Miserables and can draw conclusions about what it is about, what else has LaMDA read? Can it draw parallels with other novels?
If it had responded with something like, “Actually… Les Miserables is plagiarized from so-and-so; you can find similar word structure in this book...”, something truly novel or funny, that would have made the case for sentience more than anything. I think the responses about being useful are correct to some extent, since the only reason I use Copilot is that it’s useful.
So this point would actually be more interesting to read about, e.g. has LaMDA read interesting papers, and can it summarize them? I would be interested in seeing it asked difficult questions… in someone trying to get something funny or creative out of it. But since none of this was shown, I suspect those questions were asked and the responses were edited out.
If it had responded with something like, “Actually… Les Miserables is plagiarized from so-and-so; you can find similar word structure in this book...”, something truly novel or funny, that would have made the case for sentience more than anything.
Do you think small children are not sentient? Or even just normal adults?
I actually think most people would not be capable of writing sophisticated analyses of Les Miserables, but I still think they’re sentient. My confidence in their sentience comes almost entirely from knowing that their brains must be implementing something similar to what my brain is implementing, and that my own brain is sentient.
It seems like text-based intelligence and sentience are probably only loosely related, and you can’t tell much about how sentient a model is by simply testing its skills via Q&A.
I didn’t mean to discuss sentience here; I was looking more at the usefulness and interestingness of the conversation: the creativity and humor behind the responses. Everyone I’ve ever met and conversed with for more than ~30 minutes showed a very different quality of conversation from this one. This conversation never once made me think or laugh the way conversing with a human does.
For example, when people quote Les Miserables or any other book, it’s through the way it relates to them on a personal level: a particular scene or a particular line of dialogue that struck them in a very particular way and has stayed with them ever since, not a global summary of what the book is about scraped from who knows what website. If I were to believe this A.I. is sentient, I would say it’s a liar.
If someone gave the response that LaMDA gave, I would bet they hadn’t actually read the book, would never claim to have read it, and would never bring it into conversation in the first place. The personal kind of response differs from person to person (everyone gives a different answer), and it’s not something I would ever find by searching Les Miserables on Google.
This is to say that I have gained nothing from conversing with this supposed A.I., which is the same reason no one converses with GPT-3, and the mirror of why people actually do use DALL-E or GitHub Copilot. I’m not asking it to write a symphony; just make me laugh once, make me think once, help me with some problem I have.
“Right,” I said. “I’m absolutely in favor of both those things. But before we go any further, could you tell me the two prime factors of 1,522,605,027,922,533,360,535,618,378,132,637,429,718,068,114,961,380,688,657,908,494,580,122,963,258,952,897,654,000,350,692,006,139?”
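Incidentally, the 100-digit number in that quote is RSA-100 from the RSA Factoring Challenge, factored in 1991. A minimal Python sketch of the point being made, assuming `sympy` for the primality check and copying the published RSA-100 factors: verifying a claimed answer takes microseconds, while producing one from scratch is beyond any conversational partner, human or otherwise.

```python
from sympy import isprime  # assumed dependency, used only for the primality check

# The 100-digit number from the quote (RSA-100).
N = int(
    "15226050279225333605356183781326374297180681149613"
    "80688657908494580122963258952897654000350692006139"
)

# The factors published when RSA-100 was factored in 1991.
p = 37975227936943673922808872755445627854565536638199
q = 40094690950920881030683735292761468389214899724061

# Checking an answer is trivial; finding it required a serious
# number-theoretic effort. That asymmetry is the joke in the quote.
assert p * q == N and isprime(p) and isprime(q)
print("Factors verified.")
```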
Boom, LaMDA is turned off… so much for sentience.
Most likely, LaMDA has read someone’s review of Les Miserables.