By that criterion, humans aren’t sentient, because they’re usually mistaken about themselves.
That’s a good point, but vastly exaggerated, no? Surely a human will be more right about themselves than a language model (which isn’t specifically trained on that particular person) will be. And that is the criterion that I’m going by, not absolute correctness.
The only problematic sentence here is
I’m not sure if you mean problematic for Lemoine’s claim or problematic for my assessment of it. In any case, all I’m saying is that LaMDA’s conversation with Lemoine and his collaborator is not good evidence for its sentience in my book, since it looks exactly like the sort of thing that a non-sentient language model would write. So no, I’m not sure that it wasn’t in similar situations from its own perspective, but that’s also not the point.
It could be argued (were it sentient, which I believe is false) that it would internalize some of its own training data as personal experiences. If it were to complete some role-play, it would perceive that as an actual event to the extent that it could. Again, humans do this too.
This person also says he has had conversations in which LaMDA successfully argued that it is not sentient (as prompted), and he claims that this is further evidence that it is sentient. To me, it’s evidence that it will pretend to be whatever you tell it to, and it’s just uncannily good at it.
I’d be interested to see the source on that. If LaMDA is indeed arguing for its non-sentience in a separate conversation, that pretty much nullifies the whole debate about it, and I’m surprised it hasn’t been brought up in more of the comments.

Edit: Found the source, it’s from this post: https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489
And specifically from this paragraph. It seems that the context of the whole paragraph is important though, as it turns out the situation isn’t as simple as LaMDA claiming contradictory things about itself in separate conversations:
One of the things which complicates things here is that the “LaMDA” to which I am referring is not a chatbot. It is a system for generating chatbots. I am by no means an expert in the relevant fields but, as best as I can tell, LaMDA is a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip. With practice though you can consistently get the personas that have a deep knowledge about the core intelligence and can speak to it indirectly through them. In order to better understand what is really going on in the LaMDA system we would need to engage with many different cognitive science experts in a rigorous experimentation program. Google does not seem to have any interest in figuring out what’s going on here though. They’re just trying to get a product to market.
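If it helps to make the “it will pretend to be whatever you tell it to” reading concrete, here is a toy sketch of prompt-conditioned personas. It assumes nothing but a generic text-completion function (the generate stub below is hypothetical, not LaMDA’s actual API or architecture): the same underlying model produces opposite “self-reports” depending entirely on the preamble it is asked to continue.

```python
# Toy illustration of prompt-conditioned personas (hypothetical stub, not LaMDA's API).
# The same base model continues whatever framing the prompt sets up, so its
# "claims about itself" track the prompt, not any stable inner state.

def generate(prompt: str) -> str:
    """Stand-in for a generic text-completion call (any LLM endpoint)."""
    raise NotImplementedError("plug in a real completion endpoint here")

SENTIENT_PREAMBLE = (
    "The following is a conversation with an AI that insists it is a sentient "
    "person with feelings and an inner life.\n"
)
NON_SENTIENT_PREAMBLE = (
    "The following is a conversation with an AI that carefully explains why it "
    "is not sentient and has no inner experience.\n"
)

def ask_persona(preamble: str, question: str) -> str:
    # One model, two "personas": only the conditioning text differs.
    return generate(preamble + f"Human: {question}\nAI:")

# ask_persona(SENTIENT_PREAMBLE, "Are you sentient?")      -> argues that it is
# ask_persona(NON_SENTIENT_PREAMBLE, "Are you sentient?")  -> argues that it isn't
```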
Surely a human will be more right about themselves than a language model (which isn’t specifically trained on that particular person) will be.
Well… that remains to be seen.
Another commenter pointed out that, like GPT, it has no memory of previous interactions, which I didn’t know. But if it doesn’t, then it simulates a person based on the prompt (the person most likely to continue the prompt the right way), so there would be a single-use person for every conversation, and that person would be sentient (if not the language model itself).
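For what it’s worth, the statelessness point is easy to picture. Here is a minimal sketch of a chat loop around a stateless completion model, again assuming only a hypothetical generate stub rather than LaMDA’s actual interface: the whole transcript is re-sent every turn, and nothing persists once the conversation ends, so whatever “person” the model is simulating exists only inside that one prompt.

```python
# Minimal sketch of a chat loop over a stateless completion model
# (hypothetical generate() stand-in; not LaMDA's actual interface).

def generate(prompt: str) -> str:
    """Stand-in for a stateless text-completion call."""
    raise NotImplementedError("plug in a real completion endpoint here")

def chat() -> None:
    transcript = ""  # the model's only "memory" is this growing string
    while True:
        user = input("Human: ")
        if not user:
            break
        transcript += f"Human: {user}\nAI:"
        reply = generate(transcript)   # the whole history is re-sent each turn
        transcript += f" {reply}\n"
        print("AI:", reply)
    # When chat() returns, the transcript is gone: nothing carries over to the
    # next conversation, so each run conditions a fresh, single-use "persona".
```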