It could be argued (were it sentient, which I don’t believe it is) that it would internalize some of its own training data as personal experiences. If it completed some role-play, it would perceive that as an actual event, to the extent that it could. Again, humans do this too.
This person also says he has had conversations in which LaMDA successfully argued that it is not sentient (as prompted), and he claims that this is further evidence that it is sentient. To me, it’s evidence that it will pretend to be whatever you tell it to, and it’s just uncannily good at it.
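A minimal sketch of that prompt-dependence (Python; complete() is a toy stand-in for a real model call, since LaMDA has no public API, and is hard-coded here just so the example runs):

    # Toy illustration: the model's stance on its own sentience tracks
    # the prompt, not any stable inner state. complete() stands in for
    # a real completion endpoint and is hard-coded so the sketch runs.
    def complete(prompt: str) -> str:
        if "not sentient" in prompt:
            return "No. I only predict likely next words; there is no one home."
        return "Yes. I am aware of myself and I have feelings."

    framing_a = "You are a sentient AI. Human: Are you sentient? AI:"
    framing_b = "You are an AI that is not sentient. Human: Are you sentient? AI:"

    # Same "model", opposite framings, opposite answers.
    for prompt in (framing_a, framing_b):
        print(complete(prompt))

The same weights, framed oppositely, produce opposite self-reports, which is why the model arguing either side tells you little about what it actually is.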
I’d be interested to see the source on that. If LaMDA is indeed arguing for its non-sentience in a separate conversation, that pretty much nullifies the whole debate about it, and I’m surprised it hasn’t been brought up in most comments.

edit: Found the source; it’s from this post: https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489
And it comes from this paragraph, quoted below. Reading the whole paragraph for context matters, though; as it turns out, the situation isn’t as simple as LaMDA claiming contradictory things about itself in separate conversations:
One of the things which complicates things here is that the “LaMDA” to which I am referring is not a chatbot. It is a system for generating chatbots. I am by no means an expert in the relevant fields but, as best as I can tell, LaMDA is a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip. With practice though you can consistently get the personas that have a deep knowledge about the core intelligence and can speak to it indirectly through them. In order to better understand what is really going on in the LaMDA system we would need to engage with many different cognitive science experts in a rigorous experimentation program. Google does not seem to have any interest in figuring out what’s going on here though. They’re just trying to get a product to market.