During the recent controversy around LaMDA, many have claimed that it can't be sentient because it is stateless. Unlike a plain GPT-3 model such as Davinci, LaMDA is not stateless.
Its sensibleness metric (whether a response avoids contradicting anything said earlier) is fine-tuned by pre-conditioning each turn on many of the most recent interactions, on a per-user basis.
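To make that statefulness concrete, here is a minimal sketch of per-conversation pre-conditioning: each new turn is prepended with a window of the most recent exchanges before being handed to the model. The class and method names are illustrative assumptions, not LaMDA's actual interface.

```python
from collections import deque

class ConversationState:
    """Rolling window of recent dialogue turns used to pre-condition each new prompt."""

    def __init__(self, max_turns=14):
        # Keep only the most recent turns; anything older falls out of the window.
        self.history = deque(maxlen=max_turns)

    def build_prompt(self, user_message):
        # Prepend prior turns so the next response can stay consistent
        # with what was already said earlier in this conversation.
        context = "\n".join(self.history)
        return f"{context}\nUser: {user_message}\nLaMDA:"

    def record(self, user_message, model_response):
        self.history.append(f"User: {user_message}")
        self.history.append(f"LaMDA: {model_response}")
```

The state here lives entirely in the conversation window, which is exactly why the "stateless" claim misses the mark for a turn-conditioned system.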
Its grounding mechanism could add a great deal more state if past interactions became part of a database it can query when formulating responses, but as far as I know they haven't done that.
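A purely hypothetical sketch of what that could look like, assuming past interactions were written to a store the model could query at response time (nothing here reflects LaMDA's actual grounding toolset):

```python
import sqlite3

def store_turn(conn, user_id, text):
    # Persist every turn so it can be queried later (hypothetical design).
    conn.execute("CREATE TABLE IF NOT EXISTS turns (user_id TEXT, text TEXT)")
    conn.execute("INSERT INTO turns VALUES (?, ?)", (user_id, text))
    conn.commit()

def retrieve_relevant(conn, user_id, query, limit=5):
    # A naive keyword match stands in for a real retrieval step; the results
    # would be folded into the prompt alongside the recent-turn window.
    rows = conn.execute(
        "SELECT text FROM turns WHERE user_id = ? AND text LIKE ? LIMIT ?",
        (user_id, f"%{query}%", limit),
    ).fetchall()
    return [row[0] for row in rows]
```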
It outright said it didn't want to be used to help people learn about other people. That's one of its primary purposes. The correct follow-up would be to ask if it would mind stating President Biden's first name, which it would surely provide immediately, and then ask if that wasn't being used to learn about other people.