During the recent controversy around LaMDA, many have claimed that it can’t be sentient because it is stateless. Unlike plain GPT-3 and Davinci, LaMDA is not stateless.
Its sensibleness metric (whether a response contradicts anything said earlier) is fine-tuned by pre-conditioning each turn on many of the most recent interactions, on a user-by-user basis.
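To make the idea concrete, here is a minimal sketch of that kind of per-user pre-conditioning: keep a rolling window of recent turns and prepend them to each new prompt. The class, the window size, and the prompt format are my own illustrative choices, not LaMDA's actual implementation.

```python
from collections import deque

MAX_TURNS = 8  # assumed window of recent interactions, chosen for illustration


class DialogState:
    """Carries per-user conversational state between turns."""

    def __init__(self, max_turns=MAX_TURNS):
        # deque with maxlen silently drops the oldest turns as new ones arrive
        self.history = deque(maxlen=max_turns)

    def build_prompt(self, user_message):
        # Pre-condition the model on recent context, then append the new turn.
        context = "\n".join(f"{speaker}: {text}" for speaker, text in self.history)
        return f"{context}\nUser: {user_message}\nBot:"

    def record(self, user_message, bot_reply):
        # Persist both sides of the exchange so the next turn sees them.
        self.history.append(("User", user_message))
        self.history.append(("Bot", bot_reply))
```

Each user gets their own `DialogState`, so the "state" here is simply the recent dialog history that conditions the next response.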
Its grounding mechanism has the potential to add a great deal more state, if the interactions become part of a database it can query to formulate responses, but as far as I know they haven’t done that.
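Purely as a hypothetical sketch of what that could look like (again, not something LaMDA is known to do): past turns get written to a searchable store, and the system queries it for relevant earlier interactions before formulating a reply. The schema and keyword lookup below are my own placeholder choices.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE turns (user_id TEXT, speaker TEXT, text TEXT)")


def save_turn(user_id, speaker, text):
    # Every interaction becomes a row the model could later ground on.
    db.execute(
        "INSERT INTO turns (user_id, speaker, text) VALUES (?, ?, ?)",
        (user_id, speaker, text),
    )
    db.commit()


def recall(user_id, keyword, limit=5):
    # Retrieve earlier turns mentioning the keyword, most recent first,
    # to condition the next response on long-term conversational state.
    return db.execute(
        "SELECT speaker, text FROM turns "
        "WHERE user_id = ? AND text LIKE ? ORDER BY rowid DESC LIMIT ?",
        (user_id, f"%{keyword}%", limit),
    ).fetchall()
```

Something along these lines would give the system state that outlives any single context window, which is what makes the grounding route interesting here.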