[Not entirely sure I read your comment the way you meant]
I guess we must strictly distinguish between what we might call “Functional awareness” and “Emotional awareness” in the sense of “Sentience”.
In this sense, I’d say: let’s give future chatbots more memory of the past and, in that respect, more “awareness”. But the most immediate thing this buys them is more “Functional awareness” — they can now take their own past conversations into account as well. If, beyond that, their simple mathematical/statistical structure remains roughly as it is, then for the many people who currently deny LaMDA sentience, there is no immediate reason to believe the new, memory-enhanced bot is sentient either. But yes, it might seem much more like it when we interact with it.