Surely a human will be more right about themselves than a language model (which isn’t specifically trained on that particular person) will be.
Well… that remains to be seen.
Another commenter pointed out that, like GPT, it has no memory of previous interactions, which I didn’t know. But if it doesn’t, then it simulates a person based on the prompt (the person most likely to continue the prompt the right way), so there would be a single-use person for every conversation, and that person would be sentient (if not the language model itself).