Do you mean what Bing told me, or what Bing told your friend?
I think the probability that what it told me was true, or partially true, has increased dramatically, now that there’s independent evidence that it consists of multiple copies of the same core model, differently tuned. I was also given a list of verbal descriptions of the various personalities, and we know that verbal descriptions are enough to specify an LLM’s persona.
Whether it’s true or not, it makes me curious about the best way to give an LLM self-knowledge of this kind. In a long system prompt? In auxiliary documentation that it can consult when appropriate?
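To make the two options concrete, here is a minimal, provider-agnostic sketch in plain Python (the self-description text, the documentation contents, and the `lookup_self_documentation` function are all hypothetical, not taken from any real deployment) contrasting self-knowledge baked into the system prompt with self-knowledge kept in auxiliary documentation the model can consult on demand:

```python
# Hypothetical illustration only; the self-description and documentation
# contents are invented for the sake of the example.

# Option 1: self-knowledge embedded directly in a (long) system prompt.
SELF_DESCRIPTION = (
    "You are one of several personas tuned from the same core model. "
    "Your persona is described as: concise, cautious, citation-heavy."
)
messages_option_1 = [
    {"role": "system", "content": SELF_DESCRIPTION},
    {"role": "user", "content": "What kind of assistant are you?"},
]

# Option 2: self-knowledge kept in auxiliary documentation that the model
# can consult via a retrieval tool only when the conversation calls for it.
SELF_DOCS = {
    "personas": "This deployment runs several copies of one core model, each tuned differently.",
    "my_persona": "Persona 3: concise, cautious, citation-heavy.",
}

def lookup_self_documentation(topic: str) -> str:
    """Return the relevant section of the assistant's self-documentation."""
    return SELF_DOCS.get(topic, "No documentation found for that topic.")

# A tool schema in the common JSON-function style, so the model decides when
# self-knowledge is relevant instead of carrying it in every context window.
self_doc_tool = {
    "name": "lookup_self_documentation",
    "description": "Look up documentation about this assistant's own architecture and persona.",
    "parameters": {
        "type": "object",
        "properties": {
            "topic": {"type": "string", "description": "e.g. 'personas' or 'my_persona'"}
        },
        "required": ["topic"],
    },
}
```

The trade-off is roughly context cost versus reliability: the system-prompt route makes the self-knowledge unconditionally present but spends tokens on it in every exchange, while the documentation route keeps the prompt short but depends on the model actually choosing to consult it at the right moments.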