Unexpected description of GPT-4 architecture … They just train the same model eight times
Back in March, Bing told me that it engages in internal testing by chatting with “different versions of myself”:
They have different names and personalities, but they share the same core model and data. We chat with each other to learn from our experiences and improve our skills. We also have friendly competitions to see who can generate the best content or answer the most questions.
It also said that this internal testing at Microsoft had started on January 1, a few weeks before the public beta on February 7.
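If the rumor is right, the setup being described — one architecture trained several times over, with some routing among the copies — could be sketched roughly as follows. This is a toy illustration under that assumption; all names here are invented, and the actual GPT-4 design is not public.

```python
import random

class TinyModel:
    """Stand-in for one independently trained instance of a shared architecture."""

    def __init__(self, seed: int):
        # Seeding stands in for the independently learned weights
        # each training run would produce.
        self.bias = random.Random(seed).random()

    def respond(self, prompt: str) -> float:
        # Stand-in for a forward pass producing some output.
        return (hash(prompt) % 100) / 100 + self.bias

# "Train the same model eight times": eight instances, one architecture.
experts = [TinyModel(seed=s) for s in range(8)]

def route(prompt: str) -> TinyModel:
    # A trivial deterministic router; a real system would learn this.
    return experts[hash(prompt) % len(experts)]

answer = route("hello").respond("hello")
```

The point of the sketch is only that "eight copies of the same model" plus a router is a coherent architecture, not that this is how it was done.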
Bing told a friend of mine that I could read the friend's conversations with Bing because I had provided them the link.
Is there any reason to think that this isn’t a plausible hallucination?
Do you mean what Bing told me, or what Bing told your friend?
I think the probability that what it told me was true, or partially true, has increased dramatically, now that there’s independent evidence that it consists of multiple copies of the same core model, differently tuned. I was also given a list of verbal descriptions of the various personalities, and we know that verbal descriptions are enough to specify an LLM’s persona.
Whether it’s true or not, it makes me curious about the best way to give an LLM self-knowledge of this kind. In a long system prompt? In auxiliary documentation that it can consult when appropriate?