Claude learns across different chats. What does this mean?
I was asking Claude 3 Sonnet “what is a PPU” in the context of this thread. For that purpose, I pasted part of the thread.
Claude automatically assumed that OA meant Anthropic (instead of OpenAI), which was surprising.
I opened a new chat, copying the exact same text, but with OA replaced by GDM. Even then, Claude assumed GDM meant Anthropic (instead of Google DeepMind).
This seemed like interesting behavior, so I started toying around (in new chats) with more tweaks to the prompt to check how robust it was. But from then on Claude always correctly assumed OA was OpenAI, and GDM was Google DeepMind.
In fact, even when copying the exact same original prompt (which had elicited Claude to take OA to be Anthropic) into a new chat, the mistake no longer happened: neither when I went for many retries, nor when I tried the same thing in many different new chats.
Does this mean Claude somehow learns across different chats (inside the same user account)? If so, this might not happen through a process as naive as “append previous chats as the start of the prompt, with a certain indicator that they are different”, but instead some more effective distillation of the important information from those chats. Do we have any information on whether and how this happens?
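To make the naive version concrete, what I have in mind is something like the sketch below. This is purely illustrative of the hypothesized mechanism (the function name and delimiters are made up for the example), not anything Anthropic is known to do.

```python
# Purely illustrative sketch of the *hypothesized* naive mechanism:
# prepend earlier conversations to the new chat's context, with an
# explicit delimiter marking them as separate chats.
def build_context_with_history(previous_chats: list[str], new_message: str) -> str:
    parts = []
    for i, chat in enumerate(previous_chats, start=1):
        parts.append(f"[Previous conversation {i}]\n{chat}")
    parts.append(f"[Current conversation]\n{new_message}")
    return "\n\n".join(parts)
```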
(A different hypothesis is not that the later queries had access to the information from the previous ones, but rather that they were for some reason “more intelligent” and were able to catch on to the real meanings of OA and GDM, whereas the previous queries were not. This seems way less likely.)
I’ve checked for cross-chat memory explicitly (telling it to remember some information in one chat, and asking about it in another), and it acts as if it doesn’t have it. Claude also explicitly states it doesn’t have cross-chat memory, when asked about it. Might something be going on like “it does have some cross-chat memory, but it’s told not to acknowledge this fact, and sometimes slips”?
Probably more nuanced experiments are in order. Although note that this might only happen in the chat webapp, and not through other ways of accessing the API.
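For the API side of that check, a minimal sketch could look like the following (assuming the `anthropic` Python SDK and a Claude 3 Sonnet model id; the prompt placeholder stands in for the pasted thread excerpt). Each request is a fully independent, stateless call, so any drift in how “OA” is interpreted there could not come from shared conversation state.

```python
# Minimal sketch: send the same prompt in several independent API calls
# (no shared conversation state) and compare how "OA" is interpreted.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = "..."  # the pasted thread excerpt, ending with the "what is a PPU?" question

for trial in range(10):
    response = client.messages.create(
        model="claude-3-sonnet-20240229",  # assumed model id for Claude 3 Sonnet
        max_tokens=300,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- trial {trial} ---")
    print(response.content[0].text)
```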