Whatever one means by “memorize” is by no means self-evident. If you prompt ChatGPT with “To be, or not to be,” it will return the whole soliloquy. Sometimes. Other times it will give you an opening chunk and then an explanation that that’s the well-known soliloquy, etc. By poking around I discovered that I could elicit the soliloquy by giving it prompts consisting of syntactically coherent phrases, but if I gave it prompts that were not syntactically coherent, it didn’t recognize the source, that is, until a bit more prompting. I’ve never found the idea that LLMs were just memorizing to be very plausible.
In any event, here’s a set of experiments explicitly aimed at memorization, including the Hamlet soliloquy material: https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts_Version_3