I would like to propose calling them “language models” rather than “LLMs”. Almost nobody is studying small language models any longer, so the “large” is redundant, and acronyms are disorienting to folks who don’t know them.
But it’s not just language any longer either, with image inputs, etc. All else equal, I’d prefer a name that emphasized how little we understand how they work (“model” seems to me to connote the opposite), but I don’t have any great suggestions.
Tristan Harris called them “generative large language multimodal models (GLLMMs)”. So “golems”. Gollums?
My first guess is that I would prefer just calling them “Multimodals”. Or perhaps “Image/Text Multimodals”.