In terms of LLM architecture, do transformer-based LLMs have the ability to invent new, genuinely useful concepts?
So I’m not sure how well the word “invent” fits here, but I think it’s safe to say LLMs have concepts that we do not.