I roughly agree. As I mentioned to Adele, I think you could get sort of lame edge cases where the LLM kinda helped find a new concept. The thing that would make me think the end is substantially nigher is if you get a model that’s making new concepts of comparable quality at a comparable rate to a human scientist in a domain in need of concepts.
if you nail some Chris Olah style transparency work
Yeah that seems right. I’m not sure what you mean by “about language”. Sorta plausibly you could learn a little something new about some non-language domain that the LLM has seen a bunch of data about, if you got interpretability going pretty well. In other words, I would guess that LLMs already do lots of interesting compression in a different way than humans do it, and maybe you could extract some of that. My quasi-prediction would be that those concepts
1. are created using way more data than humans use for many of their important concepts; and
2. are weirdly flat, and aren’t suitable out of the box for a big swath of the things that human concepts are suitable for.