Some relevant literature:
- Language is more abstract than you think, or, why aren’t languages more iconic?
- Meaning without reference in large language models
- Grounding the Vector Space of an Octopus: Word Meaning from Raw Text
- Understanding models understanding language
- Implications of the Convergence of Language and Vision Model Geometries
- Shared computational principles for language processing in humans and deep language models
I’ve looked at all of those articles, though I’ve read only the abstracts, not the full texts. It’s not obvious to me that they provide strong evidence against my view, which is more subtle than what you may have inferred from my post.
I fully accept that words can give meaning to, and can even be defined by, other words. That’s the point of the distinction between relationality and adhesion as aspects of semanticity. The importance of the relational aspect of semanticity is central to my thinking, and is something I argue in some detail in GPT-3: Waterloo or Rubicon? Here be Dragons. At the same time, I also believe that there is a significant set of words whose semanticity derives predominantly, perhaps even exclusively (I’ve not thought it through recently), from their adhesion to the physical world.
Without access to those adhesions, the whole linguistic edifice is cut off from the world, even though its relationality remains fully intact and functioning. That relationality is what LLMs are running on. That they do so well on that basis alone is remarkable. But it’s not everything.
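To make the relational point concrete, here is a minimal toy sketch of distributional semantics, in which a word’s “meaning” is built entirely from its co-occurrence with other words. The corpus, window size, and similarity measure are arbitrary illustrative choices of mine, not drawn from any of the papers above; but this is, in miniature, the kind of signal LLMs run on.

```python
# Toy illustration of purely relational word meaning: each word is
# represented only by counts of the words it co-occurs with. There is
# no grounding and no reference -- just word-to-word relations. The
# corpus and window size are arbitrary choices for illustration.
from collections import Counter
from math import sqrt

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the mouse . the dog chased the cat ."
).split()

def cooccurrence_vector(word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for i, token in enumerate(corpus):
        if token == word:
            lo = max(0, i - window)
            hi = min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" come out more similar to each other than "cat" is to
# "mat", purely because they occur in similar verbal contexts -- no
# contact with any actual cat required.
print(cosine(cooccurrence_vector("cat"), cooccurrence_vector("dog")))
print(cosine(cooccurrence_vector("cat"), cooccurrence_vector("mat")))
```

Scale that idea up by many orders of magnitude and you have, roughly, the relational machinery those papers are probing. What the toy makes vivid is that nothing in it ever touches the world.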
If you want to pursue this further, I’ll make you a deal: I’ll read three of those articles if you read two of mine.
I’m particularly interested in the last two, on the convergence of language and vision and on language processing in humans and LLMs; you pick the third. For my two, read the Dragons piece I’ve linked to and an (old) article by David Hays, On “Alienation”: An Essay in the Psycholinguistics of Science.