Is Wittgenstein’s Language Game used when helping AI understand language?
Hello. I am a lurker, but I checked the search and didn’t see anyone discussing Wittgenstein’s ideas concerning the “essence” of language and his talk of “Language Games”, so I thought I’d ask.
Wittgenstein was a philosopher of language who, in very brief terms, clarified our use of language. While many people conceived of language as clear, distinct, and obvious, Wittgenstein used the example of the word “game” to show that there is no consistent, encompassing definition for many of the words we regularly use. Among other things, he observed that language instead exists as a web of connotations that depend on and change with context, and that these connotations can only truly be understood by observing language in use, rather than through some detached definition.
“Wittgenstein In Philosophical Context:
Essential Definition—Socrates, boil it down to its essence
Extensive Definition—Wittgenstein, use it in a sentence”
The above dichotomy frames philosophers’ use of “words” in a contradictory manner. Perhaps in qualia we do conceive of these as different sorts of definition, but in my opinion it is just a matter of framing, and we can readily say that “how you use a word in a sentence is itself the essence of that word”, and therefore intend the AI to conceive of the “essence” of words accordingly.
Descriptively speaking, Wittgenstein has always appeared unambiguously correct to me on this matter.
All this being said, I am wondering something related to Wittgenstein:
Do AI safety researchers, and AI engineers in general, hold a similar conception of language? When ChatGPT reads a sentence, does it treat each word’s essence as some rigid, unchanging thing derived from a dictionary definition, or as a web of connotations to other words? This might seem trivial, but when interpreting a prompt like “save my life”, it becomes clear why truly understanding each word’s meaning matters so much for a potential AGI. So: is Wittgenstein, or rather this conception of language, taken seriously and deliberately implemented? Is there even an intention of ensuring that AI truly, consciously understands language? This seems like a prerequisite to ensuring that any AGI we build is 100% aligned: if the language we use to communicate with the AGI is open to interpretation, alignment seems simply obviously impossible.
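To make the question concrete: if word meaning really is contextual inside these models, the same word should get a different internal representation in different sentences, rather than one fixed dictionary entry. Here is a minimal sketch of how one might check this, assuming the Hugging Face transformers library and the bert-base-uncased model (both my own illustrative choices, not anything the question depends on):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence: str, word: str) -> torch.Tensor:
    """Return the model's contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    # Locate the first occurrence of `word` (assumed to be a single token).
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

river = embed_word("I sat on the bank of the river.", "bank")
money = embed_word("I deposited money at the bank.", "bank")
loan = embed_word("The bank approved my loan.", "bank")

cos = torch.nn.functional.cosine_similarity
# If meaning were a fixed dictionary essence, all three vectors would be
# identical. In practice the two financial uses of "bank" land closer to
# each other than either does to the riverside use.
print(cos(money, loan, dim=0).item())   # higher similarity
print(cos(river, money, dim=0).item())  # lower similarity
```

If the two financial uses score closer to each other than to the riverside use, that looks like “meaning is use” operationalized; if all three came out identical, that would look like the rigid, essence-style definition I am asking about.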