Thanks for the comment!—and for your patience.
So, the general answer to “Is there anyone who doesn’t know this?” is, in fact, “Yes.” But I can try to say a little bit more about why I thought this was worth writing.
I do think Less Wrong and /r/rational readers know that words don’t have intrinsic definitions. If someone wrote a story that just made the point, “Hey, words don’t have intrinsic definitions!”, I would probably downvote it.
But I think this piece is actually doing more work and exposing more details than that—I’m actually providing executable source code (!) that sketches how a simple sender–receiver game with a reinforcement-learning rule correlates a not-intrinsically-meaningful signal with the environment such that it can be construed as a meaningful word that could have a definition.
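For concreteness, here is a minimal sketch of the kind of setup I mean. (This is not the post’s actual code, just an illustrative toy in the same spirit: a two-state Lewis signaling game trained by Roth–Erev reinforcement, following Skyrms.)

```python
import random

# Two world states, two signals, two acts. The signal has no intrinsic
# meaning; reinforcement is what correlates it with the state it gets used for.
N_STATES = N_SIGNALS = N_ACTS = 2

# "Urn" weights: sender_weights[state][signal], receiver_weights[signal][act]
sender_weights = [[1.0] * N_SIGNALS for _ in range(N_STATES)]
receiver_weights = [[1.0] * N_ACTS for _ in range(N_SIGNALS)]


def weighted_choice(weights):
    """Draw an index with probability proportional to its weight."""
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return i
    return len(weights) - 1


def play_round():
    state = random.randrange(N_STATES)                # nature picks a state
    signal = weighted_choice(sender_weights[state])   # sender picks a signal
    act = weighted_choice(receiver_weights[signal])   # receiver picks an act
    if act == state:                                  # success: reinforce both choices
        sender_weights[state][signal] += 1.0
        receiver_weights[signal][act] += 1.0


if __name__ == "__main__":
    for _ in range(10_000):
        play_round()
    # After training, each state is strongly correlated with one signal—which
    # is the sense in which an arbitrary signal comes to "mean" a state.
    print("sender weights:", sender_weights)
    print("receiver weights:", receiver_weights)
```

Nothing about signal 0 or signal 1 means anything at the start; the meaning is entirely a product of which signaling convention the reinforcement dynamics happen to lock in.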
By analogy, explaining how the subjective sensation of “free will” might arise from a deterministic system that computes plans (without being able to predict what it will choose in advance of having computed it) is doing more work than the mere observation “Naïve free will can’t exist because physics is deterministic”.
So, I don’t think all this was already obvious to Less Wrong readers. If it was already obvious to you, then you should be commended. However, even if some form of these ideas was already well-known, I’m also a proponent of “writing a thousand roads to Rome”: part of how you get and maintain a community where “everybody knows” certain basic material is by many authors grappling with the ideas and putting their own ever-so-slightly-different pedagogical spin on them. It’s fundamentally okay for Yudkowsky’s account of free will, and Gary Drescher’s account (in Chapter 5 of Good and Real), and my story about writing a chess engine to all exist, even if they’re all basically “pointing at the same thing.”
Another possible motivation for writing a new presentation of an already well-known idea is that the new presentation might be better suited as a prerequisite or “building block” for more novel work in the future. In this case, some recent Less Wrong discussions have used a “four simulacrum levels” framework (loosely inspired by the work of Jean Baudrillard) to try to model how political forces alter the meaning of language, but I’m pretty unhappy with the “four levels” formulation: the fact that I could never remember the difference between “level 3” and “level 4” even after it was explained several times (Zvi’s latest post helped a little), and the contrast between the “linear progression” and “2x2” formulations, make me feel like we’re talking about a hodgepodge of different things and haphazardly shoving them into this “four levels” framework, rather than having a clean, deconfused concept to do serious thinking with. I’m optimistic about a formal analysis of sender–receiver games (following the work of Skyrms and others) being able to provide this. Now, I haven’t done that work yet, and maybe I won’t find anything interesting, but laying out the foundations for that potential future work was part of my motivation for this piece.
Fair enough—it’s probably good to have it in writing. But this seems to me like the sort of explanation that is “the only possible way it could conceivably work.” How could we bootstrap language learning if not for our existing, probably-inherent faculty for correlating classifiers over the environment? Once you say “I want to teach something the meaning of a word, but the only means I have to transmit information to them is to present them with situations and have them make inferences”… there almost isn’t anything to add to this. The question already seems to contain the only possible answer.
Maybe you need to have read Through the Looking Glass?