“You think you have done ok, but word meanings are a giant tragedy of the commons. You might have done untold damage. We know that interesting concepts are endlessly watered down by exaggerators and attention seekers choosing incrementally wider categories at every ambiguity. That kind of thing might be going on all over the place. Maybe we just don’t know what words could be, if we were trying to do them well, instead of everyone being out to advance their own utterances.”
“You know, you’re speaking as if I’m contributing to the tragedy of the commons, while you are the one who is avoiding it. But you’re the one who doesn’t think word-meaning is serious enough to elevate beyond an arbitrary choice, whereas I was the one concerned with the real meaning of words. Doesn’t your casual stance invite the greater risk of tragedy? Isn’t my attempt to cooperate with a larger group the sort of thing which avoids tragedy?”
“I’m far from indifferent, or casual! Denying that there is one correct definition of a word does not make language arbitrary, or unimportant.”
“Yes, I get that… and since I didn’t explicitly say it before: I concede that there is no fundamental reason we have to stick to common usage, and furthermore, if you’re trying to figure out what common usage is in order to decide whether to agree with some point in a discussion, you’re probably going down a wrong track. But, look. That doesn’t mean you’re allowed to make a word mean anything you want.”
“I literally am. There are no word police.”
“… yeah … but, look. According to my schoolbooks, at least, biologists define ‘life’ in a way which excludes viruses, right? Because they don’t have ‘cells’, and there’s some doctrine about life consisting of cells. And that’s crazy, right? All the big, important intuitions about biology apply to viruses. They’re clearly a form of life, because they reproduce and evolve, just like life. If you’re going to go around with a narrow concept of ‘life’ which excludes viruses, you are missing something. You’re not just going to be using language in a way I find disagreeable. Your mental heuristics are going to reach poorer conclusions, because you don’t apply them broadly enough. Unless you have some secondary concept, ‘pseudo-life’, which plays the role in your ontology which ‘life’ plays in mine. In which case it is just a translation issue.”
“A virus doesn’t have any metabolism, though. That’s pretty important to a lot of biology!”
“… Fine, but that still supports my point that definitions are important, and can be wrong!”
“Hm. I think we both agree that definitions can be good and bad. But, what would make one wrong?”
“It’s the same thing that makes anything wrong. Bad definitions lead to low predictive accuracy. If you use worse definitions, you’re going to tend to lose bets against people who use better definitions, all else being equal.”
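(To make the betting claim concrete, here is a minimal sketch in Python, with made-up numbers: an agent whose ‘life’ concept covers viruses can apply evolution-based heuristics to a novel replicator, while an agent whose concept excludes them is left guessing; under a log scoring rule, the second agent loses bets on average.)

```python
import math

# Minimal sketch with made-up numbers: two agents bet on whether a newly
# discovered replicator will evolve drug resistance. Agent A's 'life'
# concept includes viruses, so A applies evolution heuristics to it;
# Agent B's concept excludes viruses, leaving B with no informed prior.
p_true    = 0.9   # hypothetical true frequency of resistance evolving
p_agent_a = 0.85  # A's prediction, from generalizing 'life' heuristics
p_agent_b = 0.5   # B's prediction: a coin flip, since no category applies

def expected_log_score(p_truth, p_pred):
    """Expected log score (higher is better) for predicting p_pred
    on an event whose true frequency is p_truth."""
    return p_truth * math.log(p_pred) + (1 - p_truth) * math.log(1 - p_pred)

print(expected_log_score(p_true, p_agent_a))  # ~ -0.34
print(expected_log_score(p_true, p_agent_b))  # ~ -0.69: B loses on average
```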
“Hmm. I’m pretty on board with the Bayesian thing, but this seems somehow different. I have an intuition that which definitions you use shouldn’t matter, at all, to how you predict.”
“That seems patently false in practice.”
“Sure, but… the Bayesian ideal of rationality is an agent with unlimited processing power. It can trivially translate things from one definition to another. The words are just a tool it uses to describe its beliefs. Hence, definitions may influence efficiency of communication, but they shouldn’t influence the quality of the beliefs themselves.”
“I think I see the problem here. You’re imagining just speaking with the definitions. I’m imagining thinking in those terms. I think we’d be on the same page if I thought speaking were the only concern. In any conversation, the ideal is to communicate efficiently and accurately, in the language as it’s understood by the listeners. There’s a question of who should adjust, and how, when participants in a conversation have differing definitions, or don’t know whether they have the same ones. But setting that aside…”
“Sure, I think that’s what I’ve been trying to say!”
“But there’s another way to think about language. As you know, prediction is compression in orthodox Bayesianism. So a belief distribution can be thought of as a language, and vice versa—we can translate between the two, using coding schemes. So, in that sense, our internal language just is our beliefs, and by definition has to do with how we make predictions.”
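(A minimal sketch of that identity, again in Python with hypothetical numbers: a Shannon-optimal code built from a belief distribution spends fewer bits on outcomes it considers likely, so the expected message length is the cross-entropy between reality and belief, and sharper beliefs compress the same observations into shorter messages.)

```python
import math

# Minimal sketch of 'prediction is compression' (hypothetical numbers):
# an optimal code built from beliefs q spends -log2(q[x]) bits on outcome
# x, so the expected message length is the cross-entropy H(p, q), which
# is smallest exactly when the beliefs q match the true distribution p.
p      = {'sunny': 0.7, 'rain': 0.2, 'snow': 0.1}    # true frequencies
q_good = {'sunny': 0.6, 'rain': 0.3, 'snow': 0.1}    # beliefs close to p
q_bad  = {'sunny': 0.34, 'rain': 0.33, 'snow': 0.33} # vague beliefs

def expected_bits(p_true, q_beliefs):
    """Cross-entropy: average bits per outcome when encoding draws from
    p_true using a code derived from q_beliefs."""
    return -sum(p_true[x] * math.log2(q_beliefs[x]) for x in p_true)

print(expected_bits(p, q_good))  # ~1.20 bits: better beliefs, shorter code
print(expected_bits(p, q_bad))   # ~1.57 bits
```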
“Sure, ok, but that still doesn’t need to have much to do with how we talk about things; we can use different languages internally and externally. Like you said, the ideal is to translate our thoughts into the language our listeners can best understand.”
“Yes, BUT, that’s a more cool and collected way of relating to others—dare I say cold and isolated, as per my earlier line of thinking. It’s a bit like turning in a math assignment without showing any work. You can’t bare your soul to everyone, but among trusted friends, you want to talk about how you actually think, pre-translation, because if you do that, you might actually stand a chance of improving how you think.”
“I don’t think we can literally convey how we think—it would be a big mess of neural activations. We’re doomed to speak in translation.”
“Ok, point conceded. But there are degrees. I guess what I’m trying to say is that it seems important to the workings of my internal ontology that ‘toasters’ just aren’t something that can be labelled as ‘stupid’; it’s a confused notion...”
“Hm, well, I feel it’s the reverse, there’s something wrong with not being able to label toasters that way...”