As Richard noted, meaning is context-dependent. When I say “is there water in the fridge?” I am not merely referring to H2O; I am referring to something like a container of relatively pure water in easily drinkable form.
Then why not consider a structure like the following? (A toy code sketch follows the list.)
you are searching for “something like a container of relatively pure water in easily drinkable form”, or rather “[your subconscious-native code] for a water-like thing + for drinking”,
you emit a sequence of tokens (sounds/characters), “is there water in the fridge?”, approximating the previous idea (discarding your intent to drink it, since that can be inferred from context, and omitting that something merely close to water would also do),
your conversation partner hears “is there water in the fridge?”, which is converted into the thought “you asked ‘is there water in the fridge?’”,
and interprets the words as “you need something like a container of relatively pure water in easily drinkable form”, or rather “[their subconscious-native code] for: another person, a water-like thing + for drinking”.
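Roughly, the round trip might look like this; the feature dict and the encode/decode names are just my stand-ins for the “subconscious-native code”, not anything anyone literally computes:

```python
# Toy model of the speaker -> tokens -> listener pipeline sketched above.
# The "subconscious-native code" is stood in for by a small feature dict;
# encoding drops features the listener is expected to recover from context.

speaker_intent = {"kind": "water-like drinkable", "form": "container", "purpose": "drinking"}

def encode(intent):
    """Speaker: compress the intent into a token sequence.  The real lossy
    compression is hand-waved here; the output omits 'purpose' and the fact
    that near-water substitutes would also do."""
    return "is there water in the fridge?"

def decode(tokens, context):
    """Listener: reconstruct the likely intent from the tokens plus
    contextual guesses about why someone would ask this."""
    interpretation = {"kind": "water-like drinkable", "form": "container"}
    if context.get("asker_probably_thirsty", True):
        interpretation["purpose"] = "drinking"   # inferred, never stated
    return interpretation

tokens = encode(speaker_intent)
recovered = decode(tokens, context={"asker_probably_thirsty": True})
print(tokens)     # the literal utterance
print(recovered)  # close to speaker_intent, though never literally uttered
```

The listener ends up close to the speaker’s intent even though the token sequence itself never contained the “for drinking” part.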
That messes with “meanings of sentences”, but it is necessary in order to rationally process filtered evidence.
Each statement that the clever arguer makes is valid evidence—how could you not update your probabilities? Has it ceased to be true that, in such-and-such a proportion of Everett branches or Tegmark duplicates in which box B has a blue stamp, box B contains a diamond? According to Jaynes, a Bayesian must always condition on all known evidence, on pain of paradox. But then the clever arguer can make you believe anything they choose, if there is a sufficient variety of signs to selectively report.
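To make the worry concrete, here is a toy numerical version of the clever-arguer setup. Every number (the prior, the per-sign likelihoods, the total number of signs) is an illustrative assumption of mine, not anything from the post:

```python
from math import comb

prior      = 0.5   # P(box B holds the diamond)
n_signs    = 10    # signs the clever arguer can inspect
p_blue_d   = 0.6   # P(a given sign is blue-stamped | diamond)
p_blue_nd  = 0.4   # P(a given sign is blue-stamped | no diamond)
k_reported = 4     # the arguer shows you 4 blue-stamped signs and nothing else

# Naive listener: updates on each shown sign as if it were a random sample.
naive_odds = (prior / (1 - prior)) * (p_blue_d / p_blue_nd) ** k_reported
naive_post = naive_odds / (1 + naive_odds)

# Listener who models the filter: assumes the arguer inspected all n_signs and
# reported every blue-stamped one, so the datum is "exactly 4 of 10 were blue".
def p_exactly_k(p_blue, k):
    return comb(n_signs, k) * p_blue**k * (1 - p_blue) ** (n_signs - k)

filtered_odds = (prior / (1 - prior)) * (
    p_exactly_k(p_blue_d, k_reported) / p_exactly_k(p_blue_nd, k_reported))
filtered_post = filtered_odds / (1 + filtered_odds)

print(f"naive posterior:    {naive_post:.2f}")    # ~0.84, swept toward 'diamond'
print(f"filtered posterior: {filtered_post:.2f}") # ~0.31, only 4 of 10 signs were blue
```

Under that (admittedly strong) assumption about the arguer’s strategy, the same four statements that push the naive listener toward certainty are, correctly conditioned, mild evidence against the diamond. Each statement is still valid evidence; what changes is what you condition on, namely the selection process that produced the statements.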
It seems to me that there is a really interesting interplay of different forces here, which we don’t yet know how to model well.
Even if Alice tries meticulously to only say literally true things, and be precise about her meanings, Bob can and should infer more than what Alice has literally said, by working backwards to infer why she has said it rather than something else.
So, pragmatics is inevitable, and we’d be fools not to take advantage of it.
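One standard way to cash out that “working backwards” is a Rational-Speech-Acts-style model (my framing, purely as an illustration); here is a minimal sketch on a toy two-utterance world:

```python
# Minimal Rational-Speech-Acts-style listener on a toy scalar-implicature example.
states     = ["some-but-not-all passed", "all passed"]
utterances = ["some passed", "all passed"]

# Literal semantics: which utterances are true in which states.
true_in = {
    ("some passed", "some-but-not-all passed"): True,
    ("some passed", "all passed"): True,   # "some" is literally true even if all passed
    ("all passed",  "some-but-not-all passed"): False,
    ("all passed",  "all passed"): True,
}

def literal_listener(u):
    """P(state | u) if Bob used only literal truth plus a uniform prior."""
    weights = {s: float(true_in[(u, s)]) for s in states}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def speaker(s):
    """P(u | s): Alice favors, among true utterances, the one a literal
    listener would find most informative about s."""
    scores = {u: (literal_listener(u)[s] if true_in[(u, s)] else 0.0) for u in utterances}
    total = sum(scores.values())
    return {u: v / total for u, v in scores.items()}

def pragmatic_listener(u):
    """Bob works backwards: P(state | u) is proportional to P(Alice says u | state)."""
    weights = {s: speaker(s)[u] for s in states}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

print(literal_listener("some passed"))    # 50/50: literally compatible with both states
print(pragmatic_listener("some passed"))  # 0.75 / 0.25, tilted toward "some-but-not-all"
```

Even with Alice uttering only literal truths, Bob’s posterior after “some passed” puts most of its mass on “not all passed”, just from reasoning about what Alice would have said instead.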
However, we also really like transparent contexts—that is, we like to be able to substitute phrases for equivalent phrases (equational reasoning, like algebra), and make inferences based on substitution-based reasoning (if all bachelors are single, and Jerry is a bachelor, then Jerry is single).
To put it simply, things are easier when words have context-independent meanings (or more realistically, meanings which are valid across a wide array of contexts, although nothing will be totally context-independent).
This puts contradictory pressure on language. Pragmatics puts pressure towards highly context-dependent meaning; reasoning puts pressure towards highly context-independent meaning.
If someone argues a point by conflation (uses a word in two different senses, but makes an inference as if the word had one sense) then we tend to fault using the same word in two different senses, rather than fault basic reasoning patterns like transitivity of implication (A implies B, and B implies C, so A implies C). Why is that? Is that the correct choice? If meanings are inevitably context-dependent anyway, why not give up on reasoning? ;p
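One crude way to see why the word takes the blame rather than the inference rule: if each premise is tagged with the sense it actually uses, transitivity never gets to fire at all; it only looks applicable because two senses share a spelling. A toy sketch (the “bank” example and the sense tags are mine):

```python
# Toy conflation check: premises carry explicit word senses, so an
# equivocating syllogism is visibly invalid even though the surface words chain up.

# Surface argument: "A river bank is a bank. A bank holds deposits.
#                    Therefore a river bank holds deposits."
premises = [
    ("river bank", "bank (land beside a river)"),        # A implies B, sense 1
    ("bank (financial institution)", "holds deposits"),  # B implies C, sense 2
]

def chains(premises):
    """Transitivity applies only if the middle terms are literally identical."""
    (a, b1), (b2, c) = premises
    return (a, c) if b1 == b2 else None

print(chains(premises))  # None: the middle term differs once senses are tagged

# With a single, context-stable sense, the same rule goes through fine:
print(chains([("savings bank", "bank (financial institution)"),
              ("bank (financial institution)", "holds deposits")]))
# ('savings bank', 'holds deposits')
```

On this picture the reasoning pattern is fine as stated; context-dependence just makes the substitution step silently fail its precondition, which is why the fault lands on the word.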