The way out of such translation problems is usually natural language. In the logic case, both interlocutors presumably speak the same language, so they can both use the “logic of English” (of what follows from what in English statements). Unlike a formal logic, the logic of English may be vaguer, perhaps even probabilistic rather than deterministic, more evidence-conferring than truth-preserving; but these features are present for both interlocutors equally, so no translation is necessary, and things can be clarified by making statements about English statements in English itself. If there really is a disagreement, it can be reduced to the question of how some simple natural language sentence should be interpreted in some hypothetical situation (a thought experiment). Such appeals to language intuitions are all over philosophy.
I’m not sure what the underlying disagreement in the decision theory case was (something about actions vs. mixed strategies), but I assume that there, too, the underlying problem can be expressed in natural language statements which both parties can understand without any need to translate them.
I disagree. For tricky technical topics, two different people will be speaking sufficiently different versions of English that this isn’t true. Vagueness and similar issues will not apply equally to both speakers: one person might have a precise understanding of decision-theoretic terms like “action” and “observation”, while the other may regard them as vaguer, or may have a different decision-theoretic understanding of those terms. As a simple example, one person may regard Jeffrey-Bolker as the default framework for understanding agents while the other prefers Savage; these two frameworks ontologize actions in very different ways, which may be incidental to the debate or may be central to it. Speaking in English merely obscures this underlying difference in how we think about things, rather than solving the problem.
In the case of mixed vs. pure strategies, I think it is quite clear that translating into technical terminology rather than staying in English helped clarify rather than obscure, even if it created the non-one-to-one translation problem this post is discussing.
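For readers less familiar with the jargon, the standard game-theoretic distinction being referenced can be written out as follows; this is a generic textbook formulation given for illustration, not a reconstruction of the original debate.

```latex
% Pure vs. mixed strategies (standard game-theoretic definitions).
% A pure strategy picks a single action; a mixed strategy is a
% probability distribution over the pure strategies.
\[
  \text{pure strategy:} \quad a \in A = \{a_1, \dots, a_n\}
\]
\[
  \text{mixed strategy:} \quad \sigma \in \Delta(A), \qquad
  \sigma(a_i) \ge 0, \quad \sum_{i=1}^{n} \sigma(a_i) = 1
\]
% Every pure strategy reappears as a degenerate mixed strategy
% (all probability mass on one action), but not conversely.
```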
The point of axiomatizing aspects of natural language reasoning, like decision theory, is to make them explicit, systematic, and easier to reason about. But the gold standard remains what is valid in our antecedent natural language understanding. The primitive terms of any axiomatic theory are only meaningful insofar as they reflect the meaning of some natural language terms, and the plausibility of the axioms derives from those natural language interpretations.
So, for example, when we compare the axiomatizations of Savage and Jeffrey, we can do so by comparing how well, or to what extent, each captures the reasoning that is plausible in natural language. I would argue that Jeffrey’s theory is much more general: it captures parts of natural language reasoning that couldn’t be expressed in Savage’s earlier theory, while the converse is arguably not the case. We can argue about that in English, e.g. by using terms like “action” with their natural language meaning and discussing which theory captures them better. Savage assumes that the states of the world are independent of our “actions”, which is not presumed when doing practical reasoning expressed in natural language, and Jeffrey handles this correctly. One could object that Jeffrey allows us to assign probabilities to our own actions, which might be implausible, and so on.
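To make the contrast concrete, here is one standard way of writing the two expected-value formulas; the notation is chosen for illustration and is not taken from either author’s exact presentation.

```latex
% Savage-style expected utility: an act f maps states to outcomes,
% and the probability over states does not depend on the act chosen
% (act-state independence is built into the setup).
\[
  EU(f) \;=\; \sum_{s \in S} P(s)\, u\big(f(s)\big)
\]
% Jeffrey-style (evidential) desirability: for a partition S of
% possibilities, the value of a proposition A is a conditional
% expectation, so choosing A may itself carry evidence about which
% possibility obtains; no independence assumption is required.
\[
  V(A) \;=\; \sum_{s \in S} P(s \mid A)\, V(s \wedge A)
\]
```

The first formula builds the independence assumption into its setup; the second does not, which is one way to cash out the claim that Jeffrey’s theory is the more general of the two.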
Even if I conceded this point, which is not obvious to me, I would still insist that different speakers use natural language differently, and so retreating to natural language rather than formal language is not universally a good move when it comes to clarifying disagreements.
Well, more importantly, I want to argue that “translation” is happening even if both people are apparently using English.
For example, philosophers have settled on distinct but related meanings for the terms “probability”, “credence”, “chance”, “frequency”, and “belief”. (Some of these meanings are more vague or general while others are more precise; more importantly, the different terms carry many different detailed implications.) If two people are unfamiliar with all of those subtleties and start using one of the words (say, “probability”), then it is very possible that they have two different ideas about which more-precise notion is being invoked.
When doing original research, people are often in this situation, because the relevant more-precise notions have not even been invented yet (so it’s not possible to go look up how philosophers have clarified the possible concepts).
In my experience, this means that two people using natural language to discuss a topic are very often in a situation where it feels like they are “translating” back and forth between two different ontologies, even though both are expressing their ideas in English.
So, even if both people express their ideas in English, I think the “non-invertible translation problem” discussed in the original post can still arise.
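To make the “probability” example concrete, here is a minimal sketch, with hypothetical function names, of two formal notions that the same English word is routinely used for.

```python
# A minimal, hypothetical illustration: the single English word
# "probability" can be cashed out as (at least) two different formal objects.
from fractions import Fraction


def frequency(successes: int, trials: int) -> Fraction:
    """Probability as long-run frequency: a ratio of observed counts."""
    return Fraction(successes, trials)


def posterior_credence(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Probability as subjective credence: a degree of belief updated by
    Bayes' rule, P(H | E) = P(E | H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob


if __name__ == "__main__":
    # Both of these get called "the probability that the coin lands heads",
    # yet one is a fact about the data and the other a fact about the believer.
    print(frequency(7, 10))                   # 7/10
    print(posterior_credence(0.5, 0.7, 0.6))  # ~0.583
```

If each speaker tacitly has a different one of these in mind, the word passes back and forth smoothly while the underlying objects do not, which is the sense in which a translation step is still happening.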