I am curious: what is the general LessWrong philosophy about what truth “is”? Personally, I so far lean towards accepting an operational subjective Bayesian definition, i.e. the truth of a statement is defined only insofar as we agree on some (in principle) operational procedure for determining its truth; that is, we have to agree on what observations would make it true or false.
For example, “it will rain in Melbourne tomorrow” is true if we see it raining in Melbourne tomorrow (trivial, but it also means that the truth of the statement doesn’t depend on rain being “real” rather than a construction of Descartes’ evil demon or the Matrix, or a dream, or even a hallucination). This is also a bit disturbing, because the truth of “the local speed of light is a constant in all reference frames” can never be determined in such a way. We could move to something like Popper’s truthlikeness, but then standard Bayesianism gets very confusing, since we would then have to worry about the probability that a statement has a certain level of “truthlikeness”, which is a little mysterious. Truthlikeness is nice in how it relates to the map-territory analogy, though.
I am inclined to think that standard Bayesian-style statements about operationally defined things based on our “maps” make sense, e.g. “If I go and measure how long it takes light to travel from the Earth to Mars, the result will be proportional to c” (with this being informed by the abstraction that is general relativity), but it still remains unclear to me precisely what this means in terms of Bayes’ theorem: the probability P(“measure c” | “general relativity”) implies that P(“general relativity”) makes sense somehow, even though the operational criteria cannot be where its meaning comes from. In addition, we must somehow account for the fact that “general relativity” is strictly false, in the “all models are wrong” sense, so we need to somehow rejig that proposition into something that might actually be true, since it makes no sense to condition our beliefs on things we know to be false.
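To make the formal structure I have in mind explicit (with the M_i standing for some exhaustive, mutually exclusive set of candidate models, which is exactly the part whose meaning I can’t pin down), it is just Bayes’ theorem with model averaging in the denominator:

P(M_j \mid D) = \frac{P(D \mid M_j)\, P(M_j)}{\sum_i P(D \mid M_i)\, P(M_i)}

where D is an operational statement like “the measured travel time is proportional to c”. The likelihoods P(D | M_i) seem unproblematic once the models are given; it is the priors P(M_i) over models, and the requirement that the M_i exhaust the possibilities, that I don’t know how to ground operationally (and the “all models are wrong” worry suggests every such P(M_i) should arguably be zero).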
I suppose we might be able to imagine some kind of super-representation theorem, in the style of de Finetti, in which we show that degrees of belief in operational statements can be represented as the model average of the predictions of all computable theories, hoping thereby to provide an operational basis for Solomonoff induction, but actually I am still not 100% sure what de Finetti’s usual representation theorem really means. We can behave “as if” we had degrees of belief in these models, weighted by some prior? Huh? Does this mean we don’t really have such degrees of belief in models, but that they are a convenient fiction? I am very unclear on the interpretation here.
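For concreteness, the version of de Finetti’s theorem I mean is the one for exchangeable binary sequences: if my degrees of belief about an (in principle infinite) sequence of 0/1 observations are exchangeable (unchanged by reordering), then they can be written as a mixture of i.i.d. coin-flip models,

P(x_1, \dots, x_n) = \int_0^1 \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i} \, d\mu(\theta),

for some unique measure \mu on [0, 1]. The interpretive puzzle is whether \mu is “really” a prior over a meaningful parameter \theta, or just a mathematical device for summarising beliefs about observables; the hypothetical super-representation theorem above would play the same role with computable theories in place of values of \theta.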
The map-territory analogy does seem correct to me, but I find it hard to reconstruct ordinary Bayesian-style statements via this kind of thinking...
To the extent that there is a general philosophy, it’s http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/, but individual people might differ slightly.
Hmm, thanks. Seems similar to my description above, though as far as I can tell it doesn’t deal with my criticisms. It is rather evasive when it comes to the question of what status models have in Bayesian calculations.