I’ve got a feeling that the implicit LessWrong-ish rationalist theory of truth is, in fact, some kind of epistemic (Bayesian) pragmatism, i.e. “true is that which is knowable using probability theory”. One may also throw in “...for a perfect computational agent”.
My speculation is that LW’s declared sympathy towards the correspondence theory of truth stems from political/social reasons. We don’t want to be confused with the uncritically thinking masses—the apologists of homoeopathy or astrology justifying their views by “yeah, I don’t know how it works either, but it’s useful!”; the middle-school teachers who are ready to treat scientific results as epistemological equals of their favourite theories coming from folk psychology, religious dogmas, or “common sense knowledge”, because, you know, “they all are true in some sense”. Pragmatic theories of truth are dangerous if they fall into the wrong hands.
We don’t want to be confused with the uncritically thinking masses—the apologists of homoeopathy or astrology justifying their views by “yeah, I don’t know how it works either, but it’s useful!”;
I think this statement underscores the problem with rejecting the correspondence theory of truth. Yes, one can say “homeopathy works”, but what does that mean? How do you evaluate whether any given model is useful or not? If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of common standard. All you’ve got left are your internal thoughts and feelings, and, as it turns out, certain goals (such as “eradicate polio” or “talk to people very far away”) cannot be achieved based on your feelings alone.
How do you evaluate whether any given model is useful or not?
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of common standard.
Solomonoff induction provides a universal standard for “perfect” inductive inference, that is, for learning from observations. It is not entirely parameter-free (it depends on the choice of the universal machine), so it’s “a standard”, not “the standard”. I doubt that there is the standard, for the same reasons I doubt that Platonic Truth exists.
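To make that “parameter” concrete, here is a minimal sketch in Python. Real Solomonoff induction runs all programs on a universal Turing machine and is incomputable; the toy “reference machine” below (a program is just a bit pattern repeated forever) is an invented stand-in, chosen only to show how the 2^(−length) prior works and why swapping the machine changes the standard.

```python
# Toy illustration of Solomonoff-style induction. NOT the real thing:
# the real definition requires a universal Turing machine and is
# incomputable. The trivial "reference machine" used here (a program is
# a bit string repeated forever) is exactly the kind of free parameter
# that makes this "a standard" rather than "the standard".
from itertools import product

def run(program, n):
    """First n bits of the sequence this toy machine generates."""
    return [program[i % len(program)] for i in range(n)]

def predict_next_bit(observed, max_len=8):
    """Weight each program by 2^-length, keep those that reproduce the
    observations, and return the posterior probability the next bit is 1."""
    weight = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for program in product([0, 1], repeat=length):
            output = run(program, len(observed) + 1)
            if output[:-1] == list(observed):
                weight[output[-1]] += 2.0 ** (-length)  # shorter = heavier
    total = weight[0] + weight[1]
    return weight[1] / total if total else 0.5

print(predict_next_bit([1, 0, 1, 0, 1, 0]))  # ~0.96: short periodic programs dominate
```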
All you’ve got left are your internal thoughts and feelings
Umm, no, this is a false dichotomy. There is a large area in between “relying on one’s intuition” and “relying on an objective external world”. For example, how about “relying on the accumulated knowledge of others”? See also my comment in the other thread.
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
Right, but I meant, in practice.
that is, learning from observations.
Observations of what? Since you do not have access to infinite computation or perfect observations in practice, you end up observing the outputs of models, as suggested in the original post.
For example, how about “relying on the accumulated knowledge of others”?
What is it that makes their accumulated knowledge worthy of being relied upon?
you end up observing the outputs of models, as suggested in the original post.
I agree with pragmatist (the OP) that this is a problem for the correspondence theory of truth.
What is it that makes their accumulated knowledge worthy of being relied upon?
Usefulness? Just don’t say “experimental evidence”. Don’t oversimplify epistemic justification. There are many aspects—how well the knowledge fits with existing models, how well it fits with observations, what its predictive power is, what its instrumental value is (does it help to achieve one’s goals), etc. For example, we don’t have any experimental evidence that smoking causes cancer in humans (no controlled experiments have been run on people), but we nevertheless believe that it does. The power of the Bayesian approach is in its mechanism for fusing all these different forms of evidence together to arrive at a single posterior probability.
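As a minimal sketch of that fusion mechanism (the hypothesis and all numbers below are invented for illustration; only the arithmetic of odds times likelihood ratios is the point):

```python
# Minimal sketch of Bayesian evidence fusion. All likelihood ratios are
# invented placeholders. The mechanism: prior odds times one likelihood
# ratio per (assumed conditionally independent) source of evidence
# gives posterior odds, hence a single posterior probability.

def fuse(prior, likelihood_ratios):
    """Posterior P(H | all evidence) from prior P(H) and ratios P(E|H)/P(E|~H)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # each form of evidence multiplies the odds
    return odds / (1.0 + odds)

# Hypothetical ratios for "smoking causes cancer in humans": animal
# experiments, epidemiological correlations, a mechanistic model.
print(fuse(0.5, [4.0, 10.0, 3.0]))  # -> ~0.992, though no single source is decisive
```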