How do you evaluate whether any given model is useful or not?
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of a common standard.
Solomonoff induction provides a universal standard for “perfect” inductive inference, that is, learning from observations. It is not entirely parameter-free, so it’s “a standard”, not “the standard”. I doubt that there is the standard, for the same reasons I doubt that Platonic Truth exists.
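To make the idea concrete, here is a hedged toy sketch of Solomonoff-style prediction. Real Solomonoff induction mixes over *all* computable programs and is uncomputable; this demo substitutes a tiny hand-picked hypothesis set with made-up description lengths, keeping only the key idea: weight each hypothesis by 2^(-complexity), discard those inconsistent with the data, and predict with the surviving mixture.

```python
# Toy Solomonoff-style induction over a tiny, hypothetical hypothesis set.
# Each hypothesis maps an observed bit-prefix to a predicted next bit.

def always_zero(seq): return 0
def always_one(seq): return 1
def repeat_last(seq): return seq[-1] if seq else 0
def alternate(seq): return 1 - seq[-1] if seq else 0

# (hypothesis, description length in bits) -- lengths invented for the demo.
hypotheses = [(always_zero, 2), (always_one, 2),
              (repeat_last, 3), (alternate, 4)]

def predict_next(seq):
    """P(next bit = 1): prior weight 2**(-length) per hypothesis,
    zeroed out for any hypothesis that fails to reproduce the data."""
    weights = []
    for h, length in hypotheses:
        consistent = all(h(seq[:i]) == seq[i] for i in range(len(seq)))
        weights.append(2.0 ** -length if consistent else 0.0)
    total = sum(weights)
    if total == 0:
        return 0.5  # no surviving hypothesis: fall back to uniform
    return sum(w for (h, _), w in zip(hypotheses, weights)
               if h(seq) == 1) / total

print(predict_next([0, 1, 0, 1]))  # -> 0.0: only `alternate` survives, predicting 0
```

After four alternating bits, every hypothesis except `alternate` has been falsified, so the mixture predicts the alternation continues. The “not entirely parameter-free” point shows up here too: the prediction depends on the choice of description language (here, the invented bit-lengths).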
All you’ve got left are your internal thoughts and feelings
Umm, no, this is a false dichotomy. There is a large area in between “relying on one’s intuition” and “relying on an objective external world”. For example, how about “relying on the accumulated knowledge of others”?
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
Right, but I meant, in practice.
that is, learning from observations.
Observations of what? Since you do not have access to infinite computation or perfect observations in practice, you end up observing the outputs of models, as suggested in the original post.
For example, how about “relying on the accumulated knowledge of others”?
What is it that makes their accumulated knowledge worthy of being relied upon?
you end up observing the outputs of models, as suggested in the original post.
I agree with pragmatist (the OP) that this is a problem for the correspondence theory of truth.
What is it that makes their accumulated knowledge worthy of being relied upon?
Usefulness? Just don’t say “experimental evidence”. Don’t oversimplify epistemic justification. There are many aspects—how well knowledge fits with existing models, with observations, what its predictive power is, what its instrumental value is (does it help to achieve one’s goals), etc. For example, we don’t have any experimental evidence that smoking causes cancer in humans, but we nevertheless believe that it does. The power of the Bayesian approach is in the mechanism to fuse together all these different forms of evidence and to arrive at a single posterior probability.
See also my comment in the other thread.