“So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”
Here, ‘reality’ is merely a convenient term, one that helps conceptualize errors in the map.
No, the point of the argument for realism in general is that it explains how prediction, in general, is possible.
That’s different from saying that the predictive ability of a specific theory is good evidence for that theory’s ontological accuracy.
What’s the point of having an explanation for why prediction works? I dispute that it’s actually an explanation—incoherent claims can’t serve as an explanation.
All the posts on the usefulness of gears-level models are about why having explanations is good. It’s generally helpful for engaging with a subject matter.
Not when the explanation is incoherent and adds nothing to predictive ability or ease of calculation. It’s just an unnecessary assumption.
What’s the point of having an explanation of anything? We do things like science because we value explanations. It’s a bit much to say that we should give up on science, and it’s a bit suspicious that the one thing that doesn’t need explaining is the one thing your theory can’t explain.
Prediction, in general, is possible because Occam’s razor works. As I said in a different comment, realism doesn’t help explain Occam, and I am satisfied with EY’s argument for grounding Occam, which doesn’t appear to require realism.
Prediction isn’t based on Occam’s razor alone: you need the universe, reality, the place your data are coming from, to play nice by having compressible patterns. Which means that you need external reality. (A toy sketch at the end of this thread makes the compressibility point concrete.)
All you need is that Occam’s razor works well for predicting your observations. That doesn’t require realism; realism is an added assumption.
Reality is where observations come from.
Yes, that’s the added assumption. It’s neither required nor coherent.
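To make the “compressible patterns” point above concrete, here is a minimal sketch of an Occam-flavored predictor. It assumes the usual Solomonoff-style move of weighting a hypothesis by 2^−(description length); the hypothesis class (“repeat this bit-pattern forever”), the function names, and the max_len cutoff are all illustrative choices, not anything either commenter actually proposed. The point it illustrates is that such a predictor succeeds exactly when the data stream has short regularities to exploit.

```python
import random

# Toy Occam predictor: hypotheses are "repeat this bit-pattern forever",
# weighted Solomonoff-style by 2^-len(pattern), so shorter (more
# compressible) explanations get more prior mass.

def hypotheses(max_len=8):
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            yield format(i, f"0{n}b")  # every bit-string of length n

def predict_next(prefix):
    votes = {"0": 0.0, "1": 0.0}
    for pat in hypotheses():
        stream = pat * (len(prefix) // len(pat) + 1)  # at least len(prefix)+1 bits
        if stream.startswith(prefix):                 # hypothesis fits the data so far
            votes[stream[len(prefix)]] += 2.0 ** -len(pat)  # Occam weighting
    return max(votes, key=votes.get)

print(predict_next("010101010101"))  # "0": the short pattern "01" dominates the vote

random.seed(0)
noise = "".join(random.choice("01") for _ in range(12))
# On patternless data, few or no short hypotheses survive, so the weighted
# vote is roughly empty and the tie-break arbitrary: Occam has no structure
# to exploit.
print(predict_next(noise))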