Well, I am a “semi-instrumentalist”: I don’t think it is meaningful to ask what reality “really is”, except as the projection of reality onto the “normative ontology”.
But you still don’t have an a priori guarantee that a computable model will succeed; that doesn’t follow from the claim that the human mind operates within computable limits. You could be facing evidence that all computable models must fail, in which case you should adopt a negative belief about physicalism/naturalism, even if you don’t adopt a positive belief in some supernatural model.
Well, you don’t have a guarantee that a computable model will succeed, but you do have some kind of guarantee that you’re doing your best, because computable models are all you have. If you’re using incomplete/fuzzy models, you can include a “doesn’t know anything” model in your prior, which is a sort of “negative belief about physicalism/naturalism”, but it is still within the same “quasi-Bayesian” framework.
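To make that concrete, here is a minimal toy sketch (my own illustration, not anything from the exchange above): a Bayesian mixture over a couple of structured computable models plus one “knows nothing” hypothesis. The model names, prior weights, and observations are all made up for illustration, and a genuinely “fuzzy” hypothesis would assign no definite probabilities at all; a uniform distribution is just the closest stand-in a plain Bayesian mixture allows. The point is only that when the data keep defeating the structured models, posterior mass drains into the ignorance hypothesis, which plays the role of the “negative belief” without positing any positive supernatural alternative.

```python
import numpy as np

N_OUTCOMES = 10  # observations are integers 0..9 (arbitrary toy choice)

def model_constant(x):
    # structured model A: predicts the sequence is always 0
    return 1.0 if x == 0 else 1e-12

def model_uniform_small(x):
    # structured model B: outcomes drawn uniformly from {0, 1, 2}
    return 1.0 / 3.0 if x in (0, 1, 2) else 1e-12

def model_knows_nothing(x):
    # the "doesn't know anything" hypothesis: uniform over all outcomes
    return 1.0 / N_OUTCOMES

models = [model_constant, model_uniform_small, model_knows_nothing]
posterior = np.array([0.45, 0.45, 0.10])  # illustrative prior weights

observations = [0, 1, 7, 9, 3, 8]  # data the structured models handle badly

for x in observations:
    likelihoods = np.array([m(x) for m in models])
    posterior = posterior * likelihoods
    posterior /= posterior.sum()  # renormalize after each observation

print(dict(zip(["constant", "uniform{0,1,2}", "knows-nothing"],
               posterior.round(4))))
# Nearly all the mass ends up on the "knows-nothing" model once the
# structured models keep mispredicting -- a negative verdict on the
# structured hypotheses, reached without ever leaving the mixture.
```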