I don’t think hypercomputation is an issue for algorithmic information theory as a foundation for metaphysics/induction. The relevant question is not whether the world contains hypercomputation, but whether our minds are capable of hypercomputation. And here it seems to me like the answer is “no”. Even if the answer were “yes”, we could probably treat the hypercomputing part of the mind as part of the environment. I wrote a little about it here.
Since the topic is metaphysics, and metaphysics is about what reality really is, the relevant question is whether the world contains hypercomputation.
Well, I am a “semi-instrumentalist”: I don’t think it is meaningful to ask what reality “really is” except for the projection of reality onto the “normative ontology”.
But you still don’t have an a priori guarantee that a computable model will succeed—that doesn’t follow from the claim that the human mind operates within computable limits. You could be facing evidence that all computable models must fail, in which case you should adopt a negative belief about physicalism/naturalism, even if you don’t adopt a positive belief in some supernatural model.
Well, you don’t have a guarantee that a computable model will succeed, but you do have some kind of guarantee that you’re doing your best, because computable models are all you have. If you’re using incomplete/fuzzy models, you can have a “doesn’t know anything” model in your prior, which is a sort of “negative belief about physicalism/naturalism”, but it is still within the same “quasi-Bayesian” framework.
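The idea of keeping a “doesn’t know anything” model in the prior can be illustrated with a toy quasi-Bayesian mixture. This is a minimal sketch under my own assumptions (the specific models, probabilities, and data sequence are all invented for illustration, not anything from the discussion): a vacuous model that predicts uniformly at random sits alongside a couple of confident computable models, and when the data frustrates every confident model, the posterior mass flows to the vacuous one.

```python
# Toy quasi-Bayesian mixture over binary-sequence predictors.
# The "vacuous" model is the "doesn't know anything" hypothesis:
# it assigns probability 0.5 to every next bit, no matter the history.

def vacuous(history):
    # Knows nothing: uniform prediction regardless of history.
    return 0.5

def always_one(history):
    # A confident computable model: expects the next bit to be 1.
    return 0.9

def alternator(history):
    # A confident computable model: expects the sequence to alternate.
    if not history:
        return 0.5
    return 0.9 if history[-1] == 0 else 0.1

models = {"vacuous": vacuous, "always_one": always_one, "alternator": alternator}
weights = {name: 1.0 / len(models) for name in models}  # uniform prior

def update(weights, history, bit):
    # Bayesian update: multiply each weight by the likelihood the model
    # assigned to the observed bit, then renormalize.
    new = {}
    for name, model in models.items():
        p_one = model(history)
        new[name] = weights[name] * (p_one if bit == 1 else 1.0 - p_one)
    total = sum(new.values())
    return {name: w / total for name, w in new.items()}

# A sequence chosen to mispredict both confident models (pairs: 0,0,1,1,...).
data = [0, 0, 1, 1, 0, 0, 1, 1]
history = []
for bit in data:
    weights = update(weights, history, bit)
    history.append(bit)

# The vacuous model ends up dominating the posterior: a "negative belief"
# in every specific hypothesis, expressed inside the same framework.
print(max(weights, key=weights.get))  # → vacuous
```

The point of the sketch is that the “negative belief” never leaves the framework: rejecting every specific computable model just means the uninformative hypothesis wins the mixture, rather than the agent stepping outside Bayesian updating altogether.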