Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
It seems to be the problem that is relevant to AI design: how does an expected-utility-maximising agent handle edge cases and infinitesimals, given logical uncertainty and bounded capabilities? If you get that wrong, then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then rests either on its suitability for use in an AI design or, conceivably, on the implications if an AI constructed and used said model.
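To make the worry concrete, here is a toy sketch (my own illustration, not anything from the original exchange, with made-up numbers) of how a naive expected-utility calculation lets an infinitesimal probability attached to an astronomically large claimed payoff dominate every ordinary consideration:

```python
# Toy illustration: a naive expected-utility comparison in which an
# epsilon-probability, huge-utility outcome swamps a near-certain modest one --
# the kind of edge case a bounded maximiser has to handle somehow.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Mundane action: near-certain modest payoff.
mundane = [(0.999, 10.0), (0.001, -10.0)]

# "Mugging"-style action: almost-certain small loss, plus an epsilon-probability
# claim of a utility so large that epsilon * utility still dominates.
mugging = [(1 - 1e-20, -1.0), (1e-20, 1e40)]

print(expected_utility(mundane))   # ~9.98
print(expected_utility(mugging))   # ~1e20, so a naive maximiser takes the mugging
```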
(Also yep.)