It may be hard to communicate here, since Eliezer is making analogies to model theory and I assume you are not familiar with it.
You are right, I am out of my depth math-wise. Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
It seems to be the problem that is relevant to AI design. How does an expected utility maximising agent handle edge cases and infinitesimals given logical uncertainty and bounded capabilities? If you get that wrong, then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then comes down to either its suitability for use in an AI design or, conceivably, the implications if an AI constructed and used said model.
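To make that concern concrete, here is a minimal, purely illustrative sketch (not from this thread) of the kind of edge case in question: a naive expected utility maximiser offered a vanishingly small probability of an astronomically large payoff, standing in for the infinitesimals and logical uncertainty mentioned above. All names and numbers are hypothetical.

```python
# Toy sketch: a naive expected-utility maximiser and the edge case it mishandles.
# Everything here is illustrative; it is not anyone's proposed AI design.

def expected_utility(action):
    """Sum p * u over the action's possible outcomes."""
    return sum(p * u for p, u in action["outcomes"])

def choose(actions):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=expected_utility)

actions = [
    # A mundane, well-understood option.
    {"name": "safe", "outcomes": [(1.0, 10.0)]},
    # A near-infinitesimal probability (really a stand-in for deep logical
    # uncertainty) attached to an enormous payoff dominates the naive sum.
    {"name": "mugging", "outcomes": [(1e-30, 1e40)]},
]

print(choose(actions)["name"])  # prints "mugging"
```

A bounded agent that simply multiplies through like this lets the edge case swallow the decision; any serious theory of how such things can be modelled has to say what it does differently here, which is the suitability question raised above.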
You are right, I am out of my depth math-wise.
(Also yep.)