I have felt that probabilistic reasoning about mathematics is important ever since I learned about the Gödel machine, for a reason related to but somewhat different from Löb's problem. Namely, the Gödel machine is limited to self-improvements provable within a fixed axiom system A, whereas humans seem able to adopt arbitrarily powerful axiom systems. It can be argued that Peano arithmetic is "intuitively true," but ZFC is accepted by mathematicians largely on empirical grounds: it just works. Similarly, a self-improving AI should be able to accept empirical evidence for the consistency of axiom systems more powerful than the system "hard-coded" into it. More generally, the utility expectation value in the criterion by which a Gödel machine reprograms itself should average not only over external physical realities (described by the Solomonoff semi-measure) but over mathematical possibilities as well.
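To make the last point concrete, here is a toy sketch of what averaging over both kinds of uncertainty might look like. Everything here is illustrative and not from any actual Gödel machine implementation: the worlds, the credences, and the utilities are made-up numbers, and the two priors are assumed independent purely for simplicity.

```python
def expected_utility(physical_prior, math_prior, utility):
    """Average utility over joint (physical world, mathematical possibility)
    hypotheses. Assumes the two priors are independent (a simplification) and
    normalizes, since a Solomonoff-style semi-measure need not sum to 1."""
    total, total_weight = 0.0, 0.0
    for world, p_world in physical_prior.items():
        for math_hyp, p_math in math_prior.items():
            weight = p_world * p_math
            total += weight * utility[(world, math_hyp)]
            total_weight += weight
    return total / total_weight

# Toy Solomonoff-style weights (2^-description_length, deliberately unnormalized).
physical_prior = {"world_a": 0.5, "world_b": 0.25}

# Subjective credences over a mathematical possibility, as might be updated on
# empirical evidence (e.g. no contradiction found despite extensive search).
math_prior = {"ZFC_consistent": 0.99, "ZFC_inconsistent": 0.01}

# Utility of a candidate self-rewrite that relies on ZFC: beneficial if ZFC is
# consistent, badly harmful otherwise (toy numbers).
utility = {
    ("world_a", "ZFC_consistent"): 10.0,
    ("world_a", "ZFC_inconsistent"): -100.0,
    ("world_b", "ZFC_consistent"): 5.0,
    ("world_b", "ZFC_inconsistent"): -100.0,
}

print(expected_utility(physical_prior, math_prior, utility))  # → 7.25
```

The point of the sketch is only that the rewrite criterion compares expected utilities computed over the *joint* hypothesis space, so empirical confidence in a stronger axiom system directly enters the decision, rather than being gated on a proof within the hard-coded system.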