Any number that the AI puts out must be computable, and I was reminded of an entry in a largest-computable-number contest: “the runtime of the longest-running Turing machine that ZFC + a large cardinal axiom can prove halts, with the proof being at most 3^^^3 symbols long.” This answer is near optimal, in that it is well defined if Con(ZFC + large cardinal axiom) holds, and it beats any answer that relies only on ZFC + the large cardinal axiom.
An AI asked to output the largest number it can is playing a game of “name the largest computable number.”
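The reason this strategy is computable at all is that proof checking is decidable even though halting is not: enumerate every proof up to the length bound, check each one, and run only the machines that come with a halting certificate. Here is a runnable toy version of that structure, assuming a drastically simplified “proof system” in which a proof that a machine halts is just a claimed step bound verified by bounded simulation; the helper names and toy machines are illustrative, not from the original comment.

```python
# Toy analogue of the contest entry, assuming a much simpler "proof
# system": a proof that a machine halts is just a claimed step bound,
# checked by bounded simulation. A real ZFC proof checker is far
# richer, but equally decidable, which is what matters.

def simulate(machine, max_steps):
    """Run a tiny Turing machine on a blank tape. Return its runtime
    if it halts within max_steps, else None. A machine maps
    (state, symbol) -> (write, move, next_state); 'H' means halt."""
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps if state == "H" else None

def largest_certified_runtime(entries):
    """entries: (machine, claimed_bound) pairs playing the role of
    proofs. Return the longest runtime among machines whose
    certificate actually checks out."""
    best = 0
    for machine, bound in entries:
        runtime = simulate(machine, bound)  # decidable check
        if runtime is not None:             # only run certified halters
            best = max(best, runtime)
    return best

# Two toy 2-state machines: one halts immediately, one is the
# 2-state busy beaver, which halts after 6 steps.
quick = {("A", 0): (1, 1, "H")}
bb2 = {("A", 0): (1, 1, "B"), ("A", 1): (1, -1, "B"),
       ("B", 0): (1, -1, "A"), ("B", 1): (1, 1, "H")}

print(largest_certified_runtime([(quick, 10), (bb2, 10)]))  # -> 6
```

The real entry replaces the bounded-simulation check with a ZFC proof checker, which is also decidable; running the certified machines and taking the maximum runtime is then an ordinary (if astronomically long) computation.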
Wait, but can’t the AI also choose to adopt the strategy “build another computer that can name a larger computable number”?
If the computer has a finite amount of memory, say n bits, and can’t build more, this puts a 2^n bound on how long it can wait: a deterministic machine with n bits of state has at most 2^n distinct configurations, so any run longer than that repeats a configuration and never halts. If it can build more, it will.
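A minimal sketch of that pigeonhole argument, assuming the machine’s whole configuration fits in n bits and its transition function is deterministic; the step function and example below are hypothetical, just to make the bound concrete.

```python
# A deterministic machine whose whole configuration fits in n bits
# has at most 2**n distinct states, so within 2**n steps it must
# either halt or revisit a configuration, and after a revisit it
# loops forever. Hypothetical illustration of that pigeonhole bound.

def halts_within_memory_bound(step, start, n_bits):
    """step: deterministic transition on n-bit states; returning None
    means halt. Returns the halting step count, or None if the
    machine provably never halts (a configuration repeated)."""
    seen = set()
    state = start
    for steps in range(2 ** n_bits + 1):
        if state is None:
            return steps            # halted
        if state in seen:
            return None             # repeat -> infinite loop
        seen.add(state)
        state = step(state)
    # Unreachable: 2**n_bits + 1 iterations must hit a halt or repeat.
    raise AssertionError("state space larger than claimed")

# Example: an 8-bit counter that halts when it would wrap to 0.
def count_up(state):
    nxt = (state + 1) % 256
    return None if nxt == 0 else nxt

print(halts_within_memory_bound(count_up, 1, 8))  # -> 255
```

So a memory-limited machine gains nothing by stalling past 2^n steps; anything it is still doing by then is provably a loop.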
The point is that it needs to pick some long-running computation that it can be fairly sure halts eventually.
This gets into details about exactly how the AI is handling logical uncertainty.