Page 4, footnote 8: I don’t think it’s true that only stronger systems can prove weaker systems consistent. It can happen that system A can prove system B consistent and A and B are incomparable, with neither stronger than the other.
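A concrete example (a standard one, assuming ZFC is consistent): take B = ZFC and A = PRA + Con(ZFC). Then
$$A \vdash \mathrm{Con}(B) \quad\text{but}\quad B \nvdash \mathrm{Con}(\mathrm{ZFC}) \quad\text{(Gödel II)},$$
$$B \vdash \text{“the Ackermann function is total”} \quad\text{but}\quad A \nvdash \text{it},$$
since adding a true $\Pi_1$ sentence to PRA leaves its provably total functions primitive recursive. So A proves B consistent, yet each system proves something the other doesn’t.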
That is strictly correct, but not relevant for self-improving AI: you don’t want a parent AI that cannot prove everything the child AI can prove. Maybe the footnote should be edited to reflect this.
Well, if A can prove everything B can except Con(A), and B can prove everything A can except Con(B), then you’re relatively happy.
ETA: retracted (thanks to Joshua Z for pointing out the error).
I don’t think this can happen: since A has proven Con(B), it can now reason using system B for consistency purposes, and from the fact that B proves Con(A) it gets A proving Con(A), which is bad.
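Spelled out (assuming A and B are r.e. extensions of PA, so provable $\Sigma_1$-completeness is available), the argument runs:
$$A \vdash \mathrm{Con}(B), \qquad A \vdash \mathrm{Prov}_B(\mathrm{Con}(A)) \quad\text{(a true $\Sigma_1$ fact)},$$
$$A \vdash \neg\mathrm{Con}(A) \rightarrow \mathrm{Prov}_B(\neg\mathrm{Con}(A)) \quad\text{(provable $\Sigma_1$-completeness)}.$$
Reasoning inside A: if $\neg\mathrm{Con}(A)$, then B proves both $\mathrm{Con}(A)$ and $\neg\mathrm{Con}(A)$, contradicting $\mathrm{Con}(B)$. Hence $A \vdash \mathrm{Con}(A)$, and by Gödel’s second incompleteness theorem A is inconsistent.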
Thanks for pointing this out. My mathematical logic is rusty.