To me, this looks more like a problem with making decisions based purely on proofs than a problem specific to self-modification.
I think I was implicitly assuming that you wouldn’t have an agent making decisions based purely on proofs.