Thanks for referring to my comment :-) but now I think it wasn’t very good. Today my view of these things is much simpler.
Formal theories like PA support self-reference by the diagonal lemma, a.k.a. quining. You can formulate statements like “this statement is not provable”, which will in fact be true and not provable. But you can’t formulate “this statement is false” or “this statement is true”, due to Tarski’s undefinability of truth.
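Concretely, the two facts being leaned on are these (a compressed sketch in standard notation, just for reference):

```latex
\begin{align*}
&\textbf{Diagonal lemma:}\ \text{for every formula } \varphi(x) \text{ there is a sentence } \psi \text{ with}\\
&\qquad \mathrm{PA} \vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner).\\[2pt]
&\text{Applied to } \neg\mathrm{Prov}(x)\text{: a sentence } G \text{ with }
  \mathrm{PA} \vdash G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner)
  \ \text{(``this statement is not provable'').}\\[2pt]
&\textbf{Tarski:}\ \text{if some } \mathrm{True}(x) \text{ satisfied }
  \mathrm{PA} \vdash \sigma \leftrightarrow \mathrm{True}(\ulcorner \sigma \urcorner)
  \text{ for every sentence } \sigma,\\
&\qquad \text{the diagonal lemma applied to } \neg\mathrm{True}(x) \text{ would give } L \text{ with }
  \mathrm{PA} \vdash L \leftrightarrow \neg\mathrm{True}(\ulcorner L \urcorner),\\
&\qquad \text{hence } \mathrm{PA} \vdash L \leftrightarrow \neg L\text{: contradiction, so no such } \mathrm{True}(x) \text{ exists.}
\end{align*}
```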
That’s pretty much the whole story to me. If the liar statement can’t be formulated in any formal theory, then it can’t be part of any math I care about. Thinking in terms of formal theories also gives simple answers to many other paradoxes, like Berry’s paradox and the Unexpected Hanging paradox.
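For instance, the Berry case, in this frame (a gloss of the standard resolution, nothing beyond textbook material):

```latex
% Berry's description, "the least number not definable in under 100 symbols",
% needs a definability predicate for the theory's whole language; by Tarski's
% theorem no such predicate exists inside the theory, so the description cannot
% even be written down there.  Replacing "definable" with a fixed formal notion
% turns the paradox into a theorem (roughly, Chaitin's incompleteness theorem):
% for a consistent, computably axiomatized T extending PA there is a constant
% c_T such that
\[
  T \nvdash K(n) > c_T \qquad \text{for every particular numeral } n,
\]
% where K(n) is the Kolmogorov complexity of n.
```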
Every part of math that I care about can perhaps be formulated in some formal theory, but (due to Tarski’s undefinability of truth) no single formal theory can express every part of math that I care about. This leads to a problem if I want to write down a UDT utility function that captures everything I care about, since it seems to imply that no formal theory is good enough for this purpose. (I think you read my posts on this. Did it slip your mind when writing the above, or do you no longer think it’s a serious problem?)
Can you explain in more detail why no single formal theory can express every part of math that you care about?
I just wrote a post on a related but simpler question, about math beliefs rather than math values. It might apply to your question as well.
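For reference, one way to unpack the claim is the usual Tarski hierarchy of truth predicates; this is only a sketch of the standard picture, not a summary of the linked post:

```latex
\begin{align*}
T_0 &= \mathrm{PA}\ (\text{or ZFC}),\\
T_{n+1} &= T_n + \mathrm{True}_n(x), \quad\text{where }
  T_{n+1} \vdash \sigma \leftrightarrow \mathrm{True}_n(\ulcorner \sigma \urcorner)
  \text{ for every sentence } \sigma \text{ of } T_n\text{'s language.}
\end{align*}
% By Tarski's theorem, True_n cannot be a formula of T_n's own language, so each
% level states facts ("that T_n-sentence is true") that the level below cannot
% even express.  If the math one cares about includes every level of this tower,
% the same argument applies to any single candidate theory, and no one theory
% expresses all of it.
```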
I assume he’s claiming to care about a great deal of math, including at each stage the equivalent of Morse-Kelley set theory as a whole rather than just the statements it makes about sets alone.
But I don’t know what post Wei Dai referred to, and I doubt I read it. A quick search finds this comment, which seems grossly misleading to me (we could easily program an AI to reason in an inconsistent system and print out absurdities like those we encounter from humans), but it may have something to it.
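As a toy illustration of the “reason in an inconsistent system” point (a minimal sketch in Python, not a claim about any actual AI design): a naive resolution prover handed the axioms P and ¬P will happily certify any statement at all, by explosion.

```python
from itertools import combinations

# Minimal propositional resolution prover.  A literal is a string like "P" or
# "~P"; a clause is a frozenset of literals; the empty clause means "contradiction".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 against c2 on one literal."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def provable(axioms, goal):
    """Refutation proof: add the negated goal, saturate under resolution,
    and report whether the empty clause appears."""
    clauses = {frozenset(c) for c in axioms} | {frozenset({negate(goal)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:              # empty clause reached: goal is "proved"
                    return True
                new.add(r)
        if new <= clauses:             # saturated with no contradiction found
            return False
        clauses |= new

# An inconsistent "belief system": it asserts both P and ~P.
axioms = [{"P"}, {"~P"}]

# By explosion, every sentence whatsoever follows, so the reasoner prints absurdities.
for claim in ["TheMoonIsMadeOfCheese", "TwoPlusTwoIsFive", "~P", "P"]:
    print(f"{claim!r} is a theorem: {provable(axioms, claim)}")
```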