Every part of math that I care about can perhaps be formulated in some formal theory, but (due to Tarski’s undefinability of truth) no single formal theory can express every part of math that I care about. This leads to a problem if I want to write down a UDT utility function that captures everything I care about, since it seems to imply that no formal theory is good enough for this purpose. (I think you read my posts on this. Did it slip your mind when writing the above, or do you no longer think it’s a serious problem?)
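For readers following along, the theorem behind the parenthetical is standard; here is a textbook statement (my gloss, not something from Wei Dai’s posts):

```latex
% Tarski's undefinability of truth (standard textbook statement).
\textbf{Theorem (Tarski).} Let $T$ be a consistent theory that interprets
enough arithmetic, with a coding $\ulcorner\cdot\urcorner$ of sentences.
Then no formula $\mathrm{True}(x)$ in the language of $T$ satisfies
\[
  T \vdash \varphi \leftrightarrow \mathrm{True}(\ulcorner\varphi\urcorner)
  \qquad \text{for every sentence } \varphi .
\]
% Proof sketch: the diagonal lemma yields a sentence $\lambda$ with
% $T \vdash \lambda \leftrightarrow \neg\mathrm{True}(\ulcorner\lambda\urcorner)$,
% contradicting the schema above. So truth for $T$'s language is definable
% only in a strictly stronger metatheory, and iterating that move never
% closes off: each theory needs a stronger one above it.
```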
Can you explain in more detail why no single formal theory can express every part of math that you care about?
I just wrote a post on a related but simpler question, about math beliefs rather than math values. It might apply to your question as well.
I assume he’s claiming to care about a great deal of math, including, at each stage, the equivalent of Morse-Kelley set theory as a whole rather than just the statements it makes about sets alone.
But I don’t know which post Wei Dai referred to, and I doubt I read it. A quick search finds this comment, which seems grossly misleading to me (we could easily program an AI to reason in an inconsistent system and have it print out absurdities like those we encounter from humans), but it may have something to it.
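On the “easily program an AI to reason in an inconsistent system” point, here is a toy sketch of what I mean (my own illustration; the names and setup are made up for the example): a naive forward-chaining prover which, once its axioms contain a direct contradiction, can “prove” any sentence at all via explosion.

```python
# Toy illustration: a prover handed inconsistent axioms will happily
# "justify" any sentence via ex falso quodlibet (explosion).
# A sketch of the point, not a serious theorem prover.

def forward_chain(axioms, rules, goal, max_steps=1000):
    """Naive forward chaining over string-valued propositions.

    `rules` is a list of (premises, conclusion) pairs, where
    `premises` is a set of propositions already derived.
    """
    known = set(axioms)
    for _ in range(max_steps):
        # Explosion: from a direct contradiction, derive anything.
        if any(("not " + p) in known for p in known):
            return True  # the system "proves" goal, whatever it is
        if goal in known:
            return True
        new = set()
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                new.add(conclusion)
        if not new:
            break
        known |= new
    return goal in known

# Inconsistent axiom set: P and not P.
axioms = {"P", "not P"}
rules = []  # no rules needed; explosion already fires

print(forward_chain(axioms, rules, "the moon is made of cheese"))  # True
print(forward_chain(axioms, rules, "2 + 2 = 5"))                   # True
```

The point is just that consistency is a property of the axioms we hand the system, not a precondition for building something that mechanically derives conclusions and prints them out.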