A coherent formal system can’t fully define truth for its own language. It can give more limited definitions for the truth of some statement, but often this is best accomplished by just repeating the statement in question. (That idea is also due to Tarski: ‘snow is white’ is true if and only if snow is white.) You could loosely say (very loosely!) that a claim, in order to mean anything, needs to point to its own definition of what it would mean for that claim to be true. Any more general definition of truth, for a given language, needs to appeal to concepts which can’t be expressed within that language, in order to avoid a self-reference paradox.
So, there’s no comparison between applying the concept in your last paragraph to individual theorems you’ve already proven, like “2+2=4”—“my system thinks it has proved it, and believes that’s good enough to act on”—and applying it to all hypothetical theorems you might prove later, like “2+2=5”. Those two ideas have different meanings—the latter can’t even be expressed within the language in a finite way, though it could be an infinite series of theorems or new axioms of the form □P→P—and they have wildly different consequences. You seem to get this when it comes to hypothetical proofs that eating babies is mandatory.
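To make the infinite-schema point concrete, here is a sketch in standard provability-logic notation (□ read as “is provable in the system”):

```latex
% The reflection (soundness) schema: not a single sentence of the
% language, but a separate axiom for each sentence P, saying
% "if P is provable, then P".
\Box P \rightarrow P \qquad \text{(one instance for every sentence } P \text{)}
```

Because the schema ranges over all sentences of the language, no single formula of that language expresses it—hence the infinite family of axioms rather than one finite statement.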
So what you’re saying here is, let’s say, “level 1 negative” which means, very roughly, things like: We can’t formally define what truth is, our formal system must appeal to higher systems outside of it, we can’t prove consistency, etc.
What the Sequences say is, let’s say, “level 2 negative”, which means verbatim what is stated in them, i.e., “a mathematical system cannot assert its own soundness without becoming inconsistent.” This literally says that if a mathematical system tried to assert its own soundness, it would become inconsistent. This is worse than what any of Löb’s Theorem, Gödel’s Theorems, or Tarski’s Theorem tells us. AFAICT, the Sequences do consider the issue of soundness-assertion important, because it relates to recursively self-improving AIs, which would presumably be inclined to want to do so.
When I’m only level 1 negative—which is where I am when I want to believe what the mathematical community says—I’m not really in that bad shape, because I only need to keep a distantly curious ear open for the erudite question of whether or not this system will ever stop working.
When I’m level 2 negative, I might be either a person or an AGI wanting to know if it is possible to recursively self-improve, and I might decide that if I tried to assert my own soundness, I would go bad or become insane or become inconsistent, and therefore not do so—thus deferring to authority.
Pretty sure this is my last comment, because what you just quoted about soundness is, in fact, a direct consequence of Löb’s Theorem. For any proposition P, Löb’s Theorem says that □(□P→P)→□P. Asserting soundness means adopting □P→P for every P; each instance is then provable, giving □(□P→P), and Löb yields □P. Now let P be a statement already disproven, e.g. “2+2=5”. We already had □¬P, and now we also have □P, hence □(¬P & P)—which is what inconsistency means. Again, it seemed like you understood this earlier.
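The derivation can be laid out step by step; a sketch, assuming the system has adopted the reflection schema □P→P as axioms (so each instance is itself provable):

```latex
\begin{align*}
&\Box \neg P           && \text{($P$ = ``2+2=5'' is already disproven)} \\
&\Box(\Box P \to P)    && \text{(instance of the asserted soundness schema)} \\
&\Box P                && \text{(L\"ob's Theorem: } \Box(\Box P \to P) \to \Box P \text{)} \\
&\Box(P \wedge \neg P) && \text{(provability is closed under conjunction)}
\end{align*}
```

The last line is a proof of a contradiction inside the system, i.e. inconsistency.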
https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem