So what you’re saying here is, let’s say, “level 1 negative”, which means, very roughly, things like: we can’t formally define what truth is, our formal system has to appeal to stronger systems outside itself, we can’t prove our own consistency, and so on.
What the Sequences say is, let’s say, “level 2 negative”, which means verbatim what is stated in them, i.e., “a mathematical system cannot assert its own soundness without becoming inconsistent.” This literally says that if a mathematical system tried to assert its own soundness, it would become inconsistent. That is worse than anything Löb’s Theorem, Gödel’s Theorems, or Tarski’s Theorem tells us. AFAICT, the Sequences do consider the issue of soundness-assertion important, because it relates to recursively self-improving AIs, which would presumably want to do exactly that.
When I’m only level 1 negative, which is where I am when I’m willing to believe what the mathematical community says, then I’m not really in that bad a shape, because I only need to keep a distantly curious ear open to the erudite intellectual question of whether or not this system will ever stop working.
When I’m level 2 negative, I might be either a person or an AGI wanting to know whether it is possible to recursively self-improve, and I might decide that if I tried to assert my own soundness I would go bad, or become insane, or become inconsistent, and therefore not do so, thus deferring to authority.
Pretty sure this is my last comment, because what you just quoted about soundness is, in fact, a direct consequence of Löb’s Theorem. For any proposition P, Löb’s Theorem says that □(□P→P)→□P. Now suppose the system asserts its own soundness: it proves □P→P for every P, and hence, by necessitation, □(□P→P). Löb’s Theorem then gives □P. Let P be a statement the system has already disproven, e.g. “2+2=5”. Then we already had □¬P, and now we also have □P, so □(P & ¬P), which is what inconsistency means. Again, it seemed like you understood this earlier.
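For concreteness, here is that derivation spelled out step by step; beyond Löb’s Theorem itself it uses only standard provability-logic facts (necessitation, and closure of provability under conjunction):

1. □P → P (the soundness assertion, instantiated at P = “2+2=5”)
2. □(□P → P) (necessitation: the system proves line 1, so line 1 is provable)
3. □(□P → P) → □P (Löb’s Theorem)
4. □P (modus ponens on 2 and 3)
5. □¬P (P was already disproven)
6. □(P & ¬P) (from 4 and 5, since provability is closed under conjunction)

So a system that asserts its own soundness ends up proving a contradiction, which is exactly the claim quoted from the Sequences.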