It applies to any formal system capable of proving theorems of number theory.
Right. So if human reasoning follows some specified formal system, we can’t prove it. But does it really follow one?
We can’t, for example, point to some Turing machine and say “It halts because of (...), but I can’t prove it”—because in doing so we’re already providing some sort of reasoning.
But then what do you mean by “possible to follow by a human”?
Maybe “it’s possible for a human, given enough time and resources, to verify the validity of such a proof”.
Yes and no. It is likely that the brain, as a physical system, can be modeled by a formal system, but “the human brain is isomorphic to a formal system” does not imply “a human’s knowledge of some fact is isomorphic to a formal proof”. What human brains do (and, most likely, what an advanced AI would do) is approximate empirical reasoning, i.e. Bayesian reasoning, even in its acquisition of knowledge about mathematical truths. If you have P(X) = 1 then you have X = true, but you can’t get to P(X) = 1 through empirical reasoning, including by looking at a proof on a sheet of paper and thinking that it looks right. Even if you check it really really carefully. (All reasoning must have some empirical component.) Most likely, there is no structure in your brain that is isomorphic to a proof that 1 + 1 = 2, but you still know and use that fact.
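To make the “can’t get to P(X) = 1” point concrete, here is a minimal sketch in Python; the prior and the per-check pass rates are made-up numbers, purely for illustration. Model each independent check of a proof as a noisy test and watch the Bayesian posterior climb toward 1 without ever reaching it.

```python
# Toy Bayesian model of repeatedly checking a proof. The prior (0.5) and
# the per-check pass rates are illustrative assumptions, nothing more.

def residual_doubt(prior=0.5, p_pass_if_valid=0.999,
                   p_pass_if_invalid=0.05, n_checks=10):
    """P(proof is invalid | it passed n_checks independent checks)."""
    prior_odds = prior / (1 - prior)
    # Each passed check multiplies the odds of validity by the
    # likelihood ratio; the odds grow without bound but stay finite.
    odds = prior_odds * (p_pass_if_valid / p_pass_if_invalid) ** n_checks
    return 1 / (1 + odds)

for n in (1, 10, 100):
    print(n, residual_doubt(n_checks=n))
# The residual doubt shrinks geometrically but stays positive for every
# finite n: empirical checking never yields P(X) = 1 exactly.
```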
So we (and AIs) can use intelligent reasoning about formal systems (not reasoning that looks like formal deduction from the inside) to come to very high or very low probability estimates for certain formally undecidable statements, as this does not need to be isomorphic to any impossible proofs in any actual formal system. This just doesn’t count as “solving the halting problem” (any more than Gödel’s ability to identify certain unprovable statements as true in the first place refutes his own theorem), because a solution to the halting problem must be at the level of formal proof, not of empirical reasoning; the latter is necessarily imprecise and probabilistic. Unless you think that a human “given enough time and resources” could literally always get an answer and always be right, a human cannot be a true halting oracle, even if they can correctly assign a very high or very low probability to some formally undecidable statements.
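On the halting side specifically, the empirical evidence is one-sided, which is why it falls short of an oracle. A sketch, using the Collatz iteration as a stand-in for an arbitrary machine (a hypothetical example of mine, not something from the comment above):

```python
def halts_within(n, max_steps):
    """Run the Collatz iteration from n for at most max_steps steps.

    Returns True if it reached 1 (conclusive: it halts) or None if the
    step budget ran out (inconclusive: at best weak probabilistic
    evidence of non-halting). No bounded run can return a definite False.
    """
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return None

print(halts_within(27, 10))    # None: no evidence either way yet
print(halts_within(27, 1000))  # True: an observed halt settles it
```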
Well written—maybe this deserves a full post, even granted that the posts you linked are very near in concept-space.
Perhaps. But would it be controversial or novel enough to warrant one? I’d think that most people here 1) already don’t believe that the human mind is more powerful than a universal Turing machine or a formal system, and 2) could correctly refute this type of argument, if they thought about it. Am I wrong about either of those (probably #2 if anything)? Or, perhaps, have sufficiently few people thought about it that bringing it up as a thought exercise (presenting the argument and encouraging people to evaluate it for themselves before looking at anyone else’s take) would be worthwhile, even if it doesn’t generally result in people changing their minds about anything?
It would be to some extent redundant with the posts you linked, but the specific point about the difference between human reasoning and formal reasoning is a new one to this blog. I, too, would be interested in reading it.
You’re probably right about both, but I would still enjoy reading such a post.
I think it could turn out really well if written with the relatively new lurkers in mind, and it does include a new idea that takes a few paragraphs to spell out well. That says “top-level” to me.
Comments:
(1) Empirical vs Non-empirical is, I think, a bit of a red herring because insofar as empirical data (e.g. the output of a computer program) bears on mathematical questions, what we glean from it could all, in principle, have been deduced ‘a priori’ (i.e. entirely in the thinker’s mind, without any sensory engagement with the world.)
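For instance (my illustration, not the commenter’s): a program that checks Goldbach’s conjecture up to some bound yields ‘empirical’ evidence about a mathematical question, yet everything it reports could in principle have been deduced a priori, by hand; the machine is just a faster pencil.

```python
# Empirical-looking evidence for Goldbach's conjecture up to a bound.
# Everything printed here was deducible a priori; the computer only
# speeds up the deduction. (Illustrative example only.)

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds_up_to(bound):
    """True iff every even n in [4, bound] is a sum of two primes."""
    return all(
        any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))
        for n in range(4, bound + 1, 2)
    )

print(goldbach_holds_up_to(10_000))  # True: evidence, not a proof
```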
(2) You ought to read about Chaitin’s constant ‘Omega’, the ‘halting probability’, which is a number between 0 and 1.
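For reference, the standard definition, for a fixed prefix-free universal machine U (this is the textbook formulation):

$$\Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}$$

where |p| is the length of the program p in bits; prefix-freeness makes the sum converge, by Kraft’s inequality, to a value strictly between 0 and 1.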
I think we should be able to prove something along these lines: Assume that there is a constant K such that your “mental state” does not contain more than K bits of information (this seems horribly vague, but if we assume that the mind’s information is contained in the body’s information then we just need to assume that your body never requires more than K bits to ‘write down’).
Then it is impossible for you to ‘compress’ the binary expansion of Omega by more than K + L bits, for some constant L (the same L for all possible intelligent beings.)
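Hedging slightly on the exact constants, the standard theorem in this neighborhood is Chaitin’s: Ω is algorithmically random, i.e. for prefix-free Kolmogorov complexity K(·) there is a constant c, depending only on the universal machine, such that

$$K(\Omega_1 \Omega_2 \cdots \Omega_n) \ge n - c \quad \text{for all } n,$$

which is the formal version of ‘no reasoner can compress the bits of Ω by more than an additive constant’.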
This puts some very severe limits on how closely your ‘subjective probabilities’ for the bits of Omega can approach the real thing. For instance, either there must be only finitely many bits b where your subjective probability that b = 0 differs from 1⁄2, or else, if you guess something other than 1⁄2 infinitely many times, you must ‘guess wrongly’ exactly 1⁄2 of the time (with the pattern of correct and incorrect guesses being itself totally random).
Basically, it sounds like you’re saying: “If we’re prepared to let go of the demand to have strict, formal proofs, we can still acquire empirical evidence, even very convincing evidence, about the truth or falsity of mathematical statements.” This may be true in some cases, but there are others (like the bits of Omega) where we find mathematical facts (expressible as propositions of number theory) that are completely inaccessible by any means. (And in some way that I’m not quite sure yet how to express, I suspect that the ‘gap’ between the limits of ‘formal proof’ and ‘empirical reasoning’ is insignificant compared to the vast ‘terra incognita’ that lies beyond both.)
I’ll have to think some more about it, but this looks like a correct answer. Thank you.
I myself will have to recheck this in the morning, as it’s 4:30 AM here and I am suspicious of philosophical reasoning I do while tired, but I’ll probably still agree with it tomorrow since I mostly copied that (with a bit of elaboration) from something I had already written elsewhere. :)