“they will tell you Rationalism can only find some but not all truths”
What do you mean by “Rationalism”?
The LW standard definition is that it’s about systematized winning. If the Matrix overlords punish everybody who tries to win systematically, then it’s bad to engage in it, especially when the Matrix overlords do it via mind reading. The Christian God might see it as a sin.
If you don’t use the LW definition of rationalism, then rationalism and empiricism are not the same thing. Rationalism generally refers to gathering knowledge by reasoning, as opposed to gathering it in other ways such as experiments or divine revelation.
Gödel did prove that it’s impossible to find all truths. This website is called LessWrong because it’s not about learning all truths but just about becoming less wrong.
That’s misleading. With a finite amount of processing power, storage, etc., you can’t find all proofs in any infinite system anyway. What we would need to show is that even short truths can’t be found, which is a bit harder.
I don’t think that’s correct. My best understanding of Gödel’s theorem is that if your system of logic is powerful enough to express itself, then you can create a statement like “this sentence is unprovable”. That’s pretty short and doesn’t rely on anything infinite.
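For reference, the standard construction behind that statement is the diagonal lemma; a sketch in the usual notation (textbook material, nothing specific to this thread):

```latex
% Diagonal lemma: for any formula \varphi(x) with one free variable,
% there is a sentence G such that the theory proves:
G \leftrightarrow \varphi(\ulcorner G \urcorner)
% Taking \varphi(x) := \neg\,\mathrm{Prov}(x), where Prov is the
% theory's provability predicate, gives "this sentence is unprovable":
G \leftrightarrow \neg\,\mathrm{Prov}(\ulcorner G \urcorner)
```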
The statement “this sentence is unprovable” necessarily includes all the information on how to prove things, so it’s always larger than your logical system. It’s usually much larger, because “this sentence” requires some tricks (Gödel numbering) to encode.
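As a loose illustration of how cheap the “this sentence” trick can be, here is the classic quine pattern in Python. The string substitution plays the role that Gödel numbering plays in the formal proof: the program quotes itself at only a constant overhead. This is an analogy, not the formal construction:

```python
# A quine: a program that prints its own source code.
# The template s gets "quoted into itself" via %r, the same
# self-reference trick the diagonal lemma formalizes.
s = 's = %r\nprint(s %% s)'
print(s % s)
```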
To see this another way, the halting problem can be seen as equivalent to Gödel’s theorem. But it’s trivially possible to have a program of length X+C that solves the halting problem for all programs of length X, where C is a rather low constant; see https://en.wikipedia.org/wiki/Chaitin%27s_constant#Relationship_to_the_halting_problem for how.
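A toy sketch of the counting idea behind that claim, with “programs” modelled as Python generators. The single assumed number NUM_HALTING (how many of the programs halt) stands in for the low-constant advice C; this illustrates the trick, it is not Chaitin’s actual construction:

```python
# Toy "programs": generators that either terminate or loop forever.
def halts_quickly():
    yield from range(3)      # halts after 3 steps

def loops_forever():
    while True:
        yield                # never halts

def halts_slowly():
    yield from range(1000)   # halts after 1000 steps

programs = [halts_quickly(), loops_forever(), halts_slowly()]

# Assumed advice: how many of the programs above halt. Knowing this
# one number is what makes bounded halting decidable.
NUM_HALTING = 2

def classify(progs, num_halting):
    """Dovetail all programs one step at a time. Once num_halting of
    them have halted, every survivor must run forever."""
    running = dict(enumerate(progs))
    halted = set()
    while len(halted) < num_halting:
        for i, prog in list(running.items()):
            try:
                next(prog)   # advance program i by one step
            except StopIteration:
                halted.add(i)
                del running[i]
    return halted, set(running)  # (halting, never-halting)

print(classify(programs, NUM_HALTING))  # ({0, 2}, {1})
```

The impracticality complaint below still applies: the dovetailing can take arbitrarily long before the last halting program finishes.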
I’m not sure how much space it would take to write down formally, and I’m not sure it matters. At worst it’s a few pages, but not entire books, let alone some exponentially huge thing you’d never encounter in reality.
These also aren’t totally arbitrary axioms that would never be encountered in reality. There are reasons why someone might want to define the rules of logic within logic, and then 99% of the hard work is done.
But regardless, the interesting thing is that such an unprovable sentence exists at all: that it’s not possible to prove all true statements with any system of logic. It’s possible that the problem is limited to this single edge case, but for all I know these unprovable sentences could be everywhere. Or worse, it could be possible to prove them, and therefore possible to prove false statements.
I think the halting problem is related, but I don’t see how it’s exactly equivalent. In any case the halting problem workaround is totally impractical, since it would take multiple ages of the universe to prove the haltingness of a simple loop. That’s assuming you’re referring to the limited-memory version; otherwise I’m extremely skeptical.
“At worst it’s a few pages, but not entire books, let alone some exponentially huge thing you’d never encounter in reality.”

That’s only if your logical system is simple. If you’re a human, then the system you’re using is probably not a real logical system, and is anyway going to be rather large.
See http://www.solipsistslog.com/halting-consequences-godel/
DeVliegendeHollander’s post didn’t speak about short truths but about all truths.
If we’re talking about all truths, then a finiteness argument shows we can never get all truths; no need for Gödel. Gödel shows that given infinite computing power, we still can’t generate all truths, which seems irrelevant to the question.

If we can prove all truths smaller than the size of the universe, that would be pretty good, and it isn’t ruled out by Gödel.
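As a toy picture of what “all truths up to a size bound” looks like, here is a sketch in Python that exhaustively derives every theorem of Hofstadter’s MIU system up to a length cap. MIU is a stand-in example; nothing in the thread depends on this particular system:

```python
from collections import deque

def successors(s):
    """Apply each MIU rule at every position where it matches."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: drop UU
    return out

def theorems_up_to(max_len, axiom="MI"):
    """Breadth-first search from the axiom, keeping every derivable
    string within the size bound so the search space stays finite."""
    seen = {axiom}
    frontier = deque([axiom])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                frontier.append(t)
    return sorted(seen)

print(theorems_up_to(6))
```

One caveat the sketch makes visible: capping the length of intermediate strings can miss short theorems whose only derivations pass through longer ones, which is exactly the “short truths with long proofs” worry from earlier in the thread.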
While Gödel killed Hilbert’s program as a matter of historical fact, it was later Tarski who proved the theorem that truth is undefinable: arithmetical truth cannot be defined within arithmetic itself.