In what conceivable (which does not imply logically possible) universes would Rationalism unearth only some truths, not all truths? That is, in what universes would some realms of truth be hidden to Rationalists? To simplify, I mean largely the empiricist aspect of it: tying ideas to observations via prediction. What conceivable universes have non-observational truths, for example Platonic/Kantian “pure a priori deduction” type mental-only truths? Imagine for convenience’s sake a Matrix-type simulated universe, not necessarily a natural one, so it does not really need to be lawful nor unfold from basic laws.
Reason for asking: if you head over to a site like The Orthosphere, they will tell you Rationalism can only find some but not all truths. And one good answer would be: “This could happen in universes of the type X, Y, Z. What are your reasons for thinking ours could be one of them?”
You don’t need to posit crazy things; just think about selection bias—are the sorts of people who tend to become rationalists randomly sampled from the population? If not, why wouldn’t there be blind spots in such people just based on that?
Yes, but if I get the idea right, it is to learn to think in a self-correcting, self-improving way. For example, maybe Kanazawa is right that intelligence suppresses instincts / common sense, but a consistent application of rationality would sooner or later lead to discovering this and forming strategies to correct it.
For this reason, it is more about the rules (of self-correction, self-improvement, self-updating sets of beliefs) than about the people. What kinds of truths would be potentially invisible to a self-correcting observationalist ruleset, even if it were practiced by all kinds of people?
Just pick any of a large set of things the LW-sphere gets consistently wrong. You can’t separate the “ism” from the people (the “ists”), in my opinion. The proof of the effectiveness of the “ism” lies in the “ists”.
Which things are you thinking of?
A lot of the opinions that much of LW inherited uncritically from EY, for example. That isn’t to say that EY doesn’t have many correct opinions, he certainly does, but a lot of his opinions are also idiosyncratic, weird, and technically incorrect.
As is true for most of us. The recipe here is to be widely read (LW has a poor-scholarship problem too). Not moving away from EY’s more idiosyncratic opinions is sort of a bad sign for the “ism.”
Could you mention some of the specific beliefs you think are wrong?
Having strong opinions on QM interpretations is “not even wrong.”
LW’s attitude on B is, at best, “arguable.”
Donating to MIRI as an effective use of money is, at best, “arguable.”
LW consequentialism is, at best, “arguable.”
Shitting on philosophy.
Rationalism as part of identity (aspiring rationalist) is kind of dangerous.
etc.
What I personally find valuable is “adapting the rationalist kung fu stance” for certain purposes.
Thank you.
B?
Bayesian.
I read that “B” and assumed that you had a reason for not spelling it out, so I concluded that you meant Basilisk.
Sorry, bad habit, I guess.
[Edited formatting] Strongly agree. http://lesswrong.com/lw/huk/emotional_basilisks/ is an experiment I ran which demonstrates the issue. Eliezer was unable to -consider- the hypothetical; it “had” to be fought.
The reason being, the hypothetical implies a contradiction in rationality as Eliezer defines it; if rationalism requires atheism, and atheism doesn’t “win” as well as religion, then the “rationality is winning” definition Eliezer uses breaks; suddenly rationality, via winning, can require irrational behavior. Less Wrong has a -massive- blind spot where rationality is concerned; for a web site which spends a significant amount of time discussing how to update “correctness” algorithms, actually posing challenges to “correctness” algorithms is one of the quickest ways to shut somebody’s brain down and put them in a reactionary mode.
It seems to me that he did consider your hypothetical, and argued that it should be fought. I agree: your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, “Suppose P were true? Then P would be true!”
BTW, you never answered his answer. Should I conclude that you are unable to consider his answer?
Eliezer also has Harry Potter in MoR withholding knowledge of the True Patronus from Dumbledore, because he realises that Dumbledore would not be able to cast it, and would no longer be able to cast the ordinary Patronus.
Now, he has a war against the Dark Lord to fight, and cannot take the time and risk of trying to persuade Dumbledore to an inner conviction that death is a great evil in order to enable him to cast the True Patronus. It might be worth pursuing after winning that war, if they both survive.
All this has a parallel with your hypothetical.
The hypothetical (P) is used to get people to draw some conclusions from it. These conclusions must, by definition, be logically implied by the original hypothetical or nobody would be able to make them, so you can describe them as being equivalent to P. Thus, all hypotheticals can be described, using your reasoning, as “Suppose P were true? Then P would be true!”
Furthermore, that also means “given Euclid’s premises, the sum of the angles of a triangle is 180 degrees” is a type of “Suppose P were true? Then P would be true!”—it begins with a P (Euclid’s premises) and concludes something that is logically equivalent to P.
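(As an aside, that deduction really does go some distance from the premises; a standard sketch, assuming Euclid’s parallel postulate, runs roughly as follows.)

```latex
% Sketch: through vertex A of triangle ABC, draw the line DE parallel to BC
% (this is where the parallel postulate is used). Alternate interior angles
% between the parallels give
\angle DAB = \angle ABC, \qquad \angle EAC = \angle ACB .
% The three angles at A lie along the straight line DE, so
\angle ABC + \angle BAC + \angle ACB
    = \angle DAB + \angle BAC + \angle EAC = 180^{\circ}.
```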
I suggest that an argument which begins with P and ends with something logically equivalent to P cannot be usefully described as “Suppose P would be true? Then P would be true!” This makes OW’s hypothetical legitimate.
The argument has to go some distance. OrphanWilde is simply writing his hypothesis into his conclusion.
His hypothetical is “suppose atheism doesn’t win”. His conclusion is not “then atheism doesn’t win”, so he’s not writing his hypothesis into his conclusion. Rather, his conclusion is “then rationality doesn’t mean what one of your other premises says it means”. That is not saying P and concluding P; it is saying P and concluding something logically equivalent to P.
But that would be a misleading description.
Of course it’s a misleading description, that’s my point. RK said that OW’s post was “Suppose P would be true? Then P would be true!” His reason for saying that, as far as I could tell, is that the conclusions of the hypothetical were logically implied by the hypothetical. I don’t buy that.
While the MoR example is a good one, don’t bother defending Eliezer’s response to the linked post. “Something bad is now arbitrarily good, what do you do?” is a poor strawman to counter “Two good things are opposed to each other in a trade space, how do you optimize?”
Don’t get me wrong, I like most of what Eliezer has put out here on this site, but it seems that he gets wound up pretty easily, and off-the-cuff comments from him aren’t always as well reasoned as his main posts. To let someone slide based on the halo effect, on a blog about rationality, is just wrong. Calling people out when they do something wrong—and being civil about it—is constructive, and let’s not forget it’s in the name of the site.
OW’s linked post still looks to me more like “Two good things are hypothetically opposed to each other because I arbitrarily say so.”
If it isn’t worth trying to persuade (whoever), he shouldn’t have commented in the first place. There are -lots- of posts that go through Less Wrong. -That- one bothered him. Bothered him on a fundamental level.
As it was intended to.
I’ll note that it bothered you too. It was intended to.
And the parallel is… apt, although probably not in the way that you think. I’m not Dumbledore, in this parallel.
As for his question? It’s not meant for me. I wouldn’t agonize over the choice, and no matter what decision I made, I wouldn’t feel bad about it afterwards. I have zero issue considering the hypothetical, and find it an inelegant and blunt way of pitting two moral absolutes against one another in an attempt to force somebody else to admit to an ethical hierarchy. The fact that Eliezer himself described the baby eater hypothetical as one which must be fought is the intellectual equivalent to mining the road and running away; he, as far as I know, -invented- that hypothetical, he’s the one who set it up as the ultimate butcher block for non-utilitarian ethical systems.
“Some hypotheticals must be fought”, in this context, just means “That hypothetical is dangerous”. It isn’t, really. It just requires giving up a single falsehood:
That knowing the truth always makes you better off. That that which can be destroyed by the truth, should be.
He already implicitly accepts that lesson; his endless fiction of secret societies keeping dangerous knowledge from the rest of society demonstrates this. The truth doesn’t always make things better. The truth is a very amoral creature; it doesn’t care if things are made better, or worse, it just is. To call -that- a dangerous idea is just stubbornness.
Not to say there -isn’t- danger in that post, but it is not, in fact, from the hypothetical.
Ah. People disagreeing prove you right.
We may disagree about what it means to “disagree”.
Eliezer’s complete response to your original posting was:

Would you kill babies if it was intrinsically the right thing to do? If not, under what other circumstances would you not do the right thing to do? If yes, how right would it have to be, for how many babies?

EDIT IN RESPONSE: My intended point had been that sometimes you do have to fight the hypothetical.
This, you take as evidence that he is “bothered on a fundamental level”, and you imply that this being “bothered on a fundamental level”, whatever that is, is evidence that he is wrong and should just give up the “simple falsehood” that truth is desirable.
This is argument by trying to bother people and claiming victory when you judge them to be bothered.
Since my argument in this case is that people can be “bothered”, then yes, it would be a victory.
However, since as far as I know Eliezer didn’t claim to be “unbotherable”, that doesn’t make Eliezer wrong, at least within the context of that discussion. Eliezer didn’t disagree with me, he simply refused the legitimacy of the hypothetical.
I’ve noticed that problem, but I think it is a bit dramatic to call it rationality-breaking. I think it’s more of a problem of calling two things, the winning thing and the truth-seeking thing, by one name.
Do you really think there’s a strong firewall in the minds of most of this community between the two concepts?
More, do you think the word “rationality”, in view of the fact that it happens to refer to two concepts which are in occasional opposition, makes for a mentally healthy part of one’s identity?
Eliezer’s sequences certainly don’t treat the two ideas as distinct. Indeed, if they did, we’d be calling “the winning thing” by its proper name, pragmatism.
Which values am I supposed to answer that by? Obviously it would be bad by epistemic rationality, but it keeps going because instrumental rationality brings benefits to people who can create a united front against the Enemy.
That presumes an enemy. If deliberate, the most likely candidate for the enemy in this case, to my eyes, would be the epistemological rationalists themselves.
I was thinking of the fundies.
I don’t think that’s argued. It’s also worth noting that the majority of MIRI’s funding over its history comes from a theist.
Well...
QM: Having strong positive beliefs on the subject would be not-even-wrong. Ruling out some is much less so. And that’s what he did. Note, I came to the same conclusion long before.
MIRI: It’s not uncritically accepted on LW more than you’d expect given who runs the joint.
Identity: If you’re not letting it trap you by thinking it makes you right, if you’re not letting it trap you by thinking it makes others wrong, then what dangers are you thinking of? People will get identities. This particular one seems well-suited to mitigating the dangers of identities.
Others: more clarification required
I think there’s plenty of criticism voiced about that concept on LW and there are articles advocating to keep one’s identity small.
And yet...
From time to time people use the label aspiring rationalist but I don’t think a majority of people on LW do.
Depends on how you decide what truth is, and what qualifies it to be “unearthed.”
But for one universe in which some truth, for some value of truth, can be unearthed, for some value of unearthed, while other truth can’t be:
Imagine a universe in which exactly 12.879% of all matter is a unique kind of matter that shares no qualities in common with any other matter, is almost entirely nonreactive with all other kinds of matter, and was created by a process not shared in common with any other matter, a process which had no effect whatsoever on any other matter. Any truths about this matter, including its existence and the percentage of the universe composed of it, would be completely non-observational.

The only reaction this matter has with any other matter occurs when it is in a specific configuration which requires extremely high levels of the local equivalent of negative entropy, at which point it emits a single electromagnetic pulse. This was used once by an intelligent species composed of this unique matter, who then went on to die in massive wars, to encode in a series of flashes of light every detail they knew about physics. The flashes were observed by one human-equivalent monk ascetic, who used a language similar to Morse code to write down the sequence of pulses, which he described as a holy vision.

Centuries later, these pulses were translated into mathematical equations which described the unique physics of this concurrent universe of exotic matter, but they provided no mechanism for proving the existence or nonexistence of the exotic matter, save that the equations are far beyond the mathematics of anyone alive at the time the signal was encoded, and it has become controversial whether or not the whole thing was an elaborate hoax by a genius.
What do you mean with “Rationalism”?
The LW standard definition is that it’s about systematized winning. If the Matrix overlords punish everybody who tries to do systematized winning, then it’s bad to engage in it. Especially when the Matrix overlords do it via mind reading. The Christian God might see it as a sin.
If you don’t use the LW definition of rationalism, then rationalism and empiricism are not the same thing. Rationalism generally refers to gathering knowledge by reasoning as opposed to gathering it by other ways such as experiments or divine revelation.
Gödel did prove that it’s impossible to find all truths. This website is called Less Wrong because it’s not about learning all truths but just about becoming less wrong.
That’s misleading. With a finite amount of processing power/storage/etc., you can’t find all proofs in any infinite system anyway. What we would need to show is that short truths can’t be found, which is a bit harder.
I don’t think that’s correct. My best understanding of Godel’s theorem is that if your system of logic is powerful enough to express itself, then you can create a statement like “this sentence is unprovable”. That’s pretty short and doesn’t rely on infiniteness.
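(For concreteness, here is the usual shape of the construction being gestured at; a sketch only, for a theory T strong enough to represent its own provability predicate.)

```latex
% Diagonal lemma: there is a sentence G ("this sentence is unprovable") with
T \vdash \; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner).
% If T is consistent, then T does not prove G; if T is omega-consistent,
% T does not prove \neg G either, so G is true but unprovable in T.
```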
The statement “this sentence is unprovable” necessarily includes all information on how to prove things, so it’s always larger than your logical system. It’s usually much larger, because “this sentence” requires some tricks to encode.
To see this another way, the halting problem can be seen as equivalent to Godel’s theorem. But it’s trivially possible to have a program of length X+C that solves the halting problem for all programs of length X, where C is a rather low constant; see https://en.wikipedia.org/wiki/Chaitin’s_constant#Relationship_to_the_halting_problem for how.
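(A toy sketch of the dovetailing trick behind that X+C claim, not the exact construction from the linked article. It assumes one oracle-supplied piece of information, namely how many programs in the set halt, which is roughly what a prefix of Chaitin’s Ω buys you. The names PROGRAMS, HALTING_COUNT, and classify are invented for this sketch.)

```python
# Toy illustration of the dovetailing idea: "programs" are Python generators,
# and a program halts when its generator is exhausted. HALTING_COUNT is
# assumed to be handed to us by an oracle; this is not a real halting solver.

def p_halts_fast():
    yield from range(3)          # halts after 3 steps

def p_halts_slow():
    yield from range(1000)       # halts after 1000 steps

def p_loops_forever():
    while True:
        yield                    # never halts

PROGRAMS = [p_halts_fast, p_halts_slow, p_loops_forever]
HALTING_COUNT = 2                # oracle-supplied: how many of PROGRAMS halt

def classify(programs, halting_count):
    """Run all programs one step at a time until the known number of halters
    has halted; anything still running at that point must loop forever."""
    running = {i: f() for i, f in enumerate(programs)}
    halted = set()
    while len(halted) < halting_count:
        for i, gen in list(running.items()):
            try:
                next(gen)
            except StopIteration:
                halted.add(i)
                del running[i]
    return {i: (i in halted) for i in range(len(programs))}

print(classify(PROGRAMS, HALTING_COUNT))
# {0: True, 1: True, 2: False}
```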
I’m not sure how much space it would take to write down formally, and I’m not sure it matters. At worst it’s a few pages, but not entire books, let alone some exponentially huge thing you’d never encounter in reality.
It’s also not totally arbitrary axioms that would never be encountered in reality. There are reasons why someone might want to define the rules of logic within logic, and then 99% of the hard work is done.
But regardless, the interesting thing is that such an unprovable sentence exists at all. That it’s not possible to prove all true statements with any system of logic. It’s possible that the problem is limited to this single edge case, but for all I know these unprovable sentences could be everywhere. Or worse, that it is possible to prove them, and therefore possible to prove false statements.
I think the halting problem is related, but I don’t see how it’s exactly equivalent. In any case the halting-problem workaround is totally impractical, since it would take multiple ages of the universe to prove the haltingness of a simple loop. And that’s if you are referring to the limited-memory version; otherwise I’m extremely skeptical.
That’s only if your logical system is simple. If you’re a human, then the system you’re using is probably not a real logical system, and is anyway going to be rather large.
See http://www.solipsistslog.com/halting-consequences-godel/
DeVliegendeHollander’s post didn’t speak about short truths but about all truths.
If we’re talking about all truths, then a finiteness argument shows we can never get all truths, no need for Godel. Godel shows that given infinite computing power, we still can’t generate all truths, which seems irrelevant to the question.
If we can prove all truths smaller than the size of the universe, that would be pretty good, and it isn’t ruled out by Godel.
While Gödel killed Hilbert’s program as a matter of historical fact, it was later Tarski who proved the theorem that truth is undefinable.
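(For reference, the theorem usually meant here, stated informally for arithmetic.)

```latex
% Tarski's undefinability theorem: there is no arithmetical formula True(x)
% such that, for every sentence phi of arithmetic,
\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
    \;\Longleftrightarrow\; \mathbb{N} \models \varphi .
```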
There’s no guarantee we should be able to find any truths using any method. It’s a miracle that the universe is at all comprehensible. The question isn’t “when can’t we learn everything?”, it’s “why can we learn anything at all?”.
Because entities which can’t, do not survive.
Counterexample: Plants. Do they learn?
Of course. Leaves turn to follow the sun, roots grow in the direction of more moist soil...
Is that really learning, or just reacting to stimuli in a fixed, predetermined pattern?
Does vaccination imply memory? Does being warned by another plant’s volatile metabolites that a herbivore is attacking the population?
(Higher) plants are organized by very different principles than animals; there is a never-ending debate about what constitutes ‘identity’ in them. Without first deciding upon that, can one speak about learning? I don’t think they have it, but their patterns of predetermined answers can be very specific.
Also, there is an interesting study, ‘Kin recognition, not competitive interactions, predicts root allocation in young Cakile edentula seedling pairs’. This seems to be more difficult to do than following the sun!
That just pushes the question back a step. Why can any entity learn?
In the spirit of Lumifer’s comment, anything we would consider an entity would have to be able to learn or we wouldn’t be considering it at all.
That would explain why all entities learn. Not why any entities learn. Ignoring things that can’t learn doesn’t explain the existence of things that can.
A more useful question to ask would be “how do entities, in fact, learn?” This avoids the trite answer, “because if they didn’t, we wouldn’t be asking the question”.
I think if we follow this chain of questions, what we’ll find at the end (except for turtles, of course) is the question “Why is the universe stable/regular instead of utterly chaotic?” A similar question is “Why does the universe even have negentropy?”
I don’t know any answer to these questions except for “That’s what our universe is”.
I suppose what I want to know is the answer to “What features of our universe make it possible for entities to learn?”.
Which sounds remarkably similar to DeVliegendeHollander’s question, perhaps with an implicit assumption that learning won’t be present in many (most?) universes.
The fact that the universe is stable/regular enough to be predictable. Subject predictability is a necessary requirement for learning.
For that matter, a world in which it is impossible for an organism to become better at surviving by modeling its environment (i.e. learning) is one in which intelligence can’t evolve.
(And a world in which it is impossible for one organism to be better at surviving than another organism, is one in which evolution doesn’t happen at all; indeed, life wouldn’t happen.)
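(A minimal Python toy of that last point, invented for illustration: the same trivial learner, which just predicts the most frequent symbol seen so far, does well in a world with a stable regularity and no better than chance in a maximally irregular one. The function name and the fake data are made up for the sketch.)

```python
# Toy demonstration that predictability/regularity is what makes learning pay.
import random
from collections import Counter

def learn_and_predict(sequence):
    """Predict each symbol as the most frequent symbol seen so far;
    return the fraction of correct predictions."""
    counts, correct = Counter(), 0
    for symbol in sequence:
        guess = counts.most_common(1)[0][0] if counts else None
        correct += (guess == symbol)
        counts[symbol] += 1
    return correct / len(sequence)

random.seed(0)
regular = ["sun"] * 900 + ["rain"] * 100                         # stable 90/10 regularity
random.shuffle(regular)
chaotic = [random.choice(["sun", "rain"]) for _ in range(1000)]  # 50/50 noise

print(learn_and_predict(regular))   # close to 0.9: the regularity is learnable
print(learn_and_predict(chaotic))   # close to 0.5: nothing to learn
```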
A universe where humans are running on brains with certain glitches that prevent them from coming to correct conclusions through reasoning about specific topics.