There’s no need to posit anything crazy; just think about selection bias: are the sorts of people who tend to become rationalists randomly sampled from the population? If not, why wouldn’t such people have blind spots on that basis alone?
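For a toy illustration of the mechanism (my own sketch, with made-up numbers, measuring nothing about any real community), here is what self-selection does to a trait’s average:

```python
# Minimal selection-bias sketch (hypothetical trait and selection rule, chosen only
# for illustration): if the chance of joining a group rises with some trait, the
# group's average on that trait is systematically higher than the population's.
import math
import random

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # trait scores

# Logistic self-selection: higher trait -> more likely to join.
joiners = [t for t in population if random.random() < 1 / (1 + math.exp(-2 * t))]

def mean(xs):
    return sum(xs) / len(xs)

print(f"population mean:    {mean(population):+.3f}")  # roughly +0.000
print(f"self-selected mean: {mean(joiners):+.3f}")     # clearly positive: built-in skew
```

Whatever the group then concludes “from experience” is filtered through that skew, which is all the blind-spot point needs.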
Yes, but if I understand the idea correctly, it is to learn to think in a self-correcting, self-improving way. For example, maybe Kanazawa is right that intelligence suppresses instincts / common sense, but a consistent application of rationality would sooner or later lead to discovering that and forming strategies to correct for it.
For this reason, it is more about the rules (of self-correction, self-improvement, self-updating sets of beliefs) than about the people. What kinds of truths would be potentially invisible to a self-correcting observationalist ruleset even if it were practiced by all kinds of people?
Just pick any of a large set of things the LW-sphere gets consistently wrong. You can’t separate the “ism” from the people (the “ists”), in my opinion. The proof of the effectiveness of the “ism” lies in the “ists”.
Which things are you thinking of?
A lot of opinions much of LW inherited uncritically from EY, for example. That isn’t to say that EY doesn’t have many correct opinions; he certainly does. But a lot of his opinions are also idiosyncratic, weird, and technically incorrect.
As is true for most of us. The recipe here is to be widely read (LW has a poor-scholarship problem too). Not moving away from EY’s more idiosyncratic opinions is sort of a bad sign for the “ism.”
Could you mention some of the specific beliefs you think are wrong?
Having strong opinions on QM interpretations is “not even wrong.”
LW’s attitude on B is, at best, “arguable.”
Donating to MIRI as an effective use of money is, at best, “arguable.”
LW consequentialism is, at best, “arguable.”
Shitting on philosophy.
Rationalism as part of identity (“aspiring rationalist”) is kind of dangerous.
etc.
What I personally find valuable is “adapting the rationalist kung fu stance” for certain purposes.
Thank you.
B?
Bayesian.
I read that “B” and assumed that you had a reason for not spelling it out, so I concluded that you meant Basilisk.
Sorry, bad habit, I guess.
[Edited formatting] Strongly agree. http://lesswrong.com/lw/huk/emotional_basilisks/ is an experiment I ran which demonstrates the issue. Eliezer was unable to -consider- the hypothetical; it “had” to be fought.
The reason is that the hypothetical implies a contradiction in rationality as Eliezer defines it: if rationalism requires atheism, and atheism doesn’t “win” as well as religion does, then the “rationality is winning” definition Eliezer uses breaks; suddenly rationality, via winning, can require irrational behavior. Less Wrong has a -massive- blind spot where rationality is concerned: for a website which spends a significant amount of time discussing how to update “correctness” algorithms, actually posing challenges to those “correctness” algorithms is one of the quickest ways to shut somebody’s brain down and put them into a reactionary mode.
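To make the structure explicit, here is a rough formalization (my own toy model with stipulated payoffs; the names and numbers are not from the linked post):

```python
# Toy model of the claimed tension: stipulate that religion "wins" more, then check
# whether any single belief satisfies both readings of "rationality" at once.
PAYOFF = {"atheism": 0, "religion": 1}  # the hypothetical's stipulation, nothing more

def epistemically_rational(belief: str) -> bool:
    # Reading 1: rationality as truth-tracking, taken here to require atheism.
    return belief == "atheism"

def instrumentally_rational(belief: str) -> bool:
    # Reading 2: "rationality is winning" -- pick whatever pays off best.
    return PAYOFF[belief] == max(PAYOFF.values())

both = [b for b in PAYOFF if epistemically_rational(b) and instrumentally_rational(b)]
print(both)  # [] -- under the stipulated payoffs, the two readings cannot both be met
```

The point is only that the conflict follows from the stipulated payoffs plus the two readings of “rationality” together, not from either one alone.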
It seems to me that he did consider your hypothetical, and argued that it should be fought. I agree: your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, “Suppose P were true? Then P would be true!”
BTW, you never answered his answer. Should I conclude that you are unable to consider his answer?
Eliezer also has Harry Potter in MoR withholding knowledge of the True Patronus from Dumbledore, because he realises that Dumbledore would not be able to cast it, and would no longer be able to cast the ordinary Patronus.
Now, he has a war against the Dark Lord to fight, and cannot take the time and risk of trying to persuade Dumbledore to an inner conviction that death is a great evil in order to enable him to cast the True Patronus. It might be worth pursuing after winning that war, if they both survive.
All this has a parallel with your hypothetical.
The hypothetical (P) is used to get people to draw some conclusions from it. These conclusions must, by definition, be logically implied by the original hypothetical, or nobody would be able to draw them, so you can describe them as being equivalent to P. Thus, all hypotheticals can be described, using your reasoning, as “Suppose P were true? Then P would be true!”
Furthermore, that also means “given Euclid’s premises, the sum of the angles of a triangle is 180 degrees” is a type of “Suppose P were true? Then P would be true!”—it begins with a P (Euclid’s premises) and concludes something that is logically equivalent to P.
I suggest that an argument which begins with P and ends with something logically equivalent to P cannot be usefully described as “Suppose P were true? Then P would be true!” This makes OW’s hypothetical legitimate.
The argument has to go some distance. OrphanWilde is simply writing his hypothesis into his conclusion.
His hypothetical is “suppose atheism doesn’t win”. His conclusion is not “then atheism doesn’t win”, so he’s not writing his hypothesis into his conclusion. Rather, his conclusion is “then rationality doesn’t mean what one of your other premises says it means”. That is not saying P and concluding P; it is saying P and concluding something logically equivalent to P.
But that would be a misleading description.
Of course it’s a misleading description; that’s my point. RK said that OW’s post was “Suppose P were true? Then P would be true!” His reason for saying that, as far as I could tell, was that the conclusions of the hypothetical were logically implied by the hypothetical. I don’t buy that.
While the MoR example is a good one, don’t bother defending Eliezer’s response to the linked post. “Something bad is now arbitrarily good, what do you do?” is a poor strawman to counter “Two good things are opposed to each other in a trade space, how do you optimize?”
Don’t get me wrong, I like most of what Eliezer has put out here on this site, but it seems that he gets wound up pretty easily, and off-the-cuff comments from him aren’t always as well reasoned as his main posts. To let someone slide based on the halo effect, on a blog about rationality, is just wrong. Calling people out when they do something wrong, and being civil about it, is constructive; let’s not forget it’s in the name of the site.
OW’s linked post still looks to me more like “Two good things are hypothetically opposed to each other because I arbitrarily say so.”
If it isn’t worth trying to persuade (whoever), he shouldn’t have commented in the first place. There are -lots- of posts that go through Less Wrong. -That- one bothered him. Bothered him on a fundamental level.
As it was intended to.
I’ll note that it bothered you too. It was intended to.
And the parallel is… apt, although probably not in the way that you think. I’m not Dumbledore, in this parallel.
As for his question? It’s not meant for me. I wouldn’t agonize over the choice, and no matter what decision I made, I wouldn’t feel bad about it afterwards. I have zero issue considering the hypothetical, and find it an inelegant and blunt way of pitting two moral absolutes against one another in an attempt to force somebody else to admit to an ethical hierarchy. The fact that Eliezer himself described the baby-eater hypothetical as one which must be fought is the intellectual equivalent of mining the road and running away; he, as far as I know, -invented- that hypothetical, and he’s the one who set it up as the ultimate butcher block for non-utilitarian ethical systems.
“Some hypotheticals must be fought”, in this context, just means “That hypothetical is dangerous”. It isn’t, really. It just requires giving up a single falsehood:
That knowing the truth always makes you better off. That that which can be destroyed by the truth, should be.
He already implicitly accepts that lesson; his endless fiction of secret societies keeping dangerous knowledge from the rest of society demonstrates this. The truth doesn’t always make things better. The truth is a very amoral creature; it doesn’t care whether things are made better or worse, it just is. To call -that- a dangerous idea is just stubbornness.
Not to say there -isn’t- danger in that post, but it is not, in fact, from the hypothetical.
Ah. People disagreeing prove you right.
We may disagree about what it means to “disagree”.
Eliezer’s complete response to your original posting was:

Would you kill babies if it was intrinsically the right thing to do? If not, under what other circumstances would you not do the right thing to do? If yes, how right would it have to be, for how many babies?

EDIT IN RESPONSE: My intended point had been that sometimes you do have to fight the hypothetical.
This, you take as evidence that he is “bothered on a fundamental level”, and you imply that this being “bothered on a fundamental level”, whatever that is, is evidence that he is wrong and should just give up the “single falsehood” that truth is desirable.
This is argument by trying to bother people and claiming victory when you judge them to be bothered.
Since my argument in this case is that people can be “bothered”, then yes, it would be a victory.
However, since as far as I know Eliezer didn’t claim to be “unbotherable”, that doesn’t make Eliezer wrong, at least within the context of that discussion. Eliezer didn’t disagree with me; he simply rejected the legitimacy of the hypothetical.
I’ve noticed that problem, but I think it is a bit dramatic to call it rationality breaking. I think it’s more a problem of calling two things, the winning thing and the truth-seeking thing, by one name.
Do you really think there’s a strong firewall in the minds of most of this community between the two concepts?
More, do you think the word “rationality”, given that it happens to refer to two concepts which are occasionally in opposition, makes for a mentally healthy part of one’s identity?
Eliezer’s sequences certainly don’t treat the two ideas as distinct. Indeed, if they did, we’d be calling “the winning thing” by its proper name, pragmatism.
Which values am I supposed to answer that by? Obviously it would be bad by epistemic rationality, but it keeps going because instrumental rationality brings benefits to people who can create a united front against the Enemy.
That presumes an enemy. If deliberate, the most likely candidate for the enemy in this case, to my eyes, would be the epistemological rationalists themselves.
I was thinking of the fundies.
I don’t think that’s argued. It’s also worth noting that the majority of MIRI’s funding over its history comes from a theist.
Well...
QM: Having strong positive beliefs on the subject would be not-even-wrong. Ruling out some interpretations is much less so, and that’s what he did. Note that I came to the same conclusion long before.
MIRI: It’s not accepted on LW more uncritically than you’d expect, given who runs the joint.
Identity: If you’re not letting it trap you by thinking it makes you right, if you’re not letting it trap you by thinking it makes others wrong, then what dangers are you thinking of? People will get identities. This particular one seems well-suited to mitigating the dangers of identities.
Others: more clarification required
I think there’s plenty of criticism voiced about that concept on LW and there are articles advocating to keep one’s identity small.
And yet...
From time to time people use the label “aspiring rationalist”, but I don’t think a majority of people on LW do.