It never has been, so far. Maybe we should start encouraging new posters to find good ways of coping with such feelings instead of shielding them from more and more “political” subjects?
We used to be stronger as a community.
In the real world, the inability to avoid mindkilling will cripple anyone’s rationalist dojo.
A while ago I made a suggestion for a poll to interrogate LW users’ beliefs on subjects deemed to be mindkilling. There were two reasons for this: to map the overall space of ideas where mindkilling takes place, and to look for areas that turned out to be overwhelmingly one-sided.
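(To make “overwhelmingly one-sided” concrete, here is a rough sketch of how such poll results might be scored; the topics, vote counts, and threshold are all invented for illustration.)

```python
# Hypothetical sketch: flag poll topics where responses are "overwhelmingly
# one-sided". Topics, vote counts, and the 0.9 cutoff are made up.
poll = {
    "topic_a": {"agree": 182, "disagree": 14},
    "topic_b": {"agree": 95, "disagree": 88},
    "topic_c": {"agree": 11, "disagree": 140},
}

ONE_SIDED_THRESHOLD = 0.9  # arbitrary cutoff for "overwhelmingly one-sided"

for topic, votes in poll.items():
    total = sum(votes.values())
    majority_share = max(votes.values()) / total
    flag = "  <- one-sided" if majority_share >= ONE_SIDED_THRESHOLD else ""
    print(f"{topic}: majority share {majority_share:.2f}{flag}")
```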
I go to great pains to think dispassionately about things (as, I imagine, do a lot of LWers), but there are still some subjects which I know I can’t think about objectively. More worryingly, I wonder what subjects I don’t notice I can’t think about objectively.
One warning sign is attributing disagreement with your views on a subject to “bias”, and then engaging in armchair speculation about the psychological defects that must be responsible for this bias. For an example, see the article linked in the original posting, and almost the whole of this thread.
An especially egregious variation on this theme is evolutionary psychological speculation. I speculate that people do this because in the ancestral environment your audience wouldn’t call you out on it if you came up with a fully general explanation of something and asserted it confidently as long as your audience already agreed with your conclusion.
Precisely. E.g., the charity diversification disagreement I ran into here several times (even before I formed an opinion on AI stuff). I said, on several occasions, that non-diversification results in larger rewards for finding and exploiting deficiencies in whatever charity-ranking algorithms are employed; I even provided a clear example from my area of expertise of how targets respond to aiming methods. Nobody has ever tried to refute this argument, or even claimed that the effect is not strong enough, or anything of the sort. Every single time, someone just posts a reference to some fallacy which is entirely irrelevant to my argument and asserts that it is the cause of my view, repeatedly (and that typically gets upvotes). One gets more engaging discussion by simply asserting that you guys are wrong, with no explanation given, than by providing a well-defined argument, because the argument itself doesn’t make a damn difference unless it is structured in the format of the ‘assert a bias’ game. (Whenever I get upvotes for being contrarian, it’s usually for really shitty arguments following the ‘assert a bias’ format rather than the ‘there is such-and-such a mechanism’ format.)
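To illustrate the mechanism with a toy example (all numbers here are invented): under “give everything to the top-ranked charity”, a small amount of score inflation captures the entire donation pool, whereas under a proportional split the same inflation only shifts a marginal share.

```python
# Toy illustration (invented numbers): the payoff to gaming a
# charity-ranking algorithm under "give everything to the top pick"
# versus spreading donations in proportion to scores.
def winner_take_all(scores, pool):
    # Entire pool goes to the single highest-scoring charity.
    best = max(scores, key=scores.get)
    return {name: (pool if name == best else 0.0) for name in scores}

def proportional(scores, pool):
    # Pool is split in proportion to scores.
    total = sum(scores.values())
    return {name: pool * s / total for name, s in scores.items()}

honest = {"charity_a": 10.0, "charity_b": 9.0, "charity_c": 8.0}
gamed = dict(honest, charity_c=10.5)  # charity_c inflates its measured score
pool = 1_000_000

for allocate in (winner_take_all, proportional):
    gain = allocate(gamed, pool)["charity_c"] - allocate(honest, pool)["charity_c"]
    print(f"{allocate.__name__}: charity_c gains {gain:,.0f} by gaming the ranking")
```

Under winner-take-all the gain from gaming is the whole pool; under the proportional split it is only a fraction of it, which is exactly the difference in incentive the argument points at.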
That’s quite a sensitive test, though. I’m trying to make my views unbiased. If I succeed, someone who still exhibits a greater amount of bias will either disagree with me, or I’ll disagree with their reasoning.
Well, you might have different priors, leading to different posterior beliefs from the same data; or you might have different values, leading to different decisions or policy prescriptions from the same descriptive beliefs.
(One might expect that a person raised in a large close-knit extended working-class immigrant family might have different values regarding economics than a person raised in a small individualistic nuclear middle-class ethnic-majority family, for instance.)
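To illustrate the first point with a toy example (the priors and data are invented): two people with different Beta priors who observe the same 6-out-of-10 outcome end up with different posterior beliefs.

```python
# Toy example (invented numbers): two people with different Beta priors
# observe the same data -- 6 successes in 10 trials -- and end up with
# different posterior beliefs about the underlying rate.
data_successes, data_trials = 6, 10

priors = {
    "optimist":  (9, 1),   # Beta(9, 1): expects high rates
    "pessimist": (1, 9),   # Beta(1, 9): expects low rates
}

for name, (a, b) in priors.items():
    post_a = a + data_successes
    post_b = b + (data_trials - data_successes)
    mean = post_a / (post_a + post_b)
    print(f"{name}: posterior mean {mean:.2f}")
```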
Note that I said someone who is more biased in an arena will disagree with me, not that someone who disagreed with me in an arena was exhibiting more bias.
In the real world, a dojo (rationalist or otherwise) that anyone can walk into at any time and join in any of the exercises, without any filtering or guidance or partnering, is pretty much guaranteed to end up crippled.
Politics is the mindkiller is the mindkiller.
“Politics is the mindkiller” is politics.
Yes, I guess I can agree with that.
No, I meant the subject of markets. I’d think of it as a mindkilling subject IRL, but not here.
Or for that matter most of the sequences.