Insofar as your answer makes predictions about how actual “rationalists” behave, it would seem to be at least partly falsified: empirically, it turns out that many rationalists do not respond well to that particular suggestion (“modify chickens to not feel pain”).
(The important thing to note, about the above link, isn’t so much that there are disagreements with the proposal, but that the reasons given for those disagreements are fairly terrible—they are mostly non sequiturs, with a dash of bad logic thrown in. This would seem to more closely resemble the way you describe a “stereotypical non-rationalist” behaving than a “stereotypical rationalist”.)
In our argument in the comments to my post on zetetic explanations, I was a bit worried about pushing back too hard socially. I had a vague sense that there was something real and bad going on that your behavior was a legitimate immune response to, and that even though I thought and continue to think that I was a false positive, it seemed pretty bad to contribute to marginalization of one of the only people visibly upset about some sort of hard-to-put-my-finger-on shoddiness going on. It’s very important to the success of an epistemic community to have people sensing things like this, and promote that sort of alarm.
I’ve continued to try to track this, and I can now see somewhat more clearly a really sketchy pattern, which you’re one of the few people to consistently call out when it happens. This comment is a good example. It seems like there’s a tendency to conflate the stated ambitions and actual behavior of ingroups like Rationalists and EAs, when we wouldn’t extend this courtesy to the outgroup, in a way that subtly shades corrective objections as failures to get with the program.
This kind of thing is insidious, and can be done by well-meaning people. While I still think my zetetic explanation post was a different sort of slackness, there was a time when I’d have written posts like Donald Hobson’s, and I wasn’t intentionally trying to fool anyone. I was just … a certain flavor of enthusiastic and hopeful that gets a pass when and only when it flatters the ingroup’s prejudices.
I think it’s helpful and important for you to continue to point out object-level errors like this one, but it’s also important to track which errors seem like part of a pattern of motivated error, and which seem to be mere mistakes. The former class seems much more dangerous to me, since such errors are correlated.
Thank you for the encouragement, and I’m glad you’ve found value in my commentary.
… it’s also important to track which errors seem like part of a pattern of motivated error, and which seem to be mere mistakes. The former class seems much more dangerous to me, since such errors are correlated.
I agree with this as an object-level policy / approach, but I think not quite for the same reason as yours.
It seems to me that the line between “motivated error” and “mere mistake” is thin, hard to locate, and possibly nonexistent. We humans are very good at self-deception, after all. Operating on the assumption that something can be identified as clearly a “mere mistake” (or, conversely, as clearly a “motivated error”) is dangerous.
That said, I think that there is clearly a spectrum, and I do endorse tracking at least roughly in which region of the spectrum any given case lies, because doing so creates some good incentives (i.e., it avoids disincentivizing post-hoc honesty). On the other hand, it also creates some bad incentives, e.g. the incentive for the sort of self-deception described above. Truthfully, I don’t know what the optimal approach is here. Constant vigilance against any failures in this whole class is, however, warranted in any case.
I agree that not all rationalists would want wireheaded chickens; maybe they don’t care about chicken suffering at all. I also agree that you sometimes see bad logic and non sequiturs in the rationalist community. Non-rationalist, motivated, emotion-driven thinking is the way that humans think by default. The rationalist community is trying to think a different way, sometimes successfully. Illustrating a junior rationalist having an off day and doing something stupid doesn’t illuminate the concept of rationality, any more than seeing a beginner juggler drop balls shows you what juggling is.