Note – When I first started planning this article I was hoping for more down-to-earth examples, but I struggled to find any.
This is a sign that you may be beating up a straw man (which is fun to write, but not as much fun to read). If your big insight doesn’t cash out in direct practical advice or in illumination of previously confusing phenomena, be very suspicious.

Furthermore, I think you’ve chosen poor examples in the “perfect Bayesians do X” category. The reference to Aumann’s Agreement Theorem in the Bayesian Judo post was a joke, and the example from the comment wasn’t suggesting you naively implement it in real life.

Finally, you should be aware of the prior discussion here on this topic.

These bugs aren’t fatal, but they’re good examples of why one’s first post ought to be published in the Discussion section rather than at the top level (and promoted later, if everything checks out).
I actually deleted this post about three seconds after I first published it, and only put it back up here after a number of people asked me to. On that post, several people also pointed out some more near-mode examples.

Ben gave three examples of common far-mode fallacies. I’m not sure I agree with the one about changing our minds, though he makes valid points there; his other two far-mode examples are pretty spot-on.
I suspect that the majority of fallacies LWers commit are far-mode, simply because humans are naturally bad at far-mode thinking and LWers are no different. The difference is that, unlike everyone else, LWers fail to compartmentalize, so flawed far-mode thinking has a higher potential to be dangerous for them.
So, real-world examples of far-mode fallacies are good, and something I would like to see more of; this is distinct from non-real-world examples of fallacies (whether near-mode or far-mode).
One of the examples I brought up in the depublished post concerned motivation. People investigate motivated people, and they can’t find any motivated people who don’t have methods. So, naturally, they go out to the (book)store and purchase some methods, expecting that wearing and using those methods will make them a motivated person. This seems like a case of the dressing-like-a-winner fallacy that isn’t really a straw man, and it does clear up some confusion regarding akrasia and the like.
(I did also recommend removing the “mathematically rational” section; I was concerned that it didn’t fit as well as the other two.)