If we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth.
And if you play the lottery long enough, you’ll eventually win. When your goal is to find something, approach usually works better than avoidance. This is especially true for learning—I remember reading a book where a seminar presenter described an experiment he ran in his seminars: a volunteer was sent out of the room while the group picked an object in it.
After the volunteer returned, their job was to find the object while a second volunteer rang a bell either whenever they moved closer to it, or whenever they moved further away. Most of the time, a volunteer receiving only negative feedback gave up in disgust after several minutes of frustration, while volunteers receiving positive feedback usually identified the right object in a fraction of the time.
In effect, learning what something is NOT only negligibly decreases the search space, despite it still being “less wrong”.
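To make the search-space point concrete, here is a toy sketch (my own illustration, not from the original discussion) of a number-guessing game over n candidates. With only "wrong, try again" feedback, each guess eliminates a single candidate; graded "warmer/colder" feedback—modeled here as directional too-low/too-high feedback, binary-search style—halves the remaining space per guess.

```python
import random

def find_with_elimination(target, n, rng):
    """Negative-only feedback: each 'wrong' removes just one candidate."""
    candidates = list(range(n))
    rng.shuffle(candidates)
    for steps, guess in enumerate(candidates, start=1):
        if guess == target:
            return steps

def find_with_direction(target, n):
    """Graded feedback (modeled as 'too low/too high'): bisect the space."""
    lo, hi, steps = 0, n - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        steps += 1
        if mid == target:
            return steps
        if mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
```

On n = 1024 candidates, elimination takes about n/2 ≈ 512 guesses on average, while directional feedback never needs more than 11—which is the sense in which a single "that's not it" shrinks the search space only negligibly.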
(Btw, I suspect you were downvoted because it’s hard to tell exactly what position you’re putting forth—some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way. I’m also not clear how you get from the other points to “the ultimate way to be less wrong is radical skepticism”, unless you mean lesswrong.com-style less wrongness, rather than more-rightness. So, the overall effect is more than a little confusing to me, though I personally didn’t downvote you for it.)
Thanks, pjeby; I can see how what I am advocating might be confusing. I’ve edited the sentence you quote to show that it is a view I am arguing against, one which seems implicit in an approach focused on debiasing.
In effect, learning what something is NOT only negligibly decreases the search space, despite it still being “less wrong”.
Yes, this is exactly the point I was making.
Btw, I suspect you were downvoted because it’s hard to tell exactly what position you’re putting forth—some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way.
Rather than trying to explain my previous post, I think I’ll try to summarize my view from scratch.
The project of “less wrong” seems to be more about how to avoid cognitive and epistemological errors than about how to achieve cognitive and epistemological successes.
Now, in a sense, both an error and a success are “wrong,” because even what seems like a success is unlikely to be completely true. Take, for instance, Newton’s physics: a success, even though it was later corrected by Einstein’s.
Yet even though Einstein’s physics is “less wrong” than Newton’s, I think this is a trivial sense of the phrase, and one which might mislead us. Cognitively focusing on being “less wrong” without sufficiently developed criteria for how we should formulate or recognize reasonable beliefs will lead to underconfidence, stifled creativity, missed opportunities, and eventually radical skepticism as a reductio ad absurdum. Darwin figured out his theory of evolution by studying nature, not (merely) by studying the biases of creationists or other biologists.
Being “less wrong” is a trivially correct description of what occurs in rationality, but I argue that focusing on being “less wrong” is not a complete way to actually practice rationality from the inside—at least, not a rationality that hopes to discover anything novel or important.
Of course, nobody at Overcoming Bias or LessWrong actually thinks that debiasing is sufficient for rationality. Nevertheless, for one reason or another, there is an imbalance: much of the material focuses on avoiding failure modes, and comparatively little on seeking success modes.