What bothers me is that it’s not aimed at being “more right,” which is not at all the same as being “less wrong.”
I’ve also had mixed feelings about the concept of being “less wrong.” Anyone else?
Of course, it is harder to identify and articulate what is right than what is wrong: we know many ways of thinking that lead away from truth, but it is harder to know which ways of thinking lead toward it. So the phrase “less wrong” might merely be an acknowledgment of fallibilism. All our ideas are riddled with mistakes, but it’s possible to make fewer mistakes, or less egregious ones.
Yet “less wrong” and “overcoming bias” sound kind of like “playing to not lose,” rather than “playing to win.” There is much more material on these projects about how to avoid cognitive and epistemological errors, rather than about how to achieve cognitive and epistemological successes. Eliezer’s excellent post on underconfidence might help us protect an epistemological success once we somehow find one, and protect it even from our own great knowledge of biases, yet the debiasing program of LessWrong and Overcoming Bias is not optimal for showing us how to achieve such successes in the first place.
The idea might be that if we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth. Yet without any basis for moving towards the truth, we will probably just run into even more falsehood, because there are exponentially more possible crazy thoughts than sane thoughts. Process of elimination is really only good for solving certain types of problems, where the right answer is among our options and the number of false options to eliminate is finite and manageable.
If we are in search of a Holy Grail, we need a better plan than being able to identify all the things that are not the Holy Grail. Knowing that an African swallow is not a Holy Grail will certainly keep us from failing to find the true Holy Grail by mistaking a bird for it, but it tells us absolutely nothing about where to actually look for the Holy Grail.
The ultimate way to be “less wrong” is radical skepticism. As a fallibilist, I am fully aware that we may never know when or if we are finding the truth, but I do think we can use heuristics to move towards it, rather than merely trying to move away from falsehood and hoping we bump into the truth backwards. That’s why I’ve been writing about heuristics here and here, and why I am glad to see Alicorn writing about heuristics to achieve procedural knowledge.
For certain real-world projects that shall-not-be-named to succeed, we will need to have some great cognitive and epistemological successes, not merely avoid failures.
If we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth.
And if you play the lottery long enough, you’ll eventually win. When your goal is to find something, approach usually works better than avoidance. This is especially true for learning—I remember reading a book where a seminar presenter described an experiment he did in his seminars, of sending a volunteer out of the room while the group picked an object in the room.
After the volunteer returned, their job was to find the object, while a second volunteer rang a bell either when they got closer or when they got further away, depending on the condition. Most of the time, a volunteer receiving only negative feedback gives up in disgust after several minutes of frustration, while those receiving positive feedback usually identify the right object in a fraction of the time.
In effect, learning what something is NOT only negligibly decreases the search space, despite it still being “less wrong”.
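The search-space point can be made concrete with a toy simulation (my own sketch; the setup and numbers are assumptions, not from the book mentioned above). Guessing a hidden position among 1,000, a flat “no, not that one” removes exactly one candidate per guess, while “warmer/colder” feedback lets the searcher halve the remaining space each step:

```python
import random

# Toy sketch (hypothetical setup): find a hidden position among N
# candidates under two kinds of feedback.
N = 1000
hidden = random.randrange(N)

# Negative-only feedback: each "no" eliminates exactly one candidate,
# so on average ~N/2 guesses are needed.
neg_steps = 0
pool = list(range(N))
random.shuffle(pool)
for guess in pool:
    neg_steps += 1
    if guess == hidden:
        break

# Directional ("warmer/colder") feedback supports binary search:
# each answer halves the remaining interval.
dir_steps, lo, hi = 0, 0, N - 1
while lo <= hi:
    dir_steps += 1
    mid = (lo + hi) // 2
    if mid == hidden:
        break
    elif mid < hidden:
        lo = mid + 1
    else:
        hi = mid - 1

# About log2(1000) ~ 10 directional steps, vs. hundreds of "no"s
# on average for pure elimination.
assert dir_steps <= 10
```

The asymmetry is the whole point: elimination shrinks the space linearly, while feedback that points toward the target shrinks it exponentially.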
(Btw, I suspect you were downvoted because it’s hard to tell exactly what position you’re putting forth—some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way. I’m also not clear how you get from the other points to “the ultimate way to be less wrong is radical skepticism”, unless you mean lesswrong.com-style less wrongness, rather than more-rightness. So, the overall effect is more than a little confusing to me, though I personally didn’t downvote you for it.)
Thanks, pjeby, I can see how it might be confusing what I am advocating. I’ve edited the sentence you quote to show that it is a view I am arguing against, and which seems implicit in an approach focused on debiasing.
In effect, learning what something is NOT only negligibly decreases the search space, despite it still being “less wrong”.
Yes, this is exactly the point I was making.
Btw, I suspect you were downvoted because it’s hard to tell exactly what position you’re putting forth—some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way.
Rather than trying to explain my previous post, I think I’ll try to summarize my view from scratch.
The project of “less wrong” seems to be more about how to avoid cognitive and epistemological errors than about how to achieve cognitive and epistemological successes.
Now, in a sense, both an error and a success are “wrong,” because even what seems like a success is unlikely to be completely true. Take, for instance, the success of Newton’s physics, even though it was later corrected by Einstein’s physics.
Yet even though Einstein’s physics is “less wrong” than Newton’s, I think this is a trivial sense of the phrase, one that might mislead us. Cognitively focusing on being “less wrong,” without sufficiently developed criteria for how we should formulate or recognize reasonable beliefs, will lead to underconfidence, stifled creativity, missed opportunities, and eventually radical skepticism as a reductio ad absurdum. Darwin figured out his theory of evolution by studying nature, not (merely) by studying the biases of creationists or other biologists.
Being “less wrong” is a trivially correct description of what occurs in rationality, but I argue that focusing on being “less wrong” is not a complete way to actually practice rationality from the inside, at least, not a rationality that hopes to discover any novel or important things.
Of course, nobody at Overcoming Bias or LessWrong actually thinks that debiasing is sufficient for rationality. Nevertheless, for one reason or another, there is an imbalance: much more material focuses on avoiding failure modes than on seeking success modes.
At least one person seems to think that this post is in error, and I would very much like to hear what might be wrong with it.