Never mind widgets. I overemphasized business in the original post. Any kind of reality check will do. Newton had a reality check, what about us? Not many accurate predictions here. Choose any real-world metric that suits you—just don’t degenerate into “what biases have you overcome today” soft-science bullshit.
I once joked that science has four levels, high to low: “this works”, “this is true”, “this sounds true”, “this sounds neat”. We here are still at number three, no?
I think you have too limited a picture of what searching for truth entails, and that we don’t have as great a difference between our views as you think.
Newton and Einstein used rationality to seek truth and bring unity to experience, not for practical results. But they were both smart enough to know they’d better check their results against experience, or they’d get the wrong answer and never be able to move further. If we’re smart, we’ll do the same, whether we’re after truth or whatever.
Someone once said there were two kinds of rich people—those who really like having luxury goods, and those for whom money is just a way to keep score. The same could apply to rationalists; there are those who want some specific practical goal or predictive ability, and there are others for whom the ability to achieve practical goals or make predictions is a way to keep score. Einstein was happy to hear his theory successfully predicted the path of light during an eclipse, I’m sure, but not because he was in it for the eclipse-light-predicting.
You’re right, we are more or less in agreement. The expression “to keep score” captures the topic perfectly. Pickup artists have attained a very accurate/predictive view of female mating psychology because they keep score. :-) I’d love to have something similarly objective for rationalism.
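For what it’s worth, the most literal version of “keeping score” I can imagine is a prediction log graded with a proper scoring rule. Here is a minimal sketch in Python; the claims, probabilities, and outcomes are invented purely for illustration:

```python
# A toy "scorecard" for rationalists: log probabilistic predictions,
# then grade them with the Brier score (0.0 is perfect calibration;
# 0.25 is what always answering 50% would earn).
predictions = [
    # (claim, probability assigned, what actually happened)
    ("It rains here tomorrow",       0.70, True),
    ("The project ships by Friday",  0.90, False),
    ("I finish the book this month", 0.40, True),
]

brier = sum((p - float(happened)) ** 2
            for _, p, happened in predictions) / len(predictions)
print(f"Brier score over {len(predictions)} predictions: {brier:.3f}")
```

A falling Brier score over months of logged predictions would be exactly the kind of objective, real-world metric being asked for here.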
“Newton and Einstein used rationality to seek truth and bring unity to experience, not for practical results. But they were both smart enough to know they’d better check their results against experience, or they’d get the wrong answer and never be able to move further.”

According to EY:

In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe solar eclipses and thereby test an experimental prediction of Einstein’s novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington’s observations failed to match his theory. Einstein famously replied: “Then I would feel sorry for the good Lord. The theory is correct.”

It seems like a rather foolhardy statement, defying the trope of Traditional Reality that experiment above all is sovereign. Einstein seems possessed of an arrogance so great that he would refuse to bend his neck and submit to Nature’s answer, as scientists must do. Who can know that the theory is correct, in advance of experimental test?
A typo in Yudkowsky’s article: “Traditional Reality” should read “Traditional Rationality.”
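As an aside, the specific number Eddington went out to measure is easy to reproduce from the textbook formula for light grazing the Sun, θ = 4GM/(c²R). The short calculation below is my own illustration, not part of the quoted article:

```python
# Light deflection at the Sun's limb: GR predicts theta = 4GM/(c^2 R),
# exactly twice the "Newtonian" half-deflection 2GM/(c^2 R).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg
c = 2.998e8     # speed of light, m/s
R = 6.957e8     # radius of the Sun, m

RAD_TO_ARCSEC = 206264.8  # arcseconds per radian

theta_gr = 4 * G * M / (c**2 * R)
print(f"GR prediction:        {theta_gr * RAD_TO_ARCSEC:.2f} arcsec")      # ~1.75
print(f"Newtonian prediction: {theta_gr / 2 * RAD_TO_ARCSEC:.2f} arcsec")  # ~0.87
```

The point of the 1919 test was that GR’s 1.75 arcseconds is exactly twice the roughly 0.87 arcseconds a naive Newtonian treatment predicts, so the two theories were cleanly distinguishable.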
Yes. I’ve been a semi-regular reader of OCB for about a year. I think it’s an interesting blog. But have I learned anything useful from it? Has it made any practical difference in the choices I make, either day-to-day or long-term? The answer is no. Admittedly, this may be my own fault. But I recall a post, not too long ago, soliciting people’s feedback on “the most important thing you learned from OCB in the past year,” or something of that sort. And while there were lots of people excitedly posting about how much OCB has taught them, the examples they gave were along the lines of “I learned the power of the fundamental attribution error!” or “I learned the importance of continually adjusting my priors!” with curiously few examples of real differences OCB made in anyone’s practical choices. This raises the question: if tweaking our rationality has no appreciable effect on anything, then how can we say we’re really tweaking our rationality at all? Perhaps we’re just swapping new explanations for fundamentally irrational processes that are far too buried and obscure to be accessible to us.
That said, I think things like the recent posts on akrasia are strong moves in the right direction: intellectually interesting, but with easy-to-grasp real-world implications.
OCB has changed people’s practical lives in some major ways. Not all of these are mine personally:
“I donated more money to anti-aging, risk reduction, etc.”
“I signed up for cryonics.”
“I wear a seatbelt in a taxi even when no one else does.”
“I stopped going to church but started hanging out socially with aspiring rationalists.”
“I decided rationality works and started writing down my goals and pathways to them.”
“I decided it’s important for me to think carefully about what my ultimate values are.”
Yes! Or even further, “I am now focusing my life on risk reduction and have significantly reduced akrasia in all facets of my life.”
This sounds an awful lot like one of the examples I gave above. OK, so you’re focused on “risk reduction” and “reducing akrasia.” So what does that mean? You’ve decided to buckle up, wear sunscreen, and not be so lazy? Can’t I get that from Reader’s Digest or my mom?
Telling people to buckle up is nothing special. Successfully persuading people to buckle up—helping people understand and fix the internal sources of error that stood in the way of doing so in the past—will save a life if you can do it enough.
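To make “will save a life if you can do it enough” concrete, here is the back-of-envelope expected-value arithmetic; every number below is an assumed placeholder, not a real crash statistic:

```python
# Back-of-envelope: how many people must you persuade to buckle up
# before you expect to have saved one life? All inputs are assumptions.
p_serious_crash = 0.03      # assumed lifetime chance of a serious crash
p_death_unbelted = 0.10     # assumed chance such a crash kills an unbelted rider
belt_effectiveness = 0.45   # assumed fraction of those deaths a belt prevents

lives_per_convert = p_serious_crash * p_death_unbelted * belt_effectiveness
print(f"Expected lives saved per person persuaded: {lives_per_convert:.5f}")
print(f"Persuasions needed for one expected life:  {1 / lives_per_convert:.0f}")
```

Under those made-up inputs, persuading on the order of 700 people buys one expected life, which is why the effect is real in aggregate yet invisible in any single case.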
The problem is that even though learning to identify and avoid certain biases will affect your behavior, there’s no easy way to articulate those effects. The benefit comes mainly from things not done, not from things done.
For instance, upon hearing a fallacious argument, a hearer who is aware of the fallacy will not believe it, where he previously would have. Or suppose he thinks something through on his own: previously, a bias would have caused him to think a certain thought, which would have led to a certain action. Now, having learned to identify the bias, he doesn’t even generate that thought, but instead another, which leads him to take a different action. While these things certainly do have an effect, they’re too subtle to identify. You’re not going to know the thoughts you avoided (even if you can try to guess), only the ones you’ve actually thought.
I feel this has largely been the case for me. My behavior has certainly been affected because I now think more clearly; of that, I’m pretty certain. But can I give any concrete examples? I’m afraid not. The effect is on too subtle a level for me to observe properly. But that doesn’t mean there aren’t any concrete examples; it only means I can’t verbalize them.
This debate has already played out in attacking and defending Pragmatism.
A lot of the rubrics for judging whether rationalism wins, or whether rationalism is an end in itself, involve assigning meaning and value at a very abstract level. Eliezer’s posts outline a reductionist, materialist standpoint with some strong beliefs about following the links of causality. Rationalism follows, but rationalism isn’t going to prove itself true.
Deciding that rationalism is the best answer for your axiomatic belief system requires taking a metaphysical stand; I think that if you are looking for a definite metaphysical reason to practice rationalism, then you are interested in something that the practice of rationalism is not going to help much with.
“Rationalism” as compared to what? Mysticism? Random decision-making? Of course rational behavior is going to be by far the best choice for achieving one’s particular ends. I wasn’t questioning the entire concept of rationalism, which clearly has been the driving force behind human progress for all of history. I was questioning how much making “tweaks”—the kind discussed here and on OCB—can do for us. Or have done for us. Put differently, is perseverating on rationality per se worth my time? Can anyone show that paying special attention to rationality has measurable results, controlling for IQ?
Mysticism and random decision-making are both acceptable and highly successful methods of making decisions; most of human history has relied on those two, and we still rely on them. If you are a consequentialist, you can ignore the process and just rate the outcome; who cares why nice hair is correlated with success? It just is! Why does democracy work?
What makes rationalism worth the time is probably your regard for the process itself or for its outcomes. If it’s the outcomes, then you might want to consider other options; following your biases and desires almost blindly works out pretty well for most people.