What are the strongest arguments that you’ve seen against rationality?
Well, it depends on what you mean by “rationality”. Here’s something I posted in 2014, slightly revised:
If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims:
Obtain a better model of the world by updating on the evidence of things unpredicted by your current model.
Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternatively: having correct beliefs helps humans achieve goals in the world, because correct beliefs enable correct predictions, and correct predictions enable goal-accomplishing actions. And the way to arrive at correct beliefs is to update them when their predictions fail.
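In rough formal terms, the two maxims correspond to Bayesian updating and expected-utility maximization. Here is a minimal sketch in standard notation (the symbols are mine, not from the original post): H a hypothesis, E observed evidence, a an action, o an outcome, and U a utility function:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, U(o)$$

The first equation is the updating maxim; the second is the prediction-for-action maxim, choosing the action whose predicted outcomes score best.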
We can call these the rules of Bayes’ world, the world in which updating and prediction are effective at accomplishing human goals. But Bayes’ world is not the only imaginable world. What if we deny each of these premises and see what we get? Other than Bayes’ world, which other worlds might we be living in?
To be clear, I’m not talking about alternatives to Bayesian probability as a mathematical or engineering tool. I’m talking about imaginable worlds in which Bayesian probability is not a good model for human knowledge and action.
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra’s world, the world of tragedy — in which the people who know best what the future will bring are the least able to do anything about it.
In the world of heroic myth, it is not oracles (good predictors) but rather heroes and villains (strong-willed people) who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Oracles possess the truth to arbitrary precision, but they accomplish nothing by it. Heroes and villains come to their predicted triumphs or fates not by believing and making use of prediction, but by ignoring or defying it.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the external world are relatively close to our priors; not much updating is needed there — but our goals are not known to us initially. In fact, we may be thoroughly deceived about what our goals are, or what satisfying them would look like.
We might consider this to be Buddha’s world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. In this world, when we choose actions that are unsatisfactory, it isn’t so much because we are acting on faulty beliefs about the external world, but because we are pursuing goals that are illusory or empty of satisfaction.
There are other models as well that could be extrapolated from denying other premises (explicit or implicit) of Bayes’ world. Each of these models would relate prediction, action, and goals in different ways: we might imagine Lovecraft’s world (knowledge causes suffering), Qoheleth’s world (perhaps similar to Buddha’s), Job’s world, or Nietzsche’s world.
Each of these models of the world — Bayes’ world, Cassandra’s world, Buddha’s world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes’ world, what evidence might suggest that we are actually in one of the others?
This is a perspective I hadn’t seen mentioned before, and it helps me understand why a friend of mine places so little value on the goal-oriented rationality material I’ve recommended to him.
Thank you very much for this post!
It’s worth noting that, from what I can tell at least (not having actually taken their courses), quite a bit of CFAR “rationality” training seems to deal with issues arising not directly from Bayesian math, but from the characteristics of human minds and society.
Rationality takes extra time and effort, and most people can get by without it. It is easier to go with the flow—easier on your brain, easier on your social life, and easier on your pocketbook. And worse, even if you decide you like rationality, you can’t just tune into the rationality hour on TV and do what they say—you actually have to come up with your own rationality! It’s way harder than politics, religion, or even exercise.
“It’s cold-hearted.”
This isn’t actually a strong argument, but many people find it persuasive.
It applies to certain kinds of rationality, but I don’t think it applies to rationality!CFAR or to the rationality I see at LW events in Germany.
People tend to discard artificial constructs after getting beaten a few times, and return to simply powering through.
It is hard, sometimes, to follow epistemic rationality when it seems to conflict with the instrumental kind. For example, when a friend and colleague cries me a river about her ongoing problems, I try to comfort her but also to forget the details, in case I betray her confidence the next minute while speaking to our other coworkers. Surely epistemic rationality requires committing information to memory as losslessly as possible? And yet I strive to remember the voice and not the words.
(A partial case of what people might mean by ‘rationality is cold’, I guess.)
You need to forget a fact lest you accidentally mention it?
Even an oblique reference or a vague reflection can be harmful.
Superstition hasn’t worked in the past, so it’s due to be right soon.