But “If you’re so rational, why ain’t you rich?” is sneaky-good and similar enough to hitch a ride. It asks: maybe you aren’t rational enough? And suddenly a scale is introduced.
An interesting data point: those who are rich (powerful, successful with the appropriate sex, etc.) are usually those who are willing to accept unpleasant truths regarding what is required of them.
It is generally not necessary for such people to actually discover or work out those truths, since most of them are readily apparent, available in books or other educational material, and of course learnable via “hard knocks”.
So, the rationality that “wins” the most (in bang-for-the-buck terms) is not so much being a perfect Bayesian or smart reasoner as it is the willingness to accept potentially unpleasant truths, including those that violate your most cherished ideals and preferences about the way the world should be.
(And unfortunately, the people who are most attracted to the idea of being right are usually also the people least willing to admit they might be wrong.)
I doubt that’s all the winning that’s possible. They just leaped hurdle number one, non-delusion.
I’m just saying that leaping that one hurdle is sufficient for the vast majority of people to take huge steps forward in their results. Outside of attempts to advance science or technology, there are very few things in life that can’t be had with only that much “rationality”.
This is perfectly in line with the definition of epistemic rationality, that is, building an accurate map of reality regardless of the pleasantness of the ‘reality landscape’ that needs to be mapped.
A map that reflects some features of reality and doesn’t reflect others based on their pleasantness to the mapper is not accurate.
That may well be, but in my experience people whose ideal is seeking “Truth” often have a tendency to reject truths that don’t match their other ideals. Or, on the flip side, they acknowledge the truths but become bitter and cynical, because actually acting upon those truths would violate their other ideals.
In other words, merely knowing the truth is not enough. It is accepting the truth—and acting on it—that is required.
(Subject, of course, to the usual caveat that instrumental rationalists do not require “the” truth, only usable models. Winning rationalists use whatever models produce results, no matter how ludicrous they sound or how obviously “untruthful” they are.)
It’s not clear to me whether you mean that accepting models that “produce results” will lead you to an actually true model, or that you think winners are willing to use obviously false approximations, or that you think winners believe falsely in order to win.
The first, I don’t really care about. Maybe the winner will get something “more true”, or maybe they’ll be talking phlogiston but still be able to light or put out fires and predict what will happen well enough under most circumstances.
The second is mainly what I mean—plenty of self-help techniques work, despite having ludicrous or awful theories of why/how they work. I see no point in people waiting for the theories to get better before they make progress.
As for the third, it depends on how you look at it. I think it is more accurate to say that winners are able to suspend disbelief, separating truth from usefulness.
But it’s important to understand what “suspending disbelief” actually means here. As I was explaining to a class yesterday, the difference between a confident state and a non-confident state is the number of simultaneous mental processes involved. (Subject to the disclaimer that everything I’m about to say here is not a “true” model, just a useful one!)
In a non-confident state, you run two processes: one to generate behavior, and another to limit it through self-critique, skepticism, analysis, and so on. And it doesn’t matter what the target of that “critique process” is… the theory, the person teaching it to you, your own ability to learn, what other people are thinking of you while you do it, whatever. What matters is not the content of the critique process, only that you’re running one.
Confidence, on the other hand, is just running the behavior-generating process. It’s literally “suspension of disbelief”, regardless of what the disbelief is targeted at. This is why so many self-help books urge suspension of disbelief while reading and trying the techniques they offer: it is in fact essential to being able to carry out any processes that make use of intuition or unconscious learning and computation.
(Because the conscious process ends up diverting needed resources or distracting one from properly attending to necessary elements of the ongoing experience.)
It’s also pretty essential to behaving in a confident way—when you’re confident, you’re not simultaneously generating behavior and critiquing; you fully commit to one or the other at any given moment.
Anyway, I think it is a confusion to call this “believing falsely”: it is the absence of something, not the presence of something. That is, it is simply the mental hygiene of refraining from mental processes that would interfere with carrying out a desired course of action. Intentionally believing falsely doesn’t make any sense, but refraining from interfering with yourself “acting as if” a model is true is an entirely different ball game.
The real purpose of the made-up theories found in self-help books is to give people a reason to let go of their disbeliefs and doubts: “I can’t do this”, “I’m no good at this”, “This stuff never works”, etc. If you can convince somebody to drop those other thoughts long enough to try something, you can get them to succeed. And as far as I have observed, this exact same principle is being taught by the self-help gurus, the marketing wizards, and even the pickup people.
And it’s probably a big part of why people think that being “too rational” is a success hazard… because it is, if you can’t let go of it when you’re trying to learn or do things that require unconscious competence.