But feel free to replace “self-serving bullshit” with whatever other specific deviation from rationality people may propose as a plan for winning more at life.
It seems to me like most deviations people propose have clear short-term gains, but their costs are less obvious.
It might be that in a world with accurate lie detectors there are specific deviations that actually improve people’s ability to win, but I’m not sure we currently live in a world where any deviation from rationality provides a net benefit.
I also like to quote from Baron’s textbook on rationality (which is also how Eliezer defined rationality):
The best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals. If it should turn out that following the rules of formal logic leads to eternal happiness, then it is “rational thinking” to follow the laws of logic (assuming that we all want eternal happiness). If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call “rational.”
When I argue that certain kinds of thinking are “most rational,” I mean that these help people achieve their goals. Such arguments could be wrong. If so, some other sort of thinking is most rational.
This theoretical argument seems to run into the problem that we are not perfectly rational anyway. A similar problem exists in consequentialism: in theory, you should choose the action with the best possible outcomes, but in practice that would require perfect knowledge about everything, which you don’t have. So you need to guess the probabilities of various plans going wrong, and to face your cognitive biases. And then you get the less elegant reasoning, such as “this plan seems like it would have wonderful consequences… but the outside view says that everyone I know who tried this in the past turned out to be deluded and caused a lot of harm… but I am smarter than them, and know more about rationality and biases… but maybe I shouldn’t do it anyway…”.
The problem with the textbook example is that things sometimes do not conclusively “turn out” this or that way. Instead we just get a lot of weak evidence that, taken together, maybe mostly points in one direction; but maybe that is just a consequence of some bias, or maybe there is a 99% chance that something is true but the consequences are horrible if we get it wrong (e.g. the “black swan” situations). Perhaps some day we will know the exact amount of rationality that produces the best outcome for a human in the 2020s, but I need to make some choices now.
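To make that last point concrete with assumed numbers (this is just an illustrative calculation, not anything from Baron): suppose acting on the belief yields a gain $g$ if it is true (probability 0.99) and a loss of $200g$ if it is false (probability 0.01). The expected value is $0.99 \cdot g - 0.01 \cdot 200g = -1.01g < 0$, so even a well-calibrated 99% confidence does not by itself settle whether acting on it is a good idea.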
There’s a question of what we mean by specific words. The talk about cognitive biases comes out of behavioral economics, and in economics the rational actor is one who makes utility-maximizing choices. As such, a person with a high amount of rationality is a utility maximizer in the terms of economics.
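To spell out that definition (a standard textbook formulation, not a quote from anyone in this thread): the rational actor facing possible states of the world $s$ with probabilities $p(s)$ chooses the action $a^*$ that maximizes expected utility, $a^* = \arg\max_a \sum_s p(s)\, u(a, s)$, where $u(a, s)$ is the utility of the outcome of taking action $a$ in state $s$.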
Talking about “the exact amount of rationality” isn’t that useful in that regard.
If you instead ask what amount of using the scout mindset produces the best outcome for a human in the 2020s, you have a much more concrete question. You might additionally split that into the question of whether having the scout mindset internally is useful and whether having it in an externally visible way is useful.
In a heavily political environment, a cynical person who says all the bullshit that helps them get ahead but internally knows it’s bullshit might be able to navigate better than the deluded believer. There’s a reason why Venkatesh Rao calls the people who fully believe the bullshit “clueless” and puts them at the bottom of the hierarchy.
A stereotypical programmer who sees themselves as valuing rationality and the truth might speak up in a project meeting by focusing on what truly matters for the business, while being ignorant of important truths about the political effects of speaking up. If you model that situation as the programmer being “rational” and the people saying the political bullshit at the meeting being “irrational”, you are going to have a poor understanding of the situation.
That’s an interesting fact and a good point!