I think this post is making a simplification which is common in our community, and at some point we need to acknowledge the missing nuance there. The implicit assumption is that rationality is obviously always much better than believing whatever is socially expedient, and everyone who rejects rationality is just making a foolish error. In truth, there are reasons we evolved to believe whatever is socially expedient[1], and these reasons are still relevant today. Specifically, this is a mechanism for facilitating cooperation (which IMO can be given a rational, game-theoretic explanation). Moreover, it seems likely that for most people, during most of history, this strategy was the right choice.
IMO there are two major reasons why in these times rationality is the superior strategy, at least for the type of people drawn to LessWrong and in some parts of the world. First, the stakes are enormous. The freedom we enjoy in the developed world and the pace of technological progress create many opportunities for large gains, from founding startups to literally saving the world from destruction. Given such stakes, the returns on better reasoning are large. Second, we can afford the cost. Because of freedom and individualism, we can profess unpopular beliefs without being punished too heavily for it. EDIT: Also, the Internet allows finding like-minded people even if you’re weird.
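To make the game-theoretic point concrete, here is a toy expected-payoff sketch in Python. The expected_payoff helper and all of the numbers are invented purely for illustration, not anyone’s actual model: the only point is the qualitative crossover, where conforming wins when accurate beliefs buy little and dissent is punished hard, and accuracy wins under the two conditions above.

```python
# Toy comparison of two belief strategies: profess the socially expedient belief
# vs. hold (and profess) the accurate belief. All payoff numbers are invented
# for illustration; only the qualitative crossover matters.

def expected_payoff(accuracy_stakes, social_penalty, cooperation_bonus):
    """Return (conform, accurate) payoffs for a one-shot interaction.

    accuracy_stakes:   value of acting on a true belief (decision quality)
    social_penalty:    cost of being seen holding the unpopular belief
    cooperation_bonus: benefit of being trusted as a loyal group member
    """
    conform = cooperation_bonus                    # trusted, but worse decisions
    accurate = accuracy_stakes - social_penalty    # better decisions, social cost
    return conform, accurate

# Low-stakes, high-punishment regime: conforming wins.
print(expected_payoff(accuracy_stakes=1, social_penalty=5, cooperation_bonus=3))   # (3, -4)

# Large stakes, mild punishment (the two reasons above): accuracy wins.
print(expected_payoff(accuracy_stakes=20, social_penalty=1, cooperation_bonus=3))  # (3, 19)
```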
The self-deceptive strategy has a serious failure mode: while you’re self-deceiving, you cannot fully use your mental faculties to reassess the decision to self-deceive. (See also “against double think”.) When self-deception is the right choice, that’s not a problem. But when it’s the wrong choice, it gets you stuck in a hard-to-escape attractor. This, I think, is the main source of obstacles on the path of coming over to rationality, when coming over to rationality is the right choice.
More precisely, pretend to believe by using the conscious mind as a mask. EDIT: We intuitively divide questions into low-stakes (where knowing what’s true has few effects on our lives other than through social reactions to the belief) and high-stakes (where knowing what’s true does have direct effects on our lives). We then try to form accurate conscious beliefs about the latter and socially expedient conscious beliefs about the former. We do have more accurate intuitive beliefs about the former, but they do not enter consciousness, and their accuracy suffers since we cannot utilize consciousness to improve them. See also “belief in belief”.
IMO there are two major reasons why in these times rationality is the superior strategy, at least for the type of people drawn to LessWrong and in some parts of the world.
A third reason is that believing in whatever is socially expedient works much better when the socially expedient beliefs have been selected to be generally adaptive. The hunter-gatherer environment didn’t change much and culture had plenty of time to be selected for generally beneficial beliefs, but that’s not the case for today’s beliefs:
The trouble with our world is that it is changing. Henrich focuses on small scale societies. These societies are not static. The changes they undergo are often drastic. But the distance between the life-style of a forager today and that of her ancestors five hundred years ago pales next to the gap that yawns between the average city-slicker and her ancestors five centuries past. Consider the implications of what demographers call the “demographic transition model:”
Each stage in the model presents a different sort of society than that which came before it. Very basic social and economic questions—including subsistence strategy, family type, mechanisms for mate selection, and so forth—change substantially as societies move through one stage to the next. Customs and norms that are adaptive for individuals in stage two societies may not be adaptive for individuals living in stage four societies.
If the transition between these stages was slow this would not matter much. But it is not. Once stage two begins, each stage is only two or three generations long. Europeans, Japanese, Taiwanese, and South Koreans born today look forward to spending their teenage years in stage five societies. What traditions could their grandparents give them that might prepare them for this new world? By the time any new tradition might arise, the conditions that made it adaptive have already changed.
This may be why the rationalist impulse wrests so strong a hold on the modern mind. The traditions are gone; custom is dying. In the search for happiness, rationalism is the only tool we have left.
Upvoted; it’s also correct to ask whether taking this route is ‘worth it’.
I am skeptical of “Moreover, it seems likely that for most people, during most of history, this strategy was the right choice.” Remember that half of all humans existed after 1309. Francis Bacon, who invented the founding philosophy and infrastructure of science, was born in 1561. So already it was incredibly valuable to restructure your mind to track reality and take directed global-scale long-term action.
And plausibly it was so before then as well. I remember being surprised reading Vaniver’s account of Xunzi in 300 BC, where Vaniver said:
By the end of it, I was asking myself, “if they had this much of rationality figured out back then, why didn’t they conquer the world?” Then I looked into the history a bit more and figured out that two of Xunzi’s students were core figures in Qin Shi Huang’s unification of China to become the First Emperor.
Francis Bacon’s father was a successful politician and a knight. Bacon was born into an extremely privileged position in the world, and wasn’t typical by any margin. Moreover, ey were, quoting Wikipedia, a “devout Anglican”, so ey only went so far in eir rationality.
If I, a rationalist atheist, were in Francis Bacon’s shoes, I would 100% live my life in such a way that history books would record me as being a “devout Anglican”.
Sure. But in order to lie without the risk of being caught, you need to simulate the person who actually is a devout Anglican. And the easiest way to do that is to have your conscious self actually be a devout Anglican. Which can be a rational strategy, but which isn’t the thing we call “rationality” in this context.
Another thing is, we can speak of two levels of rationality: “individual” and “collective”. In individual rationality, our conscious beliefs are accurate but we keep them secret from others. In collective rationality, we have a community of people with accurate conscious beliefs who communicate them to each other. The social cost of collective rationality is greater, but the potential benefits are also greater, as they are compounded through collective truth-seeking and cooperation.
This isn’t much of an update to me. It’s like if you told me that a hacker broke out of the simulation, and I responded that it isn’t that surprising they did because they went to Harvard. The fact that someone did it at all is the primary and massive update that it was feasible and that this level of win was attainable for humans at that time, if they were smart and determined.
We’re discussing the question of whether, for most people in the past, rationality was a strategy inferior to having a domain where conscious beliefs are socially expedient rather than accurate. You gave Francis Bacon as a counterexample. I pointed out that, first, Bacon was atypical along the very axes that I claim make rationality the superior choice today (having more opportunities and depending less on others). This weakens Bacon’s example as evidence against my overall thesis. Second, Bacon actually did maintain socially expedient beliefs (religion, though I’m sure it’s not the only one). There is a spectrum between the average-Jane strategy and “maximal” self-honesty, and Bacon certainly did not go all the way towards maximal self-honesty.
I think the thing I want here is a better analysis of the tradeoff and when to take it (according to one’s inside view), rather than something like an outside view account that says “probably don’t”.
(And you are indeed contributing to understanding that tradeoff; your first comment gives two major reasons. But it still feels true to me to say this about many people in history, not just people today.)
Suppose we plot “All people alive” on the x-axis, and “Probability you should do rationality on your inside view” on the y-axis. Here are two opinions one could have about people during the time of Bacon.
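For concreteness, here is a minimal matplotlib sketch of two such hypothetical curves; the specific shapes are illustrative assumptions of mine, not a claim about what the curves should actually look like.

```python
import numpy as np
import matplotlib.pyplot as plt

# Purely illustrative: the curve shapes below are assumptions.
x = np.linspace(0, 1, 500)  # "all people alive", sorted by how much rationality would pay off for them

# Opinion 1: rationality is worth it, on the inside view, for only a tiny sliver of people.
opinion_1 = 0.9 * (x > 0.98)

# Opinion 2: a substantial minority of people should already have pursued it in Bacon's time.
opinion_2 = 1.0 / (1.0 + np.exp(-15.0 * (x - 0.7)))

plt.plot(x, opinion_1, label="Opinion 1: almost no one")
plt.plot(x, opinion_2, label="Opinion 2: a substantial minority")
plt.xlabel("All people alive (sorted)")
plt.ylabel("Probability you should do rationality (inside view)")
plt.legend()
plt.show()
```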
I want to express something more like the second one than the first.
The implicit assumption is that rationality is obviously always much better than [...]
Instead of doing a better thing, one might do the more integrity-signaling thing, or pursue scholarship, or maximize personal wealth. Expecting an assumption about what’s better relies on the framing of human pursuits as better-things-seeking.
By “better” I mean “better in terms of the preferences of the individual” (however, we also constantly self-deceive about what our preferences actually are).
But if a person pursues something for reasons other than considering it the better thing, then the concept of “better” is useless for explaining their behavior. It might help with changing their behavior, if they might come to be motivated by the concept of “better”, and form an understanding of what that might be. Before that happens, there is a risk of confusing the current pursuit (revealed preference) with a nascent explicitly conceptualized preference (the concept of “better”) that’s probably very different and might grow to fill the role of their pursuit if the person decides to change for the better (losing integrity/scholarly zeal/wealth/etc.).
Hmm, I think we might be talking past each other for some reason. IMO people have approximately coherent preferences (that do explain their behavior), but they don’t coincide with what we consciously consider “good”, mostly because we self-deceive about preferences for game theory reasons.
The distinction between observed behavior (preferences that do explain behavior) and endorsed preference (a construction of reason not necessarily derived from observation of behavior) is actionable. It’s not just a matter of terminology (where preference is redefined to be whatever observed behavior seems to seek) or hypocrisy (where endorsed preference is public relations babble not directly involved in determining behavior). Both senses of “preference” can be coherent. But endorsed preference can start getting increasingly involved in determining the purposes of observed behavior, and plotting how this is to happen requires keeping the distinction clear.
I think that the “endorsed” preference mostly affects behavior only because of the need to keep up the pretense. But also, I’m not sure how your claim is related to my original comment?
Humans can be spontaneous (including in the direction of gradual change). It’s possible to decide to do an unreasonable thing unrelated to revealed preference or previous activity. Thus the need to keep up the pretense is not a necessary ingredient of the relationship between behavior and endorsed preference. It’s possible to start out an engineer, then change behavior to pursuit of musical skill, all the while endorsing (but not effecting) promotion of communism as the most valuable activity. Or else the behavior might have changed to pursuit of promotion of communism. There is no clear recipe to these things, only clear ingredients that shouldn’t be mixed up.
I’m not sure how your claim is related to my original comment
The statement in the original comment framed pursuit of rationality skills as pursuit of things that are better. This seems to substitute endorsed preference (things that are better) for revealed preference (actual pursuit of rationality skills). As I understand this, it’s not necessary to consider an actual pursuit a good thing, but it’s also prudent to keep track of what counts as a good thing, as it might one day influence behavior.
IMO going from engineer to musician is not a change of preferences, only a change of the strategy you follow to satisfy those preferences. Therefore, the question is: is rationality a good strategy for satisfying the preferences you are already trying to satisfy?
IMO going from engineer to musician is not a change of preferences, only a change of the strategy you follow to satisfy those preferences.
I would say about a person for whom this is accurate that they didn’t really care about engineering, or then music. But there are different people who do care about engineering, or about music. There is a difference between the people who should be described as only changing their strategy, and those who change their purpose. I was referring to the latter, as an example analogous to changing one’s revealed preference to one’s endorsed preference, without being beholden to any overarching ambient preference satisfied by either.
IMO such “change of purpose” doesn’t really exist. Some changes happen with aging, some changes might be caused by drugs or diet, but I don’t think conscious reasoning can cause it.