The Tragedy of the Social Epistemology Commons
In Brief: Making yourself happy is not best achieved by having true beliefs, primarily because the contribution of true beliefs to material comfort is a public good that you can free-ride on, whereas the signaling and happiness benefits of convenient falsehoods pay back locally: you personally benefit from adopting them. The consequence is that many people hold beliefs about important subjects in order to feel a certain way or to be accepted by a certain group. Widespread irrationality is ultimately an incentive problem.
Note: this article has been edited to take into account Tom McCabe, Vladimir_M and Morendil’s comments.[1]
In asking why the overall level of epistemic rationality in the world is low, and what we can do to change that, it is useful to think about the incentives many people face regarding the effects their beliefs have on their quality of life, i.e. on how beliefs make people win.
People have various real and perceived needs, of which our material/practical needs and our emotional needs are two very important subsets. Material/practical needs include adequate nutrition, warmth and shelter, clothing, freedom from crime or attack by hostiles, sex and healthcare. Our emotional needs include status, friendship, family, love, a feeling of belonging and perhaps something called “self-actualization”.
Data strongly suggests that when material and practical needs are not satisfied, people live extremely miserable lives (this can be seen in the happiness/income correlation—note that very low incomes predict very low happiness). The comfortable life that we lead in developed countries seems to mostly protect us from the lowest depths of anguish, and I would postulate that a reasonable explanation is that almost all of us never starve, die of cold or get killed in violence.
The comfort that we experience (in the developed world) due to our modern technology is very much a product of the analytic-rational paradigm. That is to say, a tradition of rational, analytic thinking stretching back through Watson & Crick, Bardeen, Einstein, Darwin, Adam Smith, Newton, Bacon, etc., is a crucial (necessary, and “nearly” sufficient) reason for our comfort.
However, that comfort is given roughly equally to everyone and is certainly not given preferentially to the kind of person who most contributed causally to it happening, including scientists, engineers and great thinkers (mostly because the people who make crucial contributions are usually dead by the time the bulk of the benefits arrive). To put it another way, irrationalists free-ride on the real-world material-comfort achievements of rationalists.
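To make the free-rider structure concrete, here is a toy numerical sketch (a minimal illustration only; every number in it is invented rather than estimated): the material payoff of truth-seeking is split across the whole population, while the hedonic/signaling payoff of a convenient falsehood goes entirely to its holder.

```python
# Toy illustration of the free-rider structure (all numbers are invented).
# Truth-seeking produces a material benefit shared by the whole population;
# a convenient falsehood produces a smaller benefit that goes only to its holder.

POPULATION = 1_000_000
SHARED_BENEFIT_OF_TRUTH = 10_000.0   # total material value created by one truth-seeker
PRIVATE_COST_OF_RIGOR = 5.0          # effort the truth-seeker alone pays
PRIVATE_BENEFIT_OF_FALSEHOOD = 3.0   # signaling/hedonic payoff of the convenient belief

def personal_payoff(seeks_truth: bool) -> float:
    """Payoff to one individual, holding everyone else's behaviour fixed."""
    if seeks_truth:
        # The benefit is a public good: the individual captures only 1/N of it.
        return SHARED_BENEFIT_OF_TRUTH / POPULATION - PRIVATE_COST_OF_RIGOR
    # The convenient falsehood pays its holder directly.
    return PRIVATE_BENEFIT_OF_FALSEHOOD

print(personal_payoff(seeks_truth=True))    # 0.01 - 5 = -4.99
print(personal_payoff(seeks_truth=False))   # 3.0
```

Under these made-up numbers the group is collectively far better off if everyone seeks truth, but each individual, taking everyone else’s behaviour as given, does better by defecting to the falsehood, which is the standard tragedy-of-the-commons shape.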
This means that once you find yourself in a more economically developed country, your individual decisions about improving your own quality of life will (mostly) not involve thinking in the rational vein that put you at the quite high quality of life you already enjoy. I have been reading a good self-help book which laments that studies show roughly 50% of one’s happiness in life is genetically determined—highlighting the transhumanist case for re-engineering humanity for our own benefit—but that does not mean that becoming an advocate for transhumanist paradise engineering will make you individually happier, because such a project is a public good. It would be like trying to get to work faster by single-handedly building a subway.
The rational paradigm works well for societies, but not obviously for individuals
Instead, asking what incentives apply to people’s choice of beliefs and overall paradigm amounts to asking which beliefs best facilitate the fulfillment of those needs to which individual incentives apply. Since our material/practical needs are relatively easily met (at least amongst non-dirt-poor people in the West), we turn our attention to emotional needs such as:
love and belonging, friendship, family, intimacy, group membership, esteem, status and respect from others, sense of self-respect, confidence
The beliefs that most contribute to these things generally deviate from factual accuracy. Factually accurate beliefs are picked out as “special” or optimal by the planning model of winning, but love, esteem and belonging are typically not achieved by coming up with a plan to get them (coming up with a plan to make someone like you is often called manipulative and is widely criticized). In fact, love and belonging are typically much better fostered by shared non-analytic or false beliefs, for example a common belief in God or something like religion (e.g. New Age stuff), in a political party or a left/right/traditional/liberal alignment, and/or by personality variables, which are themselves influenced by beliefs in a way that doesn’t go via the planning model.
The bottom line is that many people’s “map” is not really like an ordinary map, in that its design criterion is not simply to reflect the territory; it is designed to make them fit into a group (religion, politics), feel good about themselves (belief in immortal soul and life after death), fit into a particular cultural niche or signal personality (e.g. belief in Chakras/Auras). Because of the way that incentives are set up, this may in many cases be individually utility maximizing, i.e. instrumentally rational. This seems to fit with the data—80% of the world are theists, including a majority of people in the USA, and as we have complained many times on this site, the overall level of rationality across many different topics (quality of political debate, uptake of cryonics, lack of attention paid to “big picture” issues such as the singularity, dreadful inefficiency of charity) is low.
Bryan Caplan has an economic theory of exactly this, which he calls rational irrationality. Thanks to Vladimir_M for pointing out that Caplan had already formalized the idea:
If the most pleasant belief for an individual differs from the belief dictated by rational expectations, agents weigh the hedonic benefits of deviating from rational expectations against the expected costs of self-delusion.
Beliefs respond to relative price changes just like any other good. On some level, adherents remain aware of what price they have to pay for their beliefs. Under normal circumstances, the belief that death in holy war carries large rewards is harmless, so people readily accept the doctrine. But in extremis, as the tide of battle turns against them, the price of retaining this improbable belief suddenly becomes enormous. Widespread apostasy is the result as long as the price stays high; believers flee the battlefield in disregard of the incentive structure they recently affirmed. But when the danger passes, the members of the routed army can, and barring a shift in preferences will, return to their original belief. They face no temptation to convert to a new religion or flirt with atheism.
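As a reading aid, here is a minimal sketch of that decision rule (my own illustrative rendering, not Caplan’s formalism; the function name and all numbers are invented): the pleasant belief is retained exactly as long as its hedonic benefit exceeds the expected material price of holding it, and is dropped when circumstances raise that price.

```python
# A toy rendering of Caplan's "rational irrationality" (illustrative only).
# The agent compares the hedonic benefit of a pleasant belief against the
# expected material cost of holding it in the current situation.

def holds_pleasant_belief(hedonic_benefit: float,
                          prob_belief_is_costly: float,
                          cost_if_costly: float) -> bool:
    """Keep the pleasant belief iff its hedonic payoff beats its expected price."""
    expected_price = prob_belief_is_costly * cost_if_costly
    return hedonic_benefit > expected_price

# In peacetime, the doctrine is effectively free, so it is held.
print(holds_pleasant_belief(hedonic_benefit=5.0,
                            prob_belief_is_costly=0.001, cost_if_costly=100.0))  # True

# When the tide of battle turns, the price of acting on it becomes enormous.
print(holds_pleasant_belief(hedonic_benefit=5.0,
                            prob_belief_is_costly=0.5, cost_if_costly=1000.0))   # False
```

The battlefield example corresponds to the second call: nothing about the agent’s preferences has changed, only the price.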
1: The article was originally written with a large emphasis on Maslow’s Hierarchy of needs, but it seems that this may be a “truthy” idea that propagates despite failures to confirm it experimentally.
Although I largely agree, there is little actual experimental support for Maslow’s theory; he mostly just made it up. See http://lesswrong.com/lw/2j/schools_proliferating_without_evidence/ . See also, e.g.:
“The uncritical acceptance of Maslow’s need hierarchy theory despite the lack of empirical evidence is discussed and the need for a review of recent empirical evidence is emphasized. A review of ten factor-analytic and three ranking studies testing Maslow’s theory showed only partial support for the concept of need hierarchy. A large number of cross-sectional studies showed no clear evidence for Maslow’s deprivation/domination proposition except with regard to self-actualization. Longitudinal studies testing Maslow’s gratification/activation proposition showed no support, and the limited support received from cross-sectional studies is questionable due to numerous measurement problems.”
from “Maslow reconsidered: A review of research on the need hierarchy theory”, by Wahba and Bridwell.
A theory that fits one’s intuition and has limited support seems better than nothing at all, though I agree that Maslow is not at the level of, say, the theory of continental drift.
As far as the point of the article goes, it may be enough to simply note (as Caplan does) that societal and individual incentives to “consume” irrationality differ, bypassing the intuition-pumping but possibly tenuous ideas about Maslow levels.
OK, now take the next step. Since most people who are choosing love, belonging, and esteem over accuracy are not aware they are giving up accuracy, then you have to wonder how you can tell when you are doing so. If you are tempted to think that you are an exception who is willing to choose accuracy instead, ask if this is just another kind of group you want to join, or another kind of esteem you hope to acquire. If so, when would this lead you to actually choose more accuracy, vs. just to tell yourself you so choose?
Illusory superiority seems to be the cognitive bias to overcome here.
It seems to me that people here are very aware of the very real ways in which they are in fact very superior to the general public, even the elite public, and also of the very real ways in which they are, in most cases, very inferior to the elite public (and frequently somewhat inferior to the general public). The problem is that they tend to morally endorse both their strengths AND their weaknesses, seeing, as Robin does, both as ‘high’. I see the high/low motivation intuition as actually being somewhat rarer in the world at large than Robin and most people here take it to be, and I think it is a much more common distinction in this culture than in most cultures. Partly people are simply choosing a measuring standard which makes them look impressive, but partly they are unimpressive BECAUSE they have chosen, without fully understanding what they are doing or what the consequences are, to shape themselves in order to be impressive by a different standard than most people choose.
The best example I can think of for people valuing their flaws as well as their strong points would be smart people who think their lack of general social skills is proof of superiority. Or (though not a problem here) people who think street smarts mean that it’s best not to have intellectual skills.
How do you mean?
People don’t delude themselves about their own traits; they delude themselves about what traits a human without specific information about his or her self would see as good, choosing to see many of their own traits as good rather than bad and failing to notice that people who lack those traits consistently see things otherwise. For instance, many people who are good at thinking productively about painful facts treat traits like an insufficient reluctance to harm those who are not so gifted as a virtue, even creating slogans to that effect such as “that which can be destroyed by the truth should be”. Others rightly see this as a justification for claiming moral superiority while harming those unlike themselves, largely due to status motives.
I’d have thought that the biggest risk from being attached to “That which can be destroyed by the truth should be” is being too ready to believe statements which can cause destruction.
If I may summarize: you’re saying that we tend to assume certain traits, like attachment to seeking the truth even at high cost, are self-evidently morally praiseworthy—an assumption not shared by the rest of society, which values other things more highly. Moreover, this assumption of ours comes ultimately from an impulse to status-seeking. Does that do it justice?
Assuming I do understand you, first, I would say that explaining a belief of this community as purely a result of status-seeking considerations is a Fully General Counterargument against any & all beliefs of any groups… you will be able to find status motives for almost all human actions, but this does not mean status is the primary cause or even is a cause at all, rather than an effect. One is reminded of how psychological egoists define altruism out of existence by finding some supposedly selfish positive motive in the altruist, like satisfaction at having helped.
I think there is a moral fact of the matter in whether the ability to face hard facts & force others to do so (within reason) is good or bad. You can prove to me that my endorsement of the ability is motivated by status, but I’ll still want to know how to actually act.
Well, first off it’s an impulse grounded in the desire to feel high status (feeling this is reinforcing), not an adaptive impulse to seek status. Second, the impulse that outsiders would find not to be a virtue is less the courage of facing the truth than the callousness of inflicting it on others who lack that courage, against their preferences and potentially against their interests as well.
When groups differ in values in a predictable manner, each choosing values that confer greater status upon itself, this is not a fully general counterargument. Nietzsche and others who display intellectual virtue in arguing for the primacy of physical and emotional virtues, for instance, can’t sensibly be accused of this. He may be seeking status with the quality of his rhetoric but not through any self-aggrandizing conclusions he reaches.
I think that there is a fact of the matter too, and that fact is that the ability to face hard facts without suffering is good even when taken to unreasonable extremes, and is in fact much better than non-Buddhists appreciate. The inclination to cause others to hold accurate beliefs is also overwhelmingly good, though not to such a shockingly extreme degree. The quote however is “that which can...” not “that which must...”. The impulse to use truth to destroy whatever can be destroyed with truth is a malevolent one, and this is obvious to anyone who doesn’t anticipate higher status in a world where the ability to face truth has its value increased.
I think I see your point… though the “What can be destroyed by the truth, should be” quote, I always took to be limited to the world of ideas; i.e., any idea that can be… should be.
But as has been remarked before, the more you load an aphorism down with caveats, the worse an aphorism it becomes. Or as Shakespeare famously said: “Brevity—in speech and conduct, but not in any other senses of the word—is the soul, understood metaphorically & non-dualistically of course, of wit.”
So while I see your concern in principle, I’m not too worried. I really doubt that LWers actually take this quote to mean you have a moral duty to, for example, visit old widowers and widows and berate them for the stupidity of talking to their dead lovers. Or find out and reveal the USA’s nuclear launch codes. Or forcibly “out” every young LGBT person they know.
Actually, those do seem like the sort of things that LWers would be much more inclined to do than would the general population. The odds would still be low in such spectacularly obvious cases, but much less low than they should be.
This seems contradictory to me. The first part says that people like us hold it to be virtuous not to introduce the truth to those who can’t deal with it, and the second suggests that we do so. What am I missing?
No, the first part says the opposite of what you think it says.
It is certainly true that transhumanism, existential risks and the singularity lay one open to wishful thinking—because many people involved endorse and focus on certain ideas and worldviews because they want to be a certain kind of person (e.g. The kind of person who “saves the world” or “makes a difference”), rather than wanting some more concrete near-mode thing like money or sex or even a certain specific, near outcome.
However, one should note that reasoning as if motivated cognition is infinitely difficult to defeat is not necessarily the way to win. Perhaps we could find some kind of middle ground, giving up some lofty goals as hopelessly hard to debias about but not others.
Well hopefully we do end up at a middle ground, but it doesn’t come for free just by wishing for it. One solution is to tie some incentives to reality, so that you gain when you are right and lose when you are wrong.
For emphasis, again: The Proper Use of Humility.
OK. What incentive mechanisms are likely to work? We want an incentive that’s sufficiently robust to be useful feedback with some actual emotional punch, but lightweight enough to use habitually. If you commit yourself to strong disincentives for being wrong, then I suspect you’ll be less likely to use the mechanism, or you’ll fail to calibrate on uncertainty. If the mechanism involves significant transaction costs, then you’ll never use it but for the most serious concerns, or those concerns in which you benefit by making a visible commitment.
One way is to keep a journal of thoughts and opinions and current positions, however mundane. This can be a fairly lightweight commitment, no more than a few minutes a day. If you keep it private, you can freely record things that you think might be true, without the overhead of public commitment to a proposition. And if you check back through it months or years later, you can gather feedback about when you thought clearly, and when your thinking was motivated. Especially in personal matters, a difference of a few months can yield sharp contrast to your perspective.
Of course, another possibility is to have open discussions with other aspiring rationalists. State and defend positions that you hold with higher uncertainty than the positions you would usually commit to publicly, with the understanding that those positions and the ensuing discussion need not leave your circle. Seek out hints of biases in each other’s positions. If your circle values clarity and precision, then clarity and precision will be socially rewarded and imprecision will be socially punished. Admittedly, this idea has been covered before. I think, though, that the “rationality dojo” answers some of these issues.
Having open discussions with colleagues doesn’t directly create incentives tied to reality, unless those folks have such incentives. A journal might create such incentives if one took the trouble to score one’s previous statements against later revealed reality.
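One concrete way such scoring might work (a minimal sketch; the journal entries, probabilities and outcomes below are made up, and the Brier score is just one standard choice of scoring rule): record each opinion with a probability, then later compute the mean squared error against what actually happened, where 0 is perfect and 0.25 is what always answering 50% would get you.

```python
# Minimal sketch of scoring journaled predictions against later reality.
# Each entry: (claim, probability assigned at the time, whether it came true).
journal = [
    ("Project X ships by March", 0.8, False),
    ("I will stick to the exercise plan this month", 0.9, True),
    ("Candidate Y wins the election", 0.6, True),
]

def brier_score(entries) -> float:
    """Mean squared error between stated probabilities and outcomes (0 = perfect)."""
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for _, p, outcome in entries) / len(entries)

print(brier_score(journal))  # 0.27 for the made-up entries above
```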
Agreed—a write-only journal does not help at all. The payoff comes from the sometimes-startling realization of the difference between one’s current and old views.
And yes, it almost seems easier for the “rationality dojo” to go wrong than right.
I’m new at this; let me expand what I think your point is. It’s easier to argue without referring to ground truth than it is to stay focused on reality. (See: almost any online argument.) The default incentive structure of an argument rewards winning the argument, not finding the truth. Thus, practicing argument will make the practitioner good at arguing, rather than finding truth. Since an incentive structure strongly shapes behavior even when we recognize the structure, the group will not learn rationality unless the group can make incentives to do so. So, the rationality dojo doesn’t solve the problem, it merely defers it to another level of organization.
On the other hand, the rationality dojo affords a wider set of incentives—we can, if careful, use group dynamics to shape the incentive structure. If successful there, we can use the desire to belong to a group to enhance those incentives. The dojo is a possibly useful ingredient in this incentive structure.
I read an account of someone reading their old journals, and the educational surprise was how little their views had changed. They were having what seemed like new revelations again and again, but they were the same revelations. I can’t remember what use the writer made of the overview.
If you’re going to keep harping on prediction markets, at least harp on them openly instead of insinuating.
One way to tie incentives to reality is simply to expect to survive until the relatively distant future and therefore have one’s plans pertaining to the future yield results, good or bad. Being young and healthy helps this.
That works for topics on which longer lives create stronger incentives. Not clear how many topics that is.
Well, futurism and the singularity are clearly among these topics, as we are talking about events that will happen this century.
Wouldn’t rationality help people get things on the two bottom tiers? If so, shouldn’t your theory predict that people in more dire circumstances are more rational, when I believe the opposite tends to be the case?
A much, much simpler explanation is that rationality is hard and not-rationality is easy. Just as all good families are quite similar and bad families are often uniquely different, there’s basically one correct epistemology and a whole lot of incorrect ones. Because truly terrible, survival-inhibiting epistemologies have been eliminated through natural (and social) selection, we’re left with a bunch that, at the very least, do not inhibit reproduction. Extremely high-quality epistemology does not appear to be particularly conducive to Darwinian reproductive success, so it never exactly got selected for. Indeed, extremely high quality epistemology may be contingent on a certain level of scientific progress, and thus may have only been practicable in the past few centuries. Imagine trying to simply exist in the world 20,000 years ago, where the only answer you could give to nearly any question about nature or how the world worked was, “I haven’t a clue.”
Much like religion, one will generally end up with whatever epistemology one is raised with, with whatever particular modifications your mind makes intuitively. Since very few people have minds that gravitate towards a rational epistemology, and since society doesn’t particularly value epistemic hygiene, it’s little surprise that rationalists do not abound.
Furthermore, your hypothesis that rationality is not conducive to tiers three and four does not appear to be well-founded. I suspect a lot of people would significantly benefit from greater marginal rationality. An old acquaintance of mine has a Facebook status that regularly oscillates between “OMG so good to be in love!!!!!<3” and “Oh no my heart was broken again how could u do this to me?” I would anticipate that just a little bit more rationality would greatly benefit this person.
That’s a pretty risky analogy.
It’s the famous first line of Tolstoy’s Anna Karenina:
Originally, I used this mostly for literary effect, but now that I think about it, it seems quite appropriate.
There are numerous largely superficial factors to families—size, composition, socioeconomic status, and so forth. It seems to me there are criteria that really, systematically matter, that have to do with how people respect one another, the degree to which people accept conflict, the extent to which no family member is, for want of a better term, belligerently insane. Similarly, one can have a fairly rational epistemology and end up with many superficially different beliefs—both in the sense of values and likelihood estimates. One does need some systematic components, like a general avoidance of reliance on evidence-less faith, and a lack of major beliefs based on wishful thinking (among many other things). Similarly, one belligerently insane belief can destroy the rationality of a whole system, just as a sufficiently difficult family member can. On the other hand, one can have crazy family members and still have a generally functional family, just as one may have some irrational beliefs without it necessarily destroying their entire epistemology.
I certainly don’t mean to imply good families are inherently the same in terms of, say, gender composition, extended vs. nuclear, etc., and I did not intend to suggest they were. As it is, though, I think the analogy holds up well.
It’s not clear to me what assumption you’re making here. Should this person shun relationships? Look for longer-term ones? Something else? The OMG part should remind us that new love is a powerful source of happiness and it’s not available in a long-term relationship. Maybe the OMG and “Oh no” parts add up to more than zero.
In my experience, you don’t have to sacrifice the “OMG” to mitigate the “oh no”. Learning to direct one’s thoughts is a really powerful force in making oneself happy.
I’m always amazed by the number of people who don’t seem to realize that it is possible to guide one’s thoughts to control or modify emotions. People seem to think that happiness or unhappiness just happens and can’t be controlled. These people are usually much less happy than others.
These seem to me to be very sound criticisms and to pretty much defeat the main argument. It’s pretty obvious that rationality helps to climb from 0 to 2 and that it helps a lot. I’m going to be waiting to see whether this is a good enough rationalist community to converge in the recognition of critical flaws in arguments by voting the comment above way up.
http://kristof.blogs.nytimes.com/2010/05/22/how-about-a-beer/ is topical
If rationality would help people in the very poor parts of the world, but they shun it, then we conclude that those people are not responding to incentives.
However, the evidence you cite doesn’t support the assertion that the world’s poor don’t respond to incentives. It simply says that poor African men like sex and alcohol more than their own kids.
Do poor people fail to respond to incentives? Is there an obvious option for these people to learn how to be more rational/analytic that they aren’t taking? I don’t know.
What about the rich people in developed countries that I was talking about? My suspicion is that they respond to incentives, especially in terms of the rational/irrational tradeoff, i.e. if there were utility premiums for being more rational, people would expend effort learning the techniques.
This is an assumption, not a proven fact. Since it’s basically the contested issue, your reasoning is circular.
I did not say that rationality rather than irrationality helps individuals climb from level 0 to level 2.
...then the problem of nations remains. You limit your discussion to industrialized nations; this point is not relevant unless other nations are different. Moreover, some other nations are not clearing the first two stages, and individually or collectively, they are not particularly inclined to irrationality. Therefore, the proposition that we aren’t rational because we don’t really need to be is absurd, because the places that do need to be still aren’t, so that can’t be the cause.
You might be interested in Bryan Caplan’s concept of “rational irrationality”—it seems to be more or less what you’re aiming for:
http://econfaculty.gmu.edu/bcaplan/ratirnew.doc
Abstract: Beliefs about politics and religion often have three puzzling properties: systematic bias, high certainty, and little informational basis. The theory of rational ignorance (Downs 1957) explains only the low level of information. The current paper presents a general model of “rational irrationality,” which explains all three stylized facts. According to the theory of rational irrationality, being irrational—in the sense of deviating from rational expectations—is a good like any other; the lower the private cost, the more agents buy. A peculiar feature of beliefs about politics, religion, etc. is that the private repercussions of error are virtually nonexistent, setting the private cost of irrationality at zero; it is therefore in these areas that irrational views are most apparent. The consumption of irrationality can be optimal, but it will usually not be when the private and the social cost of irrationality differ – for example, in elections.
See also Caplan’s book for a popularized version of his research and its application to politics.
Ah, yes, that does appear to be roughly the same concept.
Correct.
Excellent post. I’m also reminded of this post that I just recently read: Change Incentives, Not Minds.
We can probably convince some intelligent people to become rationalists simply by talking to them, but many of them are of the type who’d have preferred to become rationalists anyway, even if they hadn’t known the word. If we want to raise the sanity waterline, we need to work on better incentives for rational thought.
Anybody have ideas on what might be realistically achievable rationality incentives?
One simple idea is to make rationality more popular, but even if that worked, it would probably just make “rationality” more popular, where “rationality” is defined as screaming that your opponents are irrational.
Another possibility is to increase elitism in government and business wrt mathematical & scientific knowledge—this wouldn’t help the public at large be rational but it would at least avoid innumerate policy-makers to a certain extent.
I think perhaps the most important thing we could do, though, would be to aspire to become more well-rounded humans ourselves. The trouble with rationalists is that we aren’t typically the kind of people one appreciates unless one is already a rationalist. But a genuine phronimos is a very magnetic personality, a real role model, because they have all the other graces as well. A personality like, say, Stephen Lewis, promoting rationalism—now that would be a force. (Don’t get me wrong, what he has been doing is more important.)
Listening to John Lee Clary, the former Klansman, talk about how he came to lose his certainty in that ideology reminded me of this issue.
The Reverend Wade Watts, of whom he speaks, basically eats away at his defenses by setting a moral example for him. Take heed! People will rarely judge us purely on our ideas, so in a war of rhetoric, skillfully applied pathos is also needed.
It’s not just about being more moral, at least as morality is generally understood.
It’s also about putting morality and mental superiority into action by being funny without being mean.
And I don’t trust that the last bit happened, even though the rest of it might be true.
“Whatever you do to that chicken, I’m going to do to you” is part of an old joke, and really rather an unlikely threat.
The version I’d heard (NPR’s “A Christmas Griot” and my favorite part of the show) involved a whole chicken and kissing it on the ass.
I agree with this, but the frame “well-rounded human being” seems to incorporate an assumption that something is “wrong” with rational people that is “right” with others, and I want to question that.
I would say, not wrong but insufficient. I like the distinction that virtue ethics makes: sophia is wisdom in the sense of thinking correctly about facts, whereas phronesis is practical wisdom; i.e., “doing the right things, to the right people, at the right time.”
We can probably convince some intelligent people to want to become rationalists simply by talking to them.
While it’s undeniable that the social prestige of a belief is not necessarily dependent on its truth, and that social pressures often prevent people from arriving at true beliefs, it’s also important to remember that:
human epistemic rationality is suboptimal across the board, even in domains where it’s not clear how having accurate beliefs would be socially costly;
even when true beliefs appear to have a social penalty, this need not be the case for a person with sufficient social skill;
the social costs of particular beliefs vary greatly over time (atheism now vs. in the past; or in the other direction, creationism); and
people value truth enough to be disturbed when this conflict is pointed out to them.
Hence I would warn against drawing the implication that the aspiring rationalist’s project is in any sense futile.
Social pressures also often encourage people to arrive at true beliefs. I’m not at all convinced that the net effect of social pressures runs counter to truth. There are some specific subject matters (“the afterlife” e.g.) that pose a problem, but then there are other specific areas (mathematics, for poor thinkers) where social pressures push people into correct beliefs they would otherwise lack.
I didn’t say that it is futile, merely that local incentives for hedonic irrationality are a serious challenge.
“To put it another way, irrationalists free-ride on the real-world material-comfort achievements of rationalists. ”
This is way too rationality-centric. People who don’t work a lot free-ride on the real-world material-comfort achievements of workaholics. People who are not creative or entrepreneurial free-ride on those who invent new products and services. There is a hell of a lot more to productivity and world accomplishments than rationality.
For example, a rational person might work much less than an irrational person who let their primitive desire to rise to the top of the tribe propel them into continuing to start new companies and increase their wealth after making their first billion, even though their second billion will make no actual difference to their standard of living. And thus that irrationally-driven billionaire might produce far more value for the world than the rationalist who quits after their first ten million to focus on consumption.
Roko, I’m surprised you didn’t see the flaw in this sentence. Trying to make a particular person like you (from scratch) is indeed a bad plan, but that’s not the thing one should necessarily attempt to do in the first place, and in fact it’s one of the traps people characteristically fall into by not being rational.
One who is trying to optimize “love, esteem and belonging” can do a lot with a bit of rational planning: actively seeking out communities of people with common interests, developing an external hobby that serves to raise one’s profile among current acquaintances, training oneself to confront social fears in a different way than before, etc. The failure of most people to do these things (and their tendency to treat their current social circle as their irreplaceable tribe, for instance) do cause real social costs.
I think that most of the meme of “trying to rationally think about social life = social failure” is actually due to the preponderance of autism spectrum people among those trying to apply rational thought to social life; for a given person, treating it more genuinely rationally seems to be a net gain.
My experience says yes and no. To push the envelope in terms of “rationality should win”, I think that we need to push this, but I don’t think it will be easy...
Voted up.
I think many people develop a rough map of other people’s beliefs, to the extent that they avoid saying things that would compromise fitting in the group they’re in. Speaking of which:
Trying to get to level 4, are we? (Clearly I’m not ;)) Conversely, you could argue that “irrationalists” are better at getting things done due to group leverage, and rationalists free-ride on those achievements.
Possibly you’re unfairly conflating scientific knowledge and technical expertise with rationality: plenty of discoveries have been made without the aid of Bayesian Rationality, and a fair number (rising as you go back through history) without the Scientific Method either. Furthermore, this sort of knowledge doesn’t really translate into private benefits, only into inventions that largely benefit everyone. Quite possibly it is the dutiful, the lucky and the knowledgeable who are bearing the load.
Good post. I feel vaguely uneasy about unconditional praise for scientists, as I think there are far too many now, and many of them simply belong to a different species of politician, as illustrated in this post. Good scientists are heroes, bad scientists are just as annoying as everyone else (and maybe more so), and it’s very difficult for outsiders to tell the difference.
I guess the big issue is: rationality is hard and requires sacrifice, so there’s no reason to expect people to be rational unless there’s some kind of pressure on them to do so. In most areas of human activity, that pressure just isn’t strong enough.
Does “winning” mean achieving your “true” values, or does it just mean succeeding at doing something you’re trying to do? In the former case, equating “quality of life” to “winning” sneaks in the assumption that people are egoists (and don’t care about quantity of life).
Funnily enough, I was just thinking about why I personally don’t subscribe to full-on altruism towards all of humanity, and the answer that came to mind was “it just isn’t rewarded strongly enough to be worth it”
Nicely circular.
Indeed… one’s subconscious is good at circular logic.
That the post is explicitly titled “tragedy of the commons” suggests the position that on net people are worse off taking the selfish path, and that current incentives create a problem.
Most people are egoists in the relevant sense—just look at the consumption of luxury goods in the west whilst poor people starve in Africa. Some people may aspire to altruism, but social epistemology is dominated by the concerns of the majority.
How much have scientists and engineers contributed to our standard of living? Probably a good amount, but why do we have scientists and engineers? My impression is that our current high standard of living is due to the fact that most people are rational, in certain domain specific ways. Namely, they adopt better tools when given the opportunity. They specialize and trade. It seems quite possible that ambitious entrepreneurs, rather than thoughtful rationalists deserve the most credit for past progress.
Even entrepreneurs capture very little of the value they create: I’ve heard estimates of 5%. Not to mention the massive degree of risk they shoulder (and humans have basically risk averse utility functions).
Humans don’t have utility functions; that’s why we’re so bad at this.
Money isn’t value, but what’s transferred in game-theoretic interactions isn’t value, it’s control. Money is control. It’s only possible to compare control of different agents, much less so the value they obtain, and it would be particularly hard to compare the value that 10^9 humans obtain with the value one person obtains. Saying that a person obtains 5% of control that the beneficiaries would be willing to give up in return does seem to make sense.
How well that control could be cashed out to improve the world according to the entrepreneur’s preference is a separate, unrelated question. Maybe the benefit people receive directly through the innovation would constitute the majority of the improvement in the value of the world according to the entrepreneur’s preference, as compared to other effects of taking the beneficiary action, including the use of the money obtained.
Seeing as control is a “timeless” notion, I don’t see how it can be “transferred”. Maybe a formalization would help.
While the income/happiness correlation does exist, it is an internal comparison rather than an external one. See for example this breakdown by country. The data suggest that people construct their notion of happiness in part by comparison to the material wealth of those around them. While this might not apply to very basic needs (i.e. people starving), it seems to start having a substantial impact before all the physiological needs are met. I’m incidentally not convinced that the top tier of Maslow’s pyramid is anything other than a culturally mediated set of values rather than a set of intrinsic goods. There’s also a fair bit of empirical criticism of the levels of the hierarchy as a whole. This is one good criticism that favors something approximating rationalism over Maslow’s set. I’ve been told that Wahba and Bridwell’s papers in the 1970s also provide a lot of empirical evidence against Maslow, although I haven’t read them myself.
The data I cited shows the correlation across countries, not within countries.
Interesting. I had not realized there was that strong a correlation across countries.
Expert opinions on:
Does money make you happy? (specifically: absolute spending power)
Does relative wealth make us happier than absolute wealth?
Interesting post, good explanation of what’s keeping rationality from being more practically useful, duly upvoted, but change the spelling in the title to “tragedy”.
Thanks. My brain just doesn’t see spelling …
Set your browser to do spell checking. It’ll highlight misspellings in input fields. Works in Firefox at least.
Not for the title field. How do you think I got the rest of the post to be correct?
Ah right, it won’t work on the small text fields by default. Looks like it can be switched to do that too by setting layout.spellcheckDefault to 2 in about:config.
Where is the about:config file?
Just type about:config into the address bar. Be careful with the settings there.
You’re getting at something that feels half-right, but only half.
Generalizing from too small a sample? I have argued previously that the analytic-rational paradigm mostly has an impact on societies, but societies are not homogeneous collections of individuals: large numbers of individuals in “comfortable” societies are in fact quite miserable, and the few who are exposed enough to the analytic-rational paradigm to become conversant with it are typically among a small fraction of the most comfortable.
Is Maslow’s hierarchy itself an accurate belief, btw? How did you judge its accuracy?
This is a good point, but for now I will note that I have some personal empirical evidence in favor of it, it seems to make sense from an evolutionary point of view (e.g. food before status, because if you run out of calories you die right there) and it is widely accepted, so I will consider it true until evidence is brought against it.
Roko:
That’s not necessarily true—there are instances where people starve themselves (though rarely to death) as part of a status-seeking effort.
In the modern developed world, food is dirt-cheap as long as you don’t engage in luxurious extravagance, so that even if you could stop eating altogether, you wouldn’t save a significant amount of money. However, in the past, when even the cheapest subsistence diet was a very large expense relative to income, many people would cut down on eating well beyond the point of discomfort to be able to afford various status-seeking goods to show off. Some other examples of status-seeking behavior that comes at the cost of starvation are religious fasting and dieting to improve one’s looks. Hunger strikes are a peculiar extreme example.
Overall, Maslow’s model is a useful first approximation, but nowhere near fully accurate.
I think “useful first approximation” is all I need here.
Re: it seems to make sense from an evolutionary point of view
This makes more sense—from an evolutionary point of view:
“Rebuilding Maslow’s pyramid on an evolutionary foundation”
http://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201005/rebuilding-maslow-s-pyramid-evolutionary-foundation
The key point is that they are not miserable due to lack of food, shelter, sanitation or warmth. They don’t lack Maslow 1 and 2.
I come into contact (of some sort) on nearly a daily basis with folks who lack one or several of the above, and I do mean miserable at one of these levels. Not just the homeless, but a growing number of people struggling to make ends meet.
What I’m getting at is that your generalization that “once you find yourself in a more economically developed country, your individual decisions in improving the quality of your own life will (mostly) aim to climb from level two of Maslow upwards” requires some strong qualifications.
As a case in point consider a friend of mine who hasn’t been paid by her employer for a few months, who has been enjoined by the court to keep showing up for work because failing to do so would be cause for termination of any benefits, and who therefore has to keep spending large amounts on oil to drive to work because in her employment-poor rural region most jobs are found at least one hour’s drive away. This is someone I met over ten years ago, who was then comfortably middle class.
This person’s situation isn’t unusual or atypical, but it seems to me that it fits very poorly into your scheme whereby you describe her as a “free rider”.
Conversely, quite a few people who benefit from the most visible trappings of the analytic-rational paradigm, by which I mean schools and universities, go on to occupy positions in society which contribute little to the pool of knowledge, to the well-being of the majority, or in general to anything but their own narrow interest. Surely these folks, who presumably have the capacity to be rational, though they don’t exploit it on an everyday basis, also deserve the “free rider” label?
Yet another category your black-and-white scheme seems to ignore is people who are typically asked to sacrifice their lives in defense of the institutions without which the efforts of “scientists, engineers and great thinkers” would have no discernible impact. Among these, I would bet that “irrationalists” are disproportionately represented.
You are, I think, considerably underestimating the difficulty of describing “our modern technology” in terms that would allow identification of “the kind of person who most contributed causally to it happening”. The causal pathways are insanely long, intricately interwoven with the rest of the fabric of society, such that it is far from clear that the notion of “free riders” can apply at all.
Shorter: your post conveys an oversimplified conception of both science and society. (This is a common failing around these parts, possibly needing more posts like the one on the mangle, but this is a topic where the inferential distances appear to be substantial.)
Do you honestly think that this friend lives a worse life than a medieval peasant? Yes, people end up in financial trouble, but even in the USA they don’t starve.
Your point about soldiers is taken, but note that in the West very few soldiers die. The casualty count for Iraq is a few hundred for the UK, out of an army of 100,000. Those are much better odds than a paleolithic or bronze age soldier would have faced.
The question at issue didn’t seem to be better vs. worse, but rather “motivated by what”. My friend is spending money on oil which she would prefer to spend on groceries or shelter, neither of which she has quite enough of, and spending it to keep a job which is currently paying her a salary of zero.
It’s not clear to me that this decision is a matter of being “motivated by” this or that category of primary social goods (“things it is rational to want whatever else one wants”), even once you drop (as you have) the idea of a linear ranking of these goods. She doesn’t feel like she is making a decision.
The title of your post suggests that its major point is (still, after your revisions) the italicized phrase “irrationalists free-ride on the [...] achievements of rationalists”. It’s not clear that rationality is a “commons” in the original sense of a scarce resource that could be depleted by over-exploitation. It’s not clear what you suggest someone in my friend’s position could be doing to contribute closer to her fair share to the maintenance of “the analytic-rational paradigm”.
Despite a lower-than-minimum-wage income, she is actually paying taxes this year, the apparent consequence of the latest tax reform. Would you argue taxes don’t count as contributing back to the commons? These taxes pay in part for universities and research institutions. Why does your post fail to mention taxes altogether?
Your final points I can agree with, in line with Gregory Bateson’s observations that sometimes going crazy can be a sane response to an insane situation. It’s a tragedy all right, but I’m not sure it can be called a tragedy of the commons.
My description of the situation would be more along the following lines. Particular epistemologies (practiced by few individuals) have had huge effect on human societies, transforming them in ways that resulted in much greater numbers of people, interacting in very different ways than in the past, and in particular having greater environmental effects.
One consequence of these changes has been to amplify the impacts of both kinds of epistemologies: whereas it didn’t particularly matter what the medieval peasant believed, in terms of impact on the medieval peasant’s comfort, how the 21st-century citizen in a developed country thinks does matter to his or her medium-term comfort (e.g. through party politics), and epistemological failures of 21st-century elites may have a drastic impact on the citizen’s medium-term comfort.
We all need to learn how to think better, and soon, or we are all shouldering more risk than we’d be comfortable with if we did know better; but the consequences of our predecessors’ own improved thinking are playing a large part in distracting us from that objective. Our past successes at thinking better have made the world a more intellectually challenging place, which is hindering further success at thinking better.
Well, they don’t starve, usually, buuuuut...
Nice, and just the right length!
There’s some truth to this, but I think this is half the story.
Clearly people do trade rationality for signaling (as evidenced by people changing their predictions when you ask them to bet), but people are also bad at it when they’re trying (there are plenty of examples of people failing even when they don’t get any signaling benefit).
You have to decide how much effort to put into living up to your epistemic potential, and how much into increasing that potential.
It seems that we work harder on the latter, as well as the former in some specific cases where we know that epistemic rationality is especially important.
Equating “quality of life” to “winning” seems to me to sneak in the assumption that people are egoists with short time horizons (in their “true” values and not just their stated values).
Because it is anything but irrational belief that people on lesswrong take to signal their belonging to this community… Yes, everyone else is irrational, except the particular group the speaker belongs to.
I can see thousands of identical claims being made in thousands of different communities, each criticizing other groups’ signaling irrationality.