I have a generally positive opinion of Peterson but I wouldn’t be adding anything to the conversation by talking about why, you already covered his good points. Instead I’ll talk about why I’m leery of dubbing him a Rationalist hero.
Peterson’s entire first podcast with Sam Harris was an argument over Peterson’s (ab)use of the word “truth”. I’m not sure if anyone walked away from that experience entirely sure what Peterson means when he says “truth”.
One can assume that he means something like “metaphorical truth”, that some stories contain a kind of truth that is more like “usefulness as a map” than “accurate reflection of objective reality”. Sam Harris’ rejoinder was along the lines that using stories as ways of discovering meaning is all well and good, but believing those stories are true leads to horrifying failure modes.
For example, if you believe some particular bit of metaphysical narrative is true, you feel compelled to act on contingent details of the story that are unrelated to the intended moral. Insert your own favorite minor bit of religious dogma that led to hundreds of years of death and suffering.
At a civilizational level, the norm of “believing that false stories are actually true is good for society” selects for simple, stupid stories and/or interpretations of stories that effectively control and dominate the population.
I’ve seen hints that he’s written his bottom line with respect to Christianity. He’s trying to prove how it’s true, by constructing an elaborate framework that redefines “truth”, not figure out if it’s true. If it weren’t for this, and the general sense that he’s treading on really crucial concepts in order to make himself seem more cosmically right, then I would be more fully on board.
Not to bias anyone, but as anecdata, a couple of my Christian friends have told me that they also find it difficult to understand Peterson’s framing of his (relationship with) Christianity. So the fact that, from the perspective on the other end, Peterson seems to be attempting some sketchy epistemology could be telling. Maybe he’s committing some kind of [golden mean fallacy](https://en.wikipedia.org/wiki/Argument_to_moderation) and advocating for widespread [cultural Christianity](https://en.wikipedia.org/wiki/Cultural_Christian).
His concept of truth seems to be the main beef rationalists have with Peterson, and something I’ve also struggled with for a while.
I think this is partly solved with a healthy application of Rationalist Taboo—Peterson is a pragmatist, and AFAICT the word truth de-references as “that which it is useful to believe” for him. In practice although he adds a bunch of “metaphorical truths” under this umbrella, I have not seen him espouse any literal falsehoods as “useful to believe,” so his definition is just a strict generalization of our usual notion of truth.
Of course I’m not entirely happy with his use of the word, but if you assume as I do “it is useful to believe what is literally true” (i.e. usefulness = accuracy for maps) then his definition agrees on the literal level with your usual notion of truth.
The question then is what he means by metaphorical truth, and in what sense this is (as he claims) a higher truth than literal truth. The answer is something like “metaphorical truths are meta-stories, extracted from real human behavior, that are more essential to the human experience than any given actual story they are extracted from.” E.g. the story of Cain and Abel is more true to the human experience than the story of how you woke up, brushed your teeth, and went to work this morning. This is where the taboo needs to come in: what he means by “more true” is “it is more useful to learn from Cain and Abel as a story about the human experience than it is to learn from your morning routine.”
I claim that this is a useful way to think about truth. For any given mythological story, I know it didn’t actually happen, but I also know that whether or not it actually happened is irrelevant to my life. So the regular definition of truth “did it actually happen” is not the right generalization to this setting. Why should I believe useless things? What I really want to know is (a) what does this story imply about reality and how I should act in the world and (b) if I do act that way, does my life get better? The claim is that if believing a story predictably makes your life better, then you should overload your definition of truth and treat the story as “true,” and there is no better definition of the word.
Regarding the Christian stories, his lecture series is called “The Psychological Significance of the Biblical Stories” and I don’t think he endorses anything other than a metaphorical interpretation thereof. He has, for example, made the same type of claims about the truth of Christianity as about Daoism, old Mesopotamian myths, Rowling’s Harry Potter, and various Disney movies. People probably get confused when he says things like “Christianity is more true than literal truth” without realizing he says the same things about Pinocchio and The Little Mermaid.

tl;dr: Taboo the word “truth”.
I think it’s pretty risky to play Rationalist Taboo with what other people are saying. It’s supposed to be a technique for clarifying an argument by removing a word from the discussion, preventing it from being solely an argument about definitions. I would like it if Peterson would taboo the word “truth”, yeah.
I also don’t think that dereferencing the pointer actually helps. I object to how he uses “truth”, and I also object to the idea that Harry Potter is (dereferenced pointer)->[more psychologically useful to believe and to use as a map than discoveries about reality arrived at via empiricism]. It’s uh … it’s just not. Very much not. Dangerous to believe that it is, even. Equally if not more dangerous to believe that Christianity is [more psychologically useful to believe and to use as a map than discoveries about reality arrived at via empiricism]. I might sign on to something like, certain stories from Christianity are [a productive narrative lens to try on in an effort to understand general principles of psychology, maybe, sometimes].
The claim is that if believing a story predictably makes your life better, then you should overload your definition of truth and treat the story as “true,” and there is no better definition of the word.
This is indeed a hazardous application of Dark Arts to be applied judiciously and hopefully very very rarely. As a rule of thumb, if you feel like calling Harry Potter “true”, you’ve probably gone too far, IMO.
I do wonder whether you would change your mind after checking the links by Gaius Leviathan IX in a comment below. A lot of those did strike me as “literal falsehoods”, and seem to go against the things you outlined here.
I have previously noticed (having watched a good hundred hours of Peterson’s lectures) all of these things and these seem to me to be either straight-up misinterpretation on the part of the listener (taboo your words!) or the tiny number of inevitable false positives that comes out of Peterson operating his own nonstandard cognitive strategy, which is basically UNSONG Kabbalah.
This overall argument reminds me of the kind of student who protests that “i isn’t actually a number” or “a step function doesn’t actually have a derivative.”
For any given mythological story, I know it didn’t actually happen, but I also know that whether or not it actually happened is irrelevant to my life. So the regular definition of truth “did it actually happen” is not the right generalization to this setting. Why should I believe useless things? What I really want to know is (a) what does this story imply about reality and how I should act in the world and (b) if I do act that way, does my life get better?
Yeah, the benefits of literal truth are more altruistic and long-term.
This is what worries me. I frankly haven’t looked into Peterson too closely, but what I’ve heard hasn’t impressed me. I found some of his quotes in OP’s piece to be quite insightful, but I don’t understand why he’s spoken of so glowingly when he apparently espouses theist beliefs regularly. Warning signs of halo effect?
Speaking for myself, I do not agree with some of Peterson’s opinions, but I like him as a person. Probably the best way to explain it is that he is the kind of “politically controversial” person whom I wouldn’t be scared to hypothetically find out is actually living next door to me.
I find the way he uses the word “truth” really annoying. Yet if I told him that, I wouldn’t expect him to… yell abuse at me, hit me with a bike lock, or try to get me fired from my job… just to give a few examples of recent instruments of political discourse. He would probably smile in a good mood, and then we could change the topic.
Peterson is definitely not a rationalist, but there is something very… psychologically healthy… about him. It’s like when you are in a room full of farts, already getting more or less used to it, and then suddenly someone opens the window and lets the fresh air in. He can have strong opinions without being nasty as a person. What a welcome change, just when it seemed to me that the political discourse online is dominated by, uhm, a combination of assholes and insane people (and I am not channeling Korzybski now; I am using the word in its old-fashioned sense).
I’d like to somehow combine the rationality of LessWrong with the personality of Peterson. To become a rational lobster, kind of. Smart, strong, and a nice neighbor.
EDIT:
I guess I wanted to say that I am not concerned with Peterson’s lack of x-rationality (though neither do I deny it), because I do not intend to use him as an example of x-rationality. In many respects he talks nonsense (although probably not more than an average person). But he has other strengths I want to copy.
I see Peterson as a valid and valuable member of the “niceness and civilization” tribe, if there is such a thing. As opposed to e.g. people who happen to share my disrespect of religion and mysticism, but personality-wise are just despicable little Nazis, and I definitely wouldn’t want them as neighbors.
I think:

1. compartmentalization by theists makes it so they’re apparently as rational or competent in thought on a lot of topics as anyone else, despite disagreements regarding religion and theism;
2. bias in all its forms is so ubiquitous outside the domain of beliefs related to skepticism or religion that non-theists often don’t make for more rational conversation partners than theists;
3. (this might be more unique to me, but) theists often have a better map of abstract parts of the territory than non-theists.
An example of (3) is seeking conflict resolution through peaceful and truth-seeking deliberation rather than through tribalism and force. I’ve observed that the Christians I know are likelier to stay politically moderate as politics has become more polarized over the last couple of years. Something about loving your neighbour and the universality of human souls being redeemable or whatever results in Christians opting for mistake theory over conflict theory more than the non-religious folk I know. In a roundabout way, some theists have reached the same conclusions regarding how to have a rational dialogue as LessWrongers.
All this combined has made it so that I and a few friends in the rationality community have become less worried about theism among someone’s beliefs than in the past. This is only true of the small number of religious people I tend to hang out with, which is a small sample, and my social exposure has pretty much always been set up to be to moderates, as opposed to a predominantly left-wing or right-wing crowd. If other rationalists share this attitude, this could be a reason, besides the halo effect, for increased tolerance of prominent theism in rationalist discourse.
Admittedly, even if being bullish about theists contributing to social epistemology isn’t due to the halo effect, ultimately it’s something that looks like a matter of social convenience, rather than a strategy optimized for truth-seeking. (Caveat: please nobody abruptly start optimizing social circles for truth-seeking and epistemic hygiene, i.e., cutting people out of your life who aren’t exemplary ultra-rationalists. This probably won’t increase your well-being or your ability to seek truth, long-term.)
I’d like to make clear that the claim I am making is more with respect to the assertions that Peterson is someone who has exemplary rationality, when that is clearly not the case. Rejecting religion is a sign that one is able to pass other epistemic hurdles. I used to be religious; I seriously thought about it because of the Sequences, and then I deconverted—that was that. I looked at it as the preschool entrance exam for tougher problems, so I took it quite seriously.
Also, I would never claim that theists are worse people in a moral sense. What is important to me, however, is that epistemic rigour in our community not be replaced by comforting rationalizations. I don’t know if that’s what’s happening here, but I have my suspicions.
Upvoted. Thanks for the clarifications. It seems you’re not talking about the mere presence of theists in the rationality community at all. Rather, given his theism (at best poorly articulated) and everything else specious in his views, I agree it’s indeed alarming that JBP might be seen as an exemplar of rationality. My impression is that it’s still a minority of community members who find JBP impressive. I currently have no further thoughts on how significant that minority is, or what it portends for the rest of the rationality community.
When it comes to epistemic rigour, you show in your post that you clearly have a strong personal motivation for believing that rejecting religion is a good sign of being able to pass other epistemic hurdles, but at the same time you don’t provide any good evidence for the claim.

The priors for taking a single characteristic that’s tribal in nature, like religious belief, as highly informative about whether or not a person is rational aren’t good.

Crisis of Faith
I wouldn’t use rejection of religion as a signal—my guess is that most people who become atheists do so for social reasons. Church is boring, or upper-middle-class circles don’t take too kindly to religiosity, or whatever.
And is our community about epistemic rigor, or is it about instrumental rationality? If, as they say, rationality is about winning, the real test of rationality is whether you can, after rejecting Christianity, unreject it.
Have you read the Sequences? I don’t mean this disrespectfully, but this issue is covered extremely thoroughly early on. If you want to win, your map has to be right. If you want to be able to make meaningful scientific discoveries, your map has to be right. If you hold on to beliefs that aren’t true, your map won’t be right in many areas.
Have you been following the arguments about the Sequences? This issue has been covered fairly thoroughly over the last few years.
The problem, of course, is that the Sequences have been compiled in one place and heavily advertised as The Core of Rationality, whereas the arguments people have been having about the contents of the Sequences, the developments on top of their contents, the additions to the conceptual splinter canons that spun off of LW in the diaspora period, and so on aren’t terribly legible. So the null hypothesis is the contents of the Sequences, and until the contents of the years of argumentation that have gone on since the Sequences were posted are written up into new sequences, it’s necessary to continually try to come up with ad-hoc restatements of them—which is not a terribly heartening prospect.
Of course, the interpretations of the sacred texts will change over the years, even as the texts themselves remain the same. So: why does it matter if the map isn’t right in many areas? Is there a general factor of correctness, such that a map that’s wrong in one area can’t be trusted anywhere? Will benefits gained from errors in the map be more than balanced out by losses caused by the same errors? Or is it impossible to benefit from errors in the map at all?

No, I’m fairly new. Thanks for the background.

What would the benefits be of “unrejecting” Christianity, and what would that entail? I’d like to understand your last point a little better.
A correct epistemological process is likely to assign very low likelihood to the proposition of Christianity being true at some point. Even if Christianity is true, most Christians don’t have good epistemics behind their Christianity; so if there exists an epistemically justifiable argument for ‘being a Christian’, our hypothetical cradle-Christian rationalist is likely to reach the necessary epistemic skill level to see through the Christian apologetics he’s inherited before he discovers it.
At which point he starts sleeping in on Sundays; loses the social capital he’s accumulated through church; has a much harder time fitting in with Christian social groups; and cascades updates in ways that are, given the social realities of the United States and similar countries, likely to draw him toward other movements and behavior patterns, some of which are even more harmful than most denominations of Christianity, and away from the anthropological accumulations that correlate with Christianity, some of which may be harmful but some of which may be protecting against harms that aren’t obvious even to those with good epistemics. Oops! Is our rationalist winning?
To illustrate the general class of problem, let’s say you’re a space businessman, and your company is making a hundred space shekels every metric tick, and spending eighty space shekels every metric tick. You decide you want to make your company more profitable, and figure out that a good lower-order goal would be to increase its cash incoming. You implement a new plan, and within a few megaticks, your company is making four hundred space shekels every metric tick, and spending a thousand. Oops! You’ve increased your business’s cash incoming, but you’ve optimized for too low-order a goal, and now your business isn’t profitable anymore.
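A minimal sketch of the arithmetic in the example above, using only the numbers already given (the function and variable names are mine and purely illustrative):

```python
# Optimizing the lower-order goal (cash incoming) while ignoring the
# higher-order goal (profit) can make the higher-order goal worse.

def profit(revenue: int, costs: int) -> int:
    """Profit per metric tick, in space shekels."""
    return revenue - costs

before = profit(revenue=100, costs=80)    # 20 shekels/tick: profitable
after = profit(revenue=400, costs=1000)   # -600 shekels/tick: no longer profitable

print(before, after)  # revenue quadrupled, but the quantity we actually cared about got worse
```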
Now, as you’ve correctly pointed out, epistemic rationality is important because it’s important for instrumental rationality. But the thing we’re interested in is instrumental rationality, not epistemic rationality. If the instrumental benefits of being a Christian outweigh the instrumental harms of being a Christian, it’s instrumentally rational to be a Christian. If Christianity is false and it’s instrumentally rational to be a Christian, epistemic rationality conflicts with instrumental rationality.
This is the easy-to-summarize scaffolding of what I’ll call the conflict argument. It isn’t the argument itself—the proper form of the argument would require convincing examples of such a conflict, which of course this margin is too small to contain. In a sentence, it seems that there are a lot of complaints common in these parts—especially depression and lack of social ties—that are the precise opposites of instrumental benefits commonly attributed to religious participation. In more than a sentence, lambdaphagy’s Tumblr is probably the best place to start reading.
(I don’t mean to position this as the last word on the subject, of course—it’s just a summary of a post-Sequences development in parts of the rationalist world. It’s possible to either take this one step further and develop a new counterargument to the conflict argument or come up with an orthodox Sequencist response to it.)
But the thing we’re interested in is instrumental rationality, not epistemic rationality.
Ironically, this sentence is epistemically true but instrumentally very dangerous.
See, to accurately assess which parts of epistemic rationality one should sacrifice for instrumental improvements requires a whole lot of epistemic rationality. And once you’ve made that sacrifice and lost some epistemic rationality, your capacity to make such trade-offs wisely in the future is severely impaired. But if you just focus on epistemic rationality, you can get quite a lot of winning as a side effect.
To bring it back to our example: it’s very dangerous to convince yourself that Jesus died for your sins just because you notice Christians have more friends. To do so you need to understand why believing in Jesus correlates with having friends. If you have a strong enough understanding of friendship and social structures for that, you can easily make friends and build a community without Jesus.
But if you install Jesus on your system you’re now left vulnerable to a lot of instrumentally bad things, with no guarantee that you’ll actually get the friends and community you wanted.
Assuming that the instrumental utility of religion can be separated from the religious parts is an old misconception. If all you need is a bit of sociological knowledge, shouldn’t it be possible to just engineer a cult of reason? Well, as it turns out, people have been trying for centuries, and it’s never really stuck. For one thing, there are, in startup terms, network effects. I’m not saying you should think of St. Paul as the Zuckerberg of Rome, but I’ve been to one of those churches where they dropped all the wacky supernatural stuff and I’d rather go to a meetup for GNU Social power users.
For another thing, it’s interesting that Eliezer Yudkowsky, who seems to be primarily interested in intellectual matters that relate to entities that are, while constrained by the rules of the universe, effectively all-knowing and all-powerful, and who cultivated interest in the mundane stuff out of the desire to get more people interested in said intellectual matters, seems to have gotten unusually far with the cult-of-reason project, at least so far.
Of course, if we think of LW as the seed of what could become a new religion (or at least a new philosophical scene, as world-spanning empires sometimes generate when they’re coming off a golden age—and didn’t Socrates have a thing or two to say about raising the sanity waterline?), this discussion would have to look a lot different, and ideally would be carried out in a smoke-filled room somewhere. You don’t want everyone in your society believing whatever nonsense will help them out with their social climbing, for reasons which I hope are obvious. (On the other hand, if we think of LW as the seed of what could become a new religion, its unusual antipathy to other religions—I haven’t seen anyone deploy the murder-Gandhi argument to explain why people shouldn’t do drugs or make tulpas—is an indisputable adaptive necessity. So there’s that.)
If, on the other hand, we think of LW as some people who are interested in instrumental rationality, the case has to be made that there’s at least fruit we can reach without becoming giraffes in grinding epistemic rationality. But most of us are shut-ins who read textbooks for fun, so how likely should we think it is that our keys are under the streetlight?
its unusual antipathy to other religions—I haven’t seen anyone deploy the murder-Gandhi argument to explain why people shouldn’t do drugs or make tulpas
The murder-Gandhi argument against drugs is so common it has a name, “addiction.” Rationalists appear to me to have a perfectly rational level of concern about addiction (which means being less concerned about certain drugs, such as MDMA, and more concerned about other drugs, such as alcohol).
I am puzzled about how making tulpas could interfere with one’s ability to decide not to make any more tulpas.
The only explanation I caught wind of for the parking lot incident was that it had something to do with tulpamancy gone wrong. And I recall SSC attributing irreversible mental effects to hallucinogens and noting that a lot of the early proponents of hallucinogens ended up somewhat wacky.
But maybe it really does all work out such that the sorts of things that are popular in upper-middle-class urban twenty-something circles just aren’t anything to worry about, and the sorts of things that are unpopular in them (or worse, popular elsewhere) just are. What a coincidence!
Is your goal to have a small community of friends or to take over the world? The tightest-knit religions are the smaller and weirder ones, so if you want stronger social bonds you should join Scientology and not the Catholic church.
Or, you know, you can just go to a LessWrong meetup. I was at one yesterday: we had cake, and wine, and we did a double crux discussion about rationality and self-improvement. I dare say that we’re getting at least half as much community benefit as the average church-goer, all for a modest investment of effort and without sacrificing our sanity.

If someone doesn’t have a social life because they don’t leave their house, they should leave their house. The religious shut-ins who read the Bible for fun aren’t getting much social benefit either.
One day I will have to write a longer text about this, but in short: it is a false dilemma to see “small and tight-knit community” and “taking over the world” as mutually exclusive. The Catholic church is not a small community, but it contains many small communities. It is a “eukaryotic” community, containing both the tight-knit subgroups and the masses of lukewarm believers, which together contribute to its long-term survival.

I would like to see the rationalist community become “eukaryotic” in a similar way. In certain ways this is already happening: we have people who work at MIRI and CFAR, we have people who participate in local meetups, we have people who debate online. This diversity is a strength, not a weakness: if you only have one mode of participation, then people who are unable to participate in that one specific way are lost to the community.
The tricky part is keeping it all together. Preventing the tight-knit groups from excommunicating everyone else as “not real members”, but also preventing the lukewarm members from making it all about social interaction and abandoning the original purpose, because both of those are natural human tendencies.
I imagine this could be tricky to research even if people weren’t trying to obfuscate the reality (which of course they are). It would be difficult to distinguish “these two people conspired together” from “they are two extremely smart people, living in the same city, so of course they are likely to have met each other”.
For example, in a small country with maybe five elite high schools, elite people of the same age have a high probability of having been high-school classmates. If they later take over the world together, it would make a good story to claim that they already conspired to do that during high school. Even if the real idea only came 20 years later, no one would believe it after some journalist finds out that they are actually former classmates.
So the information is likely to be skewed in both ways: not seeing connections where they are, and seeing meaningful connections in mere coincidences.
Small groups have a bigger problem: they won’t be very well documented. As far as I know, the only major source on the Junto is Ben Franklin’s autobiography, which I’ve already read.
Large groups, of course, have an entirely different problem: if they get an appreciable amount of power, conspiracy theorists will probably find out, and put out reams of garbage on them. I haven’t started trying to look into the history of the Freemasons yet because I’m not sure about the difficulty of telling garbage from useful history.
That makes more sense. Broadly, I agree with Jacobian here, but there are a few points I’d like to add.
First, it seems to me that there aren’t many situations in which this is actually the case. If you treat people decently (regardless of their religion or lack thereof), you are unlikely to lose friends for being an atheist (especially if you don’t talk about it). Sure, don’t be a jerk and inappropriately impose your views on others, and don’t break it to your fundamentalist parents that you think religion is a sham. But situations where it would be instrumentally rational to hold false beliefs about important things, situations in which there really would be an expected net benefit even after factoring in the knock-on effects of making your epistemological slope just that bit more slippery, seem constrained to “there’s an ASI who will torture me forever if I don’t consistently system-2 convince myself that god exists”. At worst, if you really can’t find other ways of socializing, keep going to church while internally keeping an accurate epistemology.
Second, I think you’re underestimating how quickly beliefs can grow their roots. For example, after reading Nate’s Dark Arts of Rationality, I made a carefully-weighed decision to adopt certain beliefs on a local level, even though I don’t believe them globally: “I can understand literally anything if I put my mind to it for enough time”, “I work twice as well while wearing shoes”, “I work twice as well while not wearing shoes” (the internal dialogue for adopting this one was pretty amusing), etc. After creating the local “shoe” belief and intensely locally-believing it, I zoomed out and focused on labelling it as globally-false. I was met with harsh resistance from thoughts already springing up to rationalize why my shoes actually could make me work harder. I had only believed this ridiculous thing for a few seconds, and my subconscious was already rushing to its defense. For this reason, I decided against globally-believing anything I know to be false, even though it may be “instrumentally rational” for me to always study as if I believe AGI is a mere two decades away. I am not yet strong enough to do this safely.
Third, I think this point of view underestimates the knock-on effects I mentioned earlier. Once you’ve crossed that bright line, once “instrumental rationality let me be Christian” is established, what else is left? Where is the Schelling fence for beliefs? I don’t know, but I think it’s better to be safe than sorry—especially in light of 1) and 2).
It should be noted that there are practically secular Jewish communities that seem to get a lot of the benefit of religion without actually believing in supernatural things. I haven’t visited one of those myself, but friends who looked into it seemed to think they were doing pretty well on the epistemics front. So for people interested in religion, but not in the supernatural-believing stuff: maybe joining a secular Jewish community would be a good idea?
It has to be correct and useful, and correctness only matters for winning inasmuch as it entails usefulness. Having a lot of correct information about golf is no good if you want to be a great chef.
Having correct object-level information and having a correct epistemological process and belief system are two different things. An incorrect epistemological process is likely to reject information it doesn’t like.
Right, that’s a possible response: the sacrifice of epistemic rationality for instrumental rationality can’t be isolated. If your epistemic process leads to beneficial incorrect conclusions in one area, your epistemic process is broadly incorrect, and will necessarily lead to harmful incorrect conclusions elsewhere.
But people seem to be pretty good at compartmentalizing. Robert Aumann is an Orthodox Jew. (Which is the shoal that some early statements of the general-factor-of-correctness position broke on, IIRC.) And there are plenty of very instrumentally rational Christians in the world.
On the other hand, maybe people who’ve been exposed to all this epistemic talk won’t be so willing to compartmentalize—or at least to compartmentalize the sorts of things early LW used as examples of flaws in reasoning.
I’m not sure how to square “rejecting religion is the preschool entrance exam of rationality” with “people are pretty good at compartmentalizing”. Certainly there are parts of the Sequences that imply the insignificance of compartmentalization.
I personally recall, maybe seven years ago, having to break the news to someone that Aumann is an Orthodox Jew. This was a big deal at the time! We tend to forget how different the rationalist consensus is from the contents of the Sequences.
Every once in a while someone asks me or someone I know about what “postrationality” is, and they’re never happy with the answer—“isn’t that just rationality?” Sure, to an extent; but to the extent that it is, it’s because “postrationality” won. And to tie this into the discussion elsewhere in the thread: postrationality mostly came out of a now-defunct secret society.
Your line of reasoning re: Aumann feels akin to “X billionaire dropped out of high school / college, ergo you can drop out, too”. Sure, perhaps some can get by with shoddy beliefs, but why disadvantage yourself?
I wouldn’t use rejection of religion as a signal
Point of clarification: are you claiming that rejecting religion provides no information about someone’s rationality, or that it provides insignificant information?
If postrationality really did win, I don’t know that it should have. I haven’t been convinced that knowingly holding false beliefs is instrumentally rational in any realistic world, as I outlined below.
Your line of reasoning re: Aumann feels akin to “X billionaire dropped out of high school / college, ergo you can drop out, too”. Sure, perhaps some can get by with shoddy beliefs, but why disadvantage yourself?
If people are pretty good at compartmentalization, it’s at least not immediately clear that there’s a disadvantage here.
It’s also not immediately clear that there’s a general factor of correctness, or, if there is, what the correctness distribution looks like.
It’s at least a defensible position that there is a general factor of correctness, but that it isn’t useful, because it’s just an artifact of most people being pretty dumb, and there’s no general factor within the set of people who aren’t just pretty dumb. I do think there’s a general factor of not being pretty dumb, but I’m not sure about a general factor of correctness beyond that.
It seems probable that “ignore the people who are obviously pretty dumb” is a novel and worthwhile message for some people, but not for others. I grew up in a bubble where everyone already knew to do that, so it’s not for me, but maybe there are people who draw utility from being informed that they don’t have to take seriously genuine believers in astrology or homeopathy or whatever.
Point of clarification: are you claiming that rejecting religion provides no information about someone’s rationality, or that it provides insignificant information?
In a purely statistical sense, rejecting religion almost certainly provides information about someone’s rationality, because things tend to provide information about other things. Technically, demographics provide information about someone’s rationality. But not information that’s useful for updating about specific people.
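For what it’s worth, here is a quick Bayes’-rule sketch of what “provides information, but not much” can look like. The numbers are made up purely for illustration; nothing in this thread pins them down.

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) by Bayes' rule, given a prior and the two likelihoods."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Hypothetical: 10% of people clear some bar for "rational"; suppose 60% of
# them reject religion versus 40% of everyone else (illustrative numbers only).
print(posterior(0.10, 0.60, 0.40))  # ~0.14: a real update, but a small one
```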
Religious affiliation is a useful source of information about domain-specific rationality in areas that don’t lend themselves well to compartmentalization. There was a time when it made sense to discount the opinions of Mormon archaeologists about the New World, although now that they’ve been through a period of their religiously-motivated claims totally failing to pan out, it probably lends itself to compartmentalization alright.
On the other hand, I wouldn’t discount the opinions of Mormon historical linguists about Proto-Uralic. But I would discount the opinions of astrologers about Proto-Uralic, unless they have other historical-linguistic work in areas that I can judge the quality of and that work seems reasonable.
If postrationality really did win, I don’t know that it should have. I haven’t been convinced that knowingly holding false beliefs is instrumentally rational in any realistic world, as I outlined below.
Postrationality isn’t about knowingly holding false beliefs. Insofar as postrationality has a consensus that can be distilled to one sentence, it’s “you can’t kick everything upstairs to the slow system, so you should train the fast system.” But that’s a simplification.
“you can’t kick everything upstairs to the slow system, so you should train the fast system.”
I know that postrationality can’t be distilled to a single sentence and I’m picking on it a bit unfairly, but “post”-rationality can’t differentiate itself from rationality on that. Eliezer wrote about system 1 and system 2 in 2006:
When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren’t always true, and perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality”. Both systems can serve the goal of truth, or defeat it, according to how they are used.
And it’s not like this statement was ever controversial on LW.
You can’t get any more “core LW rationality” than the fricking Sequences. If someone thinks that rationality is about forcing everything into System 2 then, well, they should reread the fricking Sequences.
Minor: but I appreciate you using the word “fricking”, instead of the obvious alternative. For me, it feels like it gets the emphaticness across just as well, without the crudeness.
While Peterson is a bit sloppy when he talks about truth, the notion of truth he is working with is not simply his own construction to write some bottom line. There is a lot of literature on pragmatist analyses of truth and belief that roughly aligns with what he is saying, and which I would consider closer to the nature of truth (truer about truth) than the correspondence theory of truth presented in the Sequences.
I recommend Peirce’s How to Make Our Ideas Clear, Putnam’s Corresponding with Reality, and James’s The Will to Believe. Peirce and James can easily be found free online by searching, and I can PM you Putnam if you want it.
I have a generally positive opinion of Peterson but I wouldn’t be adding anything to the conversation by talking about why, you already covered his good points. Instead I’ll talk about why I’m leery of dubbing him a Rationalist hero.
Peterson’s entire first podcast with Sam Harris was an argument over Peterson’s (ab)use of the word “truth”. I’m not sure if anyone walked away from that experience entirely sure what Peterson means when he says “truth”.
One can assume that he means something like “metaphorical truth”, that some stories contain a kind of truth that is more like “usefulness as a map” than “accurate reflection of objective reality”. Sam Harris’ rejoinder was along the lines that using stories as ways of discovering meaning is all well and good, but believing those stories are true leads to horrifying failure modes.
For example, if you believe some particular bit of metaphysical narrative is true, you feel compelled act on contingent details of the story that are unrelated to the intended moral. Insert your own favorite minor bit of religious dogma that led to hundreds of years of death and suffering.
At a civilizational level, the norm of “believing that false stories are actually true is good for society” selects for simple, stupid stories and/or interpretations of stories that effectively control and dominate the population.
I’ve seen hints that he’s written his bottom line with respect to Christianity. He’s trying to prove how it’s true, by constructing an elaborate framework that redefines “truth”, not figure out if it’s true. If it weren’t for this, and the general sense that he’s treading on really crucial concepts in order to make himself seem more cosmically right, then I would be more fully on board.
Not to bias anyone, but as anecdata a couple of my Christian friends have told me they find it difficult to understand Peterson’s framing of his (relationship with) Christianity either. So that from the perspective on the other end that Peterson is trying at some sketchy epistemology could be telling. Maybe he’s committing some kind of [golden mean fallacy[(https://en.wikipedia.org/wiki/Argumentto moderation) and advocating for widespread [cultural Christianity](https://en.wikipedia.org/wiki/Cultural_Christian).
His concept of truth seems to be the main beef rationalists have with Peterson, and something I’ve also struggled with for a while.
I think this is partly solved with a healthy application of Rationalist Taboo—Peterson is a pragmatist, and AFAICT the word truth de-references as “that which it is useful to believe” for him. In practice although he adds a bunch of “metaphorical truths” under this umbrella, I have not seen him espouse any literal falsehoods as “useful to believe,” so his definition is just a strict generalization of our usual notion of truth.
Of course I’m not entirely happy with his use of the word, but if you assume as I do “it is useful to believe what is literally true” (i.e. usefulness = accuracy for maps) then his definition agrees on the literal level with your usual notion of truth.
The question then is what he means by metaphorical truth, and in what sense this is (as he claims) a higher truth than literal truth. The answer is something like “metaphorical truth is extracted meta-stories from real human behavior that are more essential to the human experience than any given actual story they are extracted from.” E.g. the story of Cain and Abel is more true to the human experience than the story of how you woke up, brushed your teeth, and went to work this morning. This is where the taboo needs to come in: what he means by more true is “it is more useful learn from Cain and Abel as a story about the human experience than it is to learn from your morning routine.”
I claim that this is a useful way to think about truth. For any given mythological story, I know it didn’t actually happen, but I also know that whether or not it actually happened is irrelevant to my life. So the regular definition of truth “did it actually happen” is not the right generalization to this setting. Why should I believe useless things? What I really want to know is (a) what does this story imply about reality and how I should act in the world and (b) if I do act that way, does my life get better? The claim is that if believing a story predictably makes your life better, then you should overload your definition of truth and treat the story as “true,” and there is no better definition of the word.
Regarding the Christian stories, his lecture series is called “The Psychological Significance of the Bible” and I don’t think he endorses anything other than a metaphorical interpretation thereof. He has for example made the same type of claims about the truth of Christianity as about Daoism, old Mesopotamian myths, Rowling’s Harry Potter, and various Disney movies. People probably get confused when he says things like “Christianity is more true than literal truth” without realizing he says the same things about Pinocchio and The Little Mermaid.
tl;dr: Taboo the word “truth”.
I think it’s pretty risk to play Rationalist taboo with what other people are saying. It’s supposed to be a technique for clarifying an argument by removing a word from the discussion, preventing it from being solely an argument about definitions. I would like it if Peterson would taboo the word “truth”, yeah.
I also don’t think that dereferencing the pointer actually helps. I object to how he uses “truth”, and I also object to the idea that Harry Potter is (dereferenced pointer)->[more psychologically useful to believe and to use as a map than discoveries about reality arrived at via empiricism]. It’s uh … it’s just not. Very much not. Dangerous to believe that it is, even. Equally if not more dangerous to believe that Christianity is [more psychologically useful to believe and to use as a map than discoveries about reality arrived at via empiricism]. I might sign on to something like, certain stories from Christianity are [a productive narrative lens to try on in an effort to understand general principles of psychology, maybe, sometimes].
This is indeed a hazardous application of Dark Arts to be applied judiciously and hopefully very very rarely. As a rule of thumb, if you feel like calling Harry Potter “true”, you’ve probably gone too far, IMO.
I do wonder whether you would change your mind after checking the links by Gaius Leviathan IX in a comment below. A lot of those did strike me as “literal falsehoods”, and seem to go against the things you outlined here.
I have previously noticed (having watched a good hundred hours of Peterson’s lectures) all of these things and these seem to me to be either straight-up misinterpretation on the part of the listener (taboo your words!) or the tiny number of inevitable false positives that comes out of Peterson operating his own nonstandard cognitive strategy, which is basically UNSONG Kabbalah.
This overall argument reminds me of the kind of student who protests that “i isn’t actually a number” or “a step function doesn’t actually have a derivative.”
Yeah, the benefits of literal truth are more altruistic and long-term.
This is what worries me. I frankly haven’t looked into Peterson too closely, but what I’ve heard hasn’t impressed me. I found some of his quotes in OP’s piece to be quite insightful, but I don’t understand why he’s spoken of so glowingly when he apparently espouses theist beliefs regularly. Warning signs of halo effect?
Speaking for myself, I do not agree with some Peterson’s opinions, but I like him as a person. Probably the best way to explain it is that he is the kind of “politically controversial” person which I wouldn’t be scared to hypothetically find out that he is actually living next door to me.
I find the way he uses the word “truth” really annoying. Yet, if I told him, I don’t expect him to… yell some abuse at me, hit me with a bike lock, or try to get me fired from my job… just to give a few examples of recent instruments of political discourse. He would probably smile in a good mood, and then we could change the topic.
Peterson is definitely not a rationalist, but there is something very… psychologically healthy… about him. It’s like when you are in a room full of farts, already getting more or less used to it, and then suddenly someone opens the window and lets the fresh air in. He can have strong opinions without being nasty as a person. What a welcome change, just when it seemed to me that the political discourse online is dominated by, uhm, a combination of assholes and insane people (and I am not channeling Korzybski now; I am using the word in its old-fashioned sense).
I’d like to somehow combine the rationality of LessWrong with the personality of Peterson. To become a rational lobster, kind of. Smart, strong, and a nice neighbor.
EDIT:
I guess I wanted to say that I am not concerned with Peterson’s lack of x-rationality—but neither I deny it -- because I do not intend to use him as an example of x-rationality. In many aspects he talks nonsense (although probably not more than an average person). But he has other strengths I want to copy.
I see Peterson as a valid and valuable member of the “niceness and civilization” tribe, if there is such a thing. As opposed to e.g. people who happen to share my disrespect of religion and mysticism, but personality-wise are just despicable little Nazis, and I definitely wouldn’t want them as neighbors.
I think:
compartmentalization by theists makes it so they’re apparently as rational or competent in thought on a lot of topics as anyone else, despite disagreements regarding religion and theism;
bias in all its forms is so ubiquitous outside of a domain of beliefs related to skepticism or religion, non-theists often don’t make for more rational conversation partners than theists;
(this might be more unique to me, but) theists often have a better map of abstracts parts of the territory than non-theists.
An example of (3) is how seeking conflict resolution through peaceful and truth-seeking deliberation rather than through tribalism and force. I’ve observed Christians I know are likelier to stay politically moderate as politics has become more polarized the last couple years. Something about loving your neighbour and the universality of human souls being redeemable or whatever results in Christians opting for mistake theory over conflict theory than non-religious folk I know. In a roundabout way, some theists have reached the same conclusions regarding how to have a rational dialogue as LessWrongers.
All this combined has made it so myself and a few friends in the rationality community have become less worried about theism among someone’s beliefs than in the past. This is only true of a small number of religious people I tend to hang out with, which is a small sample, and my social exposure has pretty much always been set up to be to moderates, as opposed to a predominantly left-wing or right-wing crowd. If other rationalists share this attitude, this could be the reason for increased tolerance for prominent theism in rationalist discourse besides the halo effect.
Admittedly, even if being bullish about theists contributing to social epistemology isn’t due to the halo effect, ultimately it’s something that looks like a matter of social convenience, rather than a strategy optimized for truth-seeking. (Caveat: please nobody abruptly start optimizing social circles for truth-seeking and epistemic hygiene, i.e., cutting people out of your life who aren’t exemplary ultra-rationalists. This probably won’t increase your well-being or your ability to seek truth, long-term.)
I’d like to make clear that the claim i am making is more with respect to the assertions that Peterson is someone who has exemplary rationality, when that is clearly not the case. Rejecting religion is a sign that one is able to pass other epistemic hurdles. I used to be religious, I seriously thought about it because of the Sequences, and then I deconverted—that was that. I looked at it as the preschool entrance exam for tougher problems, so I took it quite seriously.
Also, I would never claim that theists are worse people in a moral sense. What is important to me, however, is that epistemic rigour in our community not be replaced by comforting rationalizations. I don’t know if that’s what’s happening here, but I have my suspicions.
Upvoted. Thanks for the clarifications. It seems you’re not talking about the mere presence of theists in the rationality community at all, but rather in spite of his theism, being at least poorly articulated, and everything else specious in his views, I agree it’s indeed alarming JBP might be seen as an exemplar of rationality. It’s my impression it is still a minority of community members who agree JBP is impressive. I’ve currently no more thoughts on how significant that minority is, or what its portents for the rest of rationality might be.
When it comes to epistemic rigour, you show in your post that you clearly have a strong personal motivation for believing that rejecting religion is a good sign to pass other epistemic hurdles but at the same you don’t provide any good evidence for the claim.
The priors for taken a particular single characteristic that’s tribal in nature like religious beliefs as a high information for whether or not a person is rational aren’t good.
Crisis of Faith
I wouldn’t use rejection of religion as a signal—my guess is that most people who become atheists do so for social reasons. Church is boring, or upper-middle-class circles don’t take too kindly to religiosity, or whatever.
And is our community about epistemic rigor, or is it about instrumental rationality? If, as they say, rationality is about winning, the real test of rationality is whether you can, after rejecting Christianity, unreject it.
Have you read the sequences? I don’t mean this disrespectfully, but this issue is covered extremely thoroughly early on. If you want to win, your map has to be right. If you want to be able to make meaningful scientific discoveries, your map has to be right. If you hold on to beliefs that aren’t true, your map won’t be right In many areas.
Have you been following the arguments about the Sequences? This issue has been covered fairly thoroughly over the last few years.
The problem, of course, is that the Sequences have been compiled in one place and heavily advertised as The Core of Rationality, whereas the arguments people have been having about the contents of the Sequences, the developments on top of their contents, the additions to the conceptual splinter canons that spun off of LW in the diaspora period, and so on aren’t terribly legible. So the null hypothesis is the contents of the Sequences, and until the contents of the years of argumentation that have gone on since the Sequences were posted are written up into new sequences, it’s necessary to continually try to come up with ad-hoc restatements of them—which is not a terribly heartening prospect.
Of course, the interpretations of the sacred texts will change over the years, even as the texts themselves remain the same. So: why does it matter if the map isn’t right in many areas? Is there a general factor of correctness, such that a map that’s wrong in one area can’t be trusted anywhere? Will benefits gained from errors in the map be more than balanced out by losses caused by the same errors? Or is it impossible to benefit from errors in the map at all?
No, I’m fairly new. Thanks for the background.
What would the benefits be of “unrejecting” Christianity, and what would that entail? I’d like to understand your last point a little better.
A correct epistemological process is likely to assign very low likelihood to the proposition of Christianity being true at some point. Even if Christianity is true, most Christians don’t have good epistemics behind their Christianity; so if there exists an epistemically justifiable argument for ‘being a Christian’, our hypothetical cradle-Christian rationalist is likely to reach the necessary epistemic skill level to see through the Christian apologetics he’s inherited before he discovers it.
At which point he starts sleeping in on Sundays; loses the social capital he’s accumulated through church; has a much harder time fitting in with Christian social groups; and cascades updates in ways that are, given the social realities of the United States and similar countries, likely to draw him toward other movements and behavior patterns, some of which are even more harmful than most denominations of Christianity, and away from the anthropological accumulations that correlate with Christianity, some of which may be harmful but some of which may be protecting against harms that aren’t obvious even to those with good epistemics. Oops! Is our rationalist winning?
To illustrate the general class of problem, let’s say you’re a space businessman, and your company is making a hundred space shekels every metric tick, and spending eighty space shekels every metric tick. You decide you want to make your company more profitable, and figure out that a good lower-order goal would be to increase its cash incoming. You implement a new plan, and within a few megaticks, your company is making four hundred space shekels every metric tick, and spending a thousand. Oops! You’ve increased your business’s cash incoming, but you’ve optimized for too low-order a goal, and now your business isn’t profitable anymore.
Now, as you’ve correctly pointed out, epistemic rationality is important because it’s important for instrumental rationality. But the thing we’re interested in is instrumental rationality, not epistemic rationality. If the instrumental benefits of being a Christian outweigh the instrumental harms of being a Christian, it’s instrumentally rational to be a Christian. If Christianity is false and it’s instrumentally rational to be a Christian, epistemic rationality conflicts with instrumental rationality.
This is the easy-to-summarize scaffolding of what I’ll call the conflict argument. It isn’t the argument itself—the proper form of the argument would require convincing examples of such a conflict, which of course this margin is too small to contain. In a sentence, it seems that there are a lot of complaints common in these parts—especially depression and lack of social ties—that are the precise opposites of instrumental benefits commonly attributed to religious participation. In more than a sentence, lambdaphagy’s Tumblr is probably the best place to start reading.
(I don’t mean to position this as the last word on the subject, of course—it’s just a summary of a post-Sequences development in parts of the rationalist world. It’s possible to either take this one step further and develop a new counterargument to the conflict argument or come up with an orthodox Sequencist response to it.)
Ironically, this sentence is epistemically true but instrumentally very dangerous.
See, to accurately assess which parts of epistemic rationality one should sacrifice for instrumental improvements requires a whole lot of epistemic rationality. And once you’ve made that sacrifice and lost some epistemic rationality, your capacity to make such trade-offs wisely in the future is severely impaired. But if you just focus on epistemic rationality, you can get quite a lot of winning as a side effect.
To bring it back to our example: it’s very dangerous to convince yourself that Jesus died for your sins just because you notice Christians have more friends. To make that trade wisely, you need to understand why believing in Jesus correlates with having friends in the first place. And if you have a strong enough understanding of friendship and social structures for that, you can easily make friends and build a community without Jesus.
But if you install Jesus on your system you’re now left vulnerable to a lot of instrumentally bad things, with no guarantee that you’ll actually get the friends and community you wanted.
Assuming that the instrumental utility of religion can be separated from the religious parts is an old misconception. If all you need is a bit of sociological knowledge, shouldn’t it be possible to just engineer a cult of reason? Well, as it turns out, people have been trying for centuries, and it’s never really stuck. For one thing, there are, in startup terms, network effects. I’m not saying you should think of St. Paul as the Zuckerberg of Rome, but I’ve been to one of those churches where they dropped all the wacky supernatural stuff and I’d rather go to a meetup for GNU Social power users.
For another thing, it’s notable that Eliezer Yudkowsky, who seems primarily interested in intellectual matters relating to entities that are, while constrained by the rules of the universe, effectively all-knowing and all-powerful, and who cultivated interest in the mundane stuff out of a desire to draw more people toward those intellectual matters, has gotten unusually far with the cult-of-reason project, at least so far.
Of course, if we think of LW as the seed of what could become a new religion (or at least a new philosophical scene, as world-spanning empires sometimes generate when they’re coming off a golden age—and didn’t Socrates have a thing or two to say about raising the sanity waterline?), this discussion would have to look a lot different, and ideally would be carried out in a smoke-filled room somewhere. You don’t want everyone in your society believing whatever nonsense will help them out with their social climbing, for reasons which I hope are obvious. (On the other hand, if we think of LW as the seed of what could become a new religion, its unusual antipathy to other religions—I haven’t seen anyone deploy the murder-Gandhi argument to explain why people shouldn’t do drugs or make tulpas—is an indisputable adaptive necessity. So there’s that.)
If, on the other hand, we think of LW as some people who are interested in instrumental rationality, the case has to be made that there’s at least some fruit we can reach without becoming giraffes by grinding epistemic rationality. But most of us are shut-ins who read textbooks for fun, so how likely should we think it is that our keys are under the streetlight?
The murder-Gandhi argument against drugs is so common it has a name, “addiction.” Rationalists appear to me to have a perfectly rational level of concern about addiction (which means being less concerned about certain drugs, such as MDMA, and more concerned about other drugs, such as alcohol).
I am puzzled about how making tulpas could interfere with one’s ability to decide not to make any more tulpas.
The only explanation I caught wind of for the parking lot incident was that it had something to do with tulpamancy gone wrong. And I recall SSC attributing irreversible mental effects to hallucinogens and noting that a lot of the early proponents of hallucinogens ended up somewhat wacky.
But maybe it really does all work out such that the sorts of things that are popular in upper-middle-class urban twenty-something circles just aren’t anything to worry about, and the sorts of things that are unpopular in them (or worse, popular elsewhere) just are. What a coincidence!
Is your goal to have a small community of friends or to take over the world? The tightest-knit religions are the smaller and weirder ones, so if you want stronger social bonds you should join Scientology and not the Catholic church.
Or, you know, you can just go to a LessWrong meetup. I went to one yesterday: we had cake, and wine, and we did a double crux discussion about rationality and self-improvement. I dare say that we’re getting at least half as much community benefit as the average church-goer, all for a modest investment of effort and without sacrificing our sanity.
If someone doesn’t have a social life because they don’t leave their house, they should leave their house. The religious shut-ins who read the Bible for fun aren’t getting much social benefit either.
Rationality is a bad religion, but if you understand religions well enough you probably don’t need one.
One day I will have to write a longer text about this, but briefly: it is a false dilemma to see “small and tight-knit community” and “taking over the world” as mutually exclusive. The Catholic church is not a small community, but it contains many small communities. It is a “eukaryotic” community, containing both tight-knit subgroups and masses of lukewarm believers, which together contribute to its long-term survival.
I would like to see the rationalist community become “eukaryotic” in a similar way. In certain ways this is already happening: we have people who work at MIRI and CFAR, people who participate in local meetups, and people who debate online. This diversity is a strength, not a weakness: if there is only one mode of participation, then people who are unable to participate in that one specific way are lost to the community.
The tricky part is keeping it all together: preventing the tight-knit groups from excommunicating everyone else as “not real members”, but also preventing the lukewarm members from making it all about social interaction and abandoning the original purpose. Both are natural human tendencies.
One thing I’d like to see is more research into the effects of… if not secret societies, then at least societies of some sort.
For example, is it just a coincidence that Thiel and Musk, arguably the two most interesting public figures in the tech scene, are both PayPal Mafia?
Another good example is the Junto.
I imagine this could be tricky to research even if people didn’t try to obfuscate the reality (which of course they will). It would be difficult to distinguish “these two people conspired together” from “they are two extremely smart people living in the same city, so of course they are likely to have met each other”.
For example, in a small country with maybe five elite high schools, elite people of the same age have a high probability of having been high-school classmates. If they later take over the world together, it makes a good story to claim that they were already conspiring to do it back in high school. Even if the real idea only came 20 years later, no one would believe that after some journalist finds out that they are, in fact, former classmates.
So the information is likely to be skewed in both directions: not seeing connections that exist, and seeing meaningful connections in mere coincidences.
Small groups have a bigger problem: they won’t be very well documented. As far as I know, the only major source on the Junto is Ben Franklin’s autobiography, which I’ve already read.
Large groups, of course, have an entirely different problem: if they get an appreciable amount of power, conspiracy theorists will probably find out, and put out reams of garbage on them. I haven’t started trying to look into the history of the Freemasons yet because I’m not sure about the difficulty of telling garbage from useful history.
That makes more sense. Broadly, I agree with Jacobian here, but there are a few points I’d like to add.
First, it seems to me that there aren’t many situations in which this is actually the case. If you treat people decently (regardless of their religion or lack thereof), you are unlikely to lose friends for being an atheist (especially if you don’t talk about it). Sure, don’t be a jerk and inappropriately impose your views on others, and don’t break it to your fundamentalist parents that you think religion is a sham. But the situations where it would be instrumentally rational to hold false beliefs about important things, the situations in which there really would be an expected net benefit even after factoring in the knock-on effects of making your epistemological slope just that bit more slippery, seem constrained to “there’s an ASI who will torture me forever if I don’t consistently system-2 convince myself that god exists”. At worst, if you really can’t find other ways of socializing, keep going to church while internally keeping an accurate epistemology.
Second, I think you’re underestimating how quickly beliefs can grow their roots. For example, after reading Nate’s Dark Arts of Rationality, I made a carefully-weighed decision to adopt certain beliefs on a local level, even though I don’t believe them globally: “I can understand literally anything if I put my mind to it for enough time”, “I work twice as well while wearing shoes”, “I work twice as well while not wearing shoes” (the internal dialogue for adopting this one was pretty amusing), etc. After creating the local “shoe” belief and intensely locally-believing it, I zoomed out and focused on labelling it as globally-false. I was met with harsh resistance from thoughts already springing up to rationalize why my shoes actually could make me work harder. I had only believed this ridiculous thing for a few seconds, and my subconscious was already rushing to its defense. For this reason, I decided against globally-believing anything I know to be false, even though it may be “instrumentally rational” for me to always study as if I believe AGI is a mere two decades away. I am not yet strong enough to do this safely.
Third, I think this point of view underestimates the knock-on effects I mentioned earlier. Once you’ve crossed that bright line, once “instrumental rationality let me be Christian” is established, what else is left? Where is the Schelling fence for beliefs? I don’t know, but I think it’s better to be safe than sorry—especially in light of 1) and 2).
It should be noted that there are practically-secular Jewish communities that seem to get a lot of the benefit of religion without actually believing in supernatural things. I haven’t visited one of those myself, but friends who looked into it seemed to think they were doing pretty well on the epistemics front. So for people interested in religion, but not in the supernatural-believing stuff: maybe joining a secular Jewish community would be a good idea?
That does seem to be a popular option for people around here who have the right matrilineage for it.
It has to be correct and useful, and correctness only matters for winning inasmuch as it entails usefulness. Having a lot of correct information about golf is no good if you want to be a great chef.
Having correct object-level information and having a correct epistemological process and belief system are two different things. An incorrect epistemological process is likely to reject information it doesn’t like.
And having correct and relevant object-level information is a third thing.
Right, that’s a possible response: the sacrifice of epistemic rationality for instrumental rationality can’t be isolated. If your epistemic process leads to beneficial incorrect conclusions in one area, your epistemic process is broadly incorrect, and will necessarily lead to harmful incorrect conclusions elsewhere.
But people seem to be pretty good at compartmentalizing. Robert Aumann is an Orthodox Jew. (Which is the shoal that some early statements of the general-factor-of-correctness position broke on, IIRC.) And there are plenty of very instrumentally rational Christians in the world.
On the other hand, maybe people who’ve been exposed to all this epistemic talk won’t be so willing to compartmentalize—or at least to compartmentalize the sorts of things early LW used as examples of flaws in reasoning.
Which is why you shouldn’t have written “necessarily”.
I’m not sure how to square “rejecting religion is the preschool entrance exam of rationality” with “people are pretty good at compartmentalizing”. Certainly there are parts of the Sequences that imply the insignificance of compartmentalization.
I personally recall, maybe seven years ago, having to break the news to someone that Aumann is an Orthodox Jew. This was a big deal at the time! We tend to forget how different the rationalist consensus is from the contents of the Sequences.
Every once in a while someone asks me or someone I know about what “postrationality” is, and they’re never happy with the answer—“isn’t that just rationality?” Sure, to an extent; but to the extent that it is, it’s because “postrationality” won. And to tie this into the discussion elsewhere in the thread: postrationality mostly came out of a now-defunct secret society.
Your line of reasoning re: Aumann feels akin to “X billionaire dropped out of high school / college, ergo you can drop out, too”. Sure, perhaps some can get by with shoddy beliefs, but why disadvantage yourself?
Point of clarification: are you claiming that rejecting religion provides no information about someone’s rationality, or that it provides insignificant information?
If postrationality really did win, I don’t know that it should have. I haven’t been convinced that knowingly holding false beliefs is instrumentally rational in any realistic world, as I outlined below.
If people are pretty good at compartmentalization, it’s at least not immediately clear that there’s a disadvantage here.
It’s also not immediately clear that there’s a general factor of correctness, or, if there is, what the correctness distribution looks like.
It’s at least a defensible position that there is a general factor of correctness, but that it isn’t useful, because it’s just an artifact of most people being pretty dumb, and there’s no general factor within the set of people who aren’t just pretty dumb. I do think there’s a general factor of not being pretty dumb, but I’m not sure about a general factor of correctness beyond that.
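To illustrate what that artifact could look like, here’s a toy simulation with entirely invented parameters (not evidence either way): mix a uniformly low-accuracy subpopulation with one whose accuracy across domains is independent, and the pooled data shows a “general factor” that vanishes once you restrict to the non-dumb subset.

```python
# Toy simulation: two domains of "correctness", a uniformly low-accuracy
# subpopulation plus a subpopulation whose domain accuracies are independent.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

dumb = rng.normal(loc=0.3, scale=0.05, size=(n, 2))  # low in both domains
rest = rng.normal(loc=0.7, scale=0.10, size=(n, 2))  # higher, but independent

pooled = np.vstack([dumb, rest])
print("pooled correlation:   ", np.corrcoef(pooled.T)[0, 1])  # clearly positive
print("non-dumb subset only: ", np.corrcoef(rest.T)[0, 1])    # roughly zero
```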
It seems probable that “ignore the people who are obviously pretty dumb” is a novel and worthwhile message for some people, but not for others. I grew up in a bubble where everyone already knew to do that, so it’s not for me, but maybe there are people who draw utility from being informed that they don’t have to take seriously genuine believers in astrology or homeopathy or whatever.
In a purely statistical sense, rejecting religion almost certainly provides information about someone’s rationality, because things tend to provide information about other things. Technically, demographics provide information about someone’s rationality. But not information that’s useful for updating about specific people.
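To put rough numbers on the “information, but not useful for updating about specific people” distinction, here is a quick Bayes sketch; the probabilities are entirely invented for illustration.

```python
# Hypothetical numbers only: if rejecting religion is only slightly more common
# among highly rational people, observing it is technically informative, but the
# resulting update about any specific person is small.
p_rational = 0.10                 # prior that a given person is highly rational
p_reject_given_rational = 0.50    # assumed
p_reject_given_not = 0.40         # assumed

p_reject = (p_rational * p_reject_given_rational
            + (1 - p_rational) * p_reject_given_not)
posterior = p_rational * p_reject_given_rational / p_reject
print(round(posterior, 3))        # ~0.122: an update, but not a decisive one
```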
Religious affiliation is a useful source of information about domain-specific rationality in areas that don’t lend themselves well to compartmentalization. There was a time when it made sense to discount the opinions of Mormon archaeologists about the New World, although now that their religiously motivated claims have had plenty of time to totally fail to pan out, the area probably lends itself to compartmentalization all right.
On the other hand, I wouldn’t discount the opinions of Mormon historical linguists about Proto-Uralic. But I would discount the opinions of astrologers about Proto-Uralic, unless they have other historical-linguistic work in areas that I can judge the quality of and that work seems reasonable.
Postrationality isn’t about knowingly holding false beliefs. Insofar as postrationality has a consensus that can be distilled to one sentence, it’s “you can’t kick everything upstairs to the slow system, so you should train the fast system.” But that’s a simplification.
I know that postrationality can’t be distilled to a single sentence and I’m picking on it a bit unfairly, but “post”-rationality can’t differentiate itself from rationality on that. Eliezer wrote about system 1 and system 2 in 2006:
And it’s not like this statement was ever controversial on LW.
You can’t get any more “core LW rationality” than the fricking Sequences. If someone thinks that rationality is about forcing everything into System 2 then, well, they should reread the fricking Sequences.
Minor: but I appreciate you using the word “fricking”, instead of the obvious alternative. For me, it feels like it gets the emphasis across just as well, without the crudeness.
It’s hard to get Peterson second-hand. I recommend actually watching some of his lectures.
While Peterson is a bit sloppy when he talks about truth, the notion of truth he is working with is not simply his own construction to write some bottom line. There is a lot of literature on pragmatist analyses of truth and belief that roughly aligns with what he is saying, and that I would consider closer to the nature of truth (truer about truth) than the correspondence theory of truth presented in the Sequences.
I recommend Peirce’s How to Make Our Ideas Clear, Putnam’s Corresponding with Reality, and James’s The Will to Believe. Peirce and James can easily be found free online, and I can PM you the Putnam if you want it.