Have you been following the arguments about the Sequences? This issue has been covered fairly thoroughly over the last few years.
The problem, of course, is that the Sequences have been compiled in one place and heavily advertised as The Core of Rationality, whereas the arguments people have been having about the contents of the Sequences, the developments on top of their contents, the additions to the conceptual splinter canons that spun off of LW in the diaspora period, and so on aren’t terribly legible. So the null hypothesis is the contents of the Sequences, and until the contents of the years of argumentation that have gone on since the Sequences were posted are written up into new sequences, it’s necessary to continually try to come up with ad-hoc restatements of them—which is not a terribly heartening prospect.
Of course, the interpretations of the sacred texts will change over the years, even as the texts themselves remain the same. So: why does it matter if the map isn’t right in many areas? Is there a general factor of correctness, such that a map that’s wrong in one area can’t be trusted anywhere? Will benefits gained from errors in the map be more than balanced out by losses caused by the same errors? Or is it impossible to benefit from errors in the map at all?
No, I’m fairly new. Thanks for the background.
What would the benefits be of “unrejecting” Christianity, and what would that entail? I’d like to understand your last point a little better.
A correct epistemological process is likely, at some point, to assign very low probability to the proposition that Christianity is true. Even if Christianity is true, most Christians don’t have good epistemics behind their Christianity; so if there exists an epistemically justifiable argument for ‘being a Christian’, our hypothetical cradle-Christian rationalist is likely to reach the epistemic skill level needed to see through the Christian apologetics he’s inherited before he ever discovers that argument.
At which point he starts sleeping in on Sundays; loses the social capital he’s accumulated through church; has a much harder time fitting in with Christian social groups; and cascades updates in ways that are, given the social realities of the United States and similar countries, likely to draw him toward other movements and behavior patterns, some of which are even more harmful than most denominations of Christianity, and away from the anthropological accumulations that correlate with Christianity, some of which may be harmful but some of which may be protecting against harms that aren’t obvious even to those with good epistemics. Oops! Is our rationalist winning?
To illustrate the general class of problem, let’s say you’re a space businessman, and your company is making a hundred space shekels every metric tick, and spending eighty space shekels every metric tick. You decide you want to make your company more profitable, and figure out that a good lower-order goal would be to increase its incoming cash. You implement a new plan, and within a few megaticks, your company is making four hundred space shekels every metric tick, and spending a thousand. Oops! You’ve increased your business’s incoming cash, but you’ve optimized for too low-order a goal, and now your business isn’t profitable anymore.
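Spelling out the arithmetic of that example as a minimal sketch (the numbers are just the ones from the story; the point is that the proxy metric and the real objective move in opposite directions):

```python
# Minimal sketch of the proxy-metric trap in the space-business example.
# "Revenue" is the lower-order proxy being optimized; "profit" is the
# thing the businessman actually cares about.

def profit(revenue: float, expenses: float) -> float:
    return revenue - expenses

before = profit(revenue=100, expenses=80)    # 20 shekels per tick
after = profit(revenue=400, expenses=1000)   # -600 shekels per tick

print(before, after)  # revenue quadrupled, but the real objective collapsed
```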
Now, as you’ve correctly pointed out, epistemic rationality is important because it’s important for instrumental rationality. But the thing we’re interested in is instrumental rationality, not epistemic rationality. If the instrumental benefits of being a Christian outweigh the instrumental harms of being a Christian, it’s instrumentally rational to be a Christian. If Christianity is false and it’s instrumentally rational to be a Christian, epistemic rationality conflicts with instrumental rationality.
This is the easy-to-summarize scaffolding of what I’ll call the conflict argument. It isn’t the argument itself—the proper form of the argument would require convincing examples of such a conflict, which of course this margin is too small to contain. In a sentence, it seems that there are a lot of complaints common in these parts—especially depression and lack of social ties—that are the precise opposites of instrumental benefits commonly attributed to religious participation. In more than a sentence, lambdaphagy’s Tumblr is probably the best place to start reading.
(I don’t mean to position this as the last word on the subject, of course—it’s just a summary of a post-Sequences development in parts of the rationalist world. It’s possible to either take this one step further and develop a new counterargument to the conflict argument or come up with an orthodox Sequencist response to it.)
“But the thing we’re interested in is instrumental rationality, not epistemic rationality.”
Ironically, this sentence is epistemically true but instrumentally very dangerous.
See, accurately assessing which parts of epistemic rationality one should sacrifice for instrumental improvements requires a whole lot of epistemic rationality. And once you’ve made that sacrifice and lost some epistemic rationality, your capacity to make such trade-offs wisely in the future is severely impaired. But if you just focus on epistemic rationality, you can get quite a lot of winning as a side effect.
To bring it back to our example: it’s very dangerous to convince yourself that Jesus died for your sins just because you notice Christians have more friends. To make that trade safely, you need to understand why believing in Jesus correlates with having friends. And if you have a strong enough understanding of friendship and social structures for that, you can easily make friends and build a community without Jesus.
But if you install Jesus on your system you’re now left vulnerable to a lot of instrumentally bad things, with no guarantee that you’ll actually get the friends and community you wanted.
Assuming that the instrumental utility of religion can be separated from the religious parts is an old misconception. If all you need is a bit of sociological knowledge, shouldn’t it be possible to just engineer a cult of reason? Well, as it turns out, people have been trying for centuries, and it’s never really stuck. For one thing, there are, in startup terms, network effects. I’m not saying you should think of St. Paul as the Zuckerberg of Rome, but I’ve been to one of those churches where they dropped all the wacky supernatural stuff and I’d rather go to a meetup for GNU Social power users.
For another thing, it’s interesting that Eliezer Yudkowsky, who seems to be primarily interested in intellectual matters that relate to entities that are, while constrained by the rules of the universe, effectively all-knowing and all-powerful, and who cultivated interest in the mundane stuff out of the desire to get more people interested in said intellectual matters, seems to have gotten unusually far with the cult-of-reason project, at least so far.
Of course, if we think of LW as the seed of what could become a new religion (or at least a new philosophical scene, as world-spanning empires sometimes generate when they’re coming off a golden age—and didn’t Socrates have a thing or two to say about raising the sanity waterline?), this discussion would have to look a lot different, and ideally would be carried out in a smoke-filled room somewhere. You don’t want everyone in your society believing whatever nonsense will help them out with their social climbing, for reasons which I hope are obvious. (On the other hand, if we think of LW as the seed of what could become a new religion, its unusual antipathy to other religions—I haven’t seen anyone deploy the murder-Gandhi argument to explain why people shouldn’t do drugs or make tulpas—is an indisputable adaptive necessity. So there’s that.)
If, on the other hand, we think of LW as some people who are interested in instrumental rationality, the case has to be made that there’s at least fruit we can reach without becoming giraffes in grinding epistemic rationality. But most of us are shut-ins who read textbooks for fun, so how likely should we think it is that our keys are under the streetlight?
“…I haven’t seen anyone deploy the murder-Gandhi argument to explain why people shouldn’t do drugs or make tulpas…”
The murder-Gandhi argument against drugs is so common it has a name: “addiction.” Rationalists appear to me to have a perfectly rational level of concern about addiction (which means being less concerned about certain drugs, such as MDMA, and more concerned about other drugs, such as alcohol).
I am puzzled about how making tulpas could interfere with one’s ability to decide not to make any more tulpas.
The only explanation I caught wind of for the parking lot incident was that it had something to do with tulpamancy gone wrong. And I recall SSC attributing irreversible mental effects to hallucinogens and noting that a lot of the early proponents of hallucinogens ended up somewhat wacky.
But maybe it really does all work out such that the sorts of things that are popular in upper-middle-class urban twenty-something circles just aren’t anything to worry about, and the sorts of things that are unpopular in them (or worse, popular elsewhere) just are. What a coincidence!
Is your goal to have a small community of friends or to take over the world? The tightest-knit religions are the smaller and weirder ones, so if you want stronger social bonds you should join Scientology and not the Catholic Church.
Or, you know, you can just go to a LessWrong meetup. I went to one yesterday: we had cake, and wine, and we had a double-crux discussion about rationality and self-improvement. I dare say that we’re getting at least half as much community benefit as the average church-goer, all for a modest investment of effort and without sacrificing our sanity.
If someone doesn’t have a social life because they don’t leave their house, they should leave their house. The religious shut-ins who read the Bible for fun aren’t getting much social benefit either.
Rationality is a bad religion, but if you understand religions well enough you probably don’t need one.
One day I will have to write a longer text about this, but briefly: it is a false dilemma to see “small and tight-knit community” and “taking over the world” as mutually exclusive. The Catholic Church is not a small community, but it contains many small communities. It is a “eukaryotic” community, containing both the tight-knit subgroups and the masses of lukewarm believers, which together contribute to its long-term survival.
I would like to see the rationalist community become “eukaryotic” in a similar way. In certain ways this is already happening: we have people who work at MIRI and CFAR, people who participate in local meetups, and people who debate online. This diversity is a strength, not a weakness: if you only have one mode of participation, then people who are unable to participate in that one specific way are lost to the community.
The tricky part is keeping it all together: preventing the tight-knit groups from excommunicating everyone else as “not real members”, but also preventing the lukewarm members from making it all about social interaction and abandoning the original purpose, because both of those are natural human tendencies.
One thing I’d like to see is more research into the effects of… if not secret societies, then at least societies of some sort.
For example, is it just a coincidence that Thiel and Musk, arguably the two most interesting public figures in the tech scene, are both PayPal Mafia?
Another good example is the Junto.
I imagine this could be tricky to research even if people didn’t try to obfuscate the reality (which of course they will). It would be difficult to distinguish “these two people conspired together” from “they are two extremely smart people living in the same city, so of course they are likely to have met each other”.
For example, in a small country with maybe five elite high schools, elite people of the same age have a high probability of having been high-school classmates. If they later take over the world together, it makes a good story to claim that they were already conspiring to do so back in high school. Even if the real idea only came twenty years later, no one would believe that after some journalist finds out that they are, in fact, former classmates.
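As a toy illustration of how easily the “former classmates” coincidence arises, here is a sketch under the purely illustrative assumption that elite students of a given cohort are spread evenly across the schools:

```python
# Toy base-rate sketch: with only a few elite schools, two randomly chosen
# elite people of the same age sharing a school is a likely coincidence,
# not evidence of a conspiracy. Assumes a uniform spread across schools.

n_schools = 5

# P(same school) = sum over schools of p_i^2; with p_i = 1/n this is 1/n.
p_same_school = sum((1 / n_schools) ** 2 for _ in range(n_schools))

print(p_same_school)  # 0.2, i.e. one pair in five are "former classmates"
```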
So the information is likely to be skewed in both ways: not seeing connections where they are, and seeing meaningful connections in mere coincidences.
Small groups have a bigger problem: they won’t be very well documented. As far as I know, the only major source on the Junto is Ben Franklin’s autobiography, which I’ve already read.
Large groups, of course, have an entirely different problem: if they get an appreciable amount of power, conspiracy theorists will probably find out, and put out reams of garbage on them. I haven’t started trying to look into the history of the Freemasons yet because I’m not sure about the difficulty of telling garbage from useful history.
That makes more sense. Broadly, I agree with Jacobian here, but there are a few points I’d like to add.
First, it seems to me that there aren’t many situations in which this is actually the case. If you treat people decently (regardless of their religion or lack thereof), you are unlikely to lose friends for being an atheist (especially if you don’t talk about it). Sure, don’t be a jerk and inappropriately impose your views on others, and don’t break it to your fundamentalist parents that you think religion is a sham. But situations where it would be instrumentally rational to believe important falsehoods, situations in which there really would be an expected net benefit even after factoring in the knock-on effects of making your epistemological slope just that bit more slippery, seem constrained to “there’s an ASI who will torture me forever if I don’t consistently system-2 convince myself that god exists”. At worst, if you really can’t find other ways of socializing, keep going to church while internally keeping an accurate epistemology.
Second, I think you’re underestimating how quickly beliefs can grow their roots. For example, after reading Nate’s Dark Arts of Rationality, I made a carefully-weighed decision to adopt certain beliefs on a local level, even though I don’t believe them globally: “I can understand literally anything if I put my mind to it for enough time”, “I work twice as well while wearing shoes”, “I work twice as well while not wearing shoes” (the internal dialogue for adopting this one was pretty amusing), etc. After creating the local “shoe” belief and intensely locally-believing it, I zoomed out and focused on labelling it as globally-false. I was met with harsh resistance from thoughts already springing up to rationalize why my shoes actually could make me work harder. I had only believed this ridiculous thing for a few seconds, and my subconscious was already rushing to its defense. For this reason, I decided against globally-believing anything I know to be false, even though it may be “instrumentally rational” for me to always study as if I believe AGI is a mere two decades away. I am not yet strong enough to do this safely.
Third, I think this point of view underestimates the knock-on effects I mentioned earlier. Once you’ve crossed that bright line, once “instrumental rationality let me be Christian” is established, what else is left? Where is the Schelling fence for beliefs? I don’t know, but I think it’s better to be safe than sorry—especially in light of 1) and 2).
It should be noted that there are practically secular Jewish communities that seem to get a lot of the benefit of religion without actually believing in supernatural things. I haven’t visited one of those myself, but friends who looked into it seemed to think they were doing pretty well on the epistemics front. So for people interested in religion, but not in the supernatural-believing stuff: maybe joining a secular Jewish community would be a good idea?
That does seem to be a popular option for people around here who have the right matrilineage for it.