I find my personal experience in accord with the evidence from Vladimir’s thread. I’ve gotten countless clarity-of-mind benefits from Overcoming Bias’ x-rationality, but practical benefits? Aside from some peripheral disciplines, I can’t think of any.
Well, it did ultimately help you make SlateStarCodex and Astral Codex Ten successful, which provided a haven for non-extremist thought to thousands of people. And since the latter earned hundreds of thousands in annual revenue, you were able to create the ACX grants program that will probably make the world a better place in various small ways. Plus, people will look back on you as one of the world’s most influential philosophers.
As for me, I hope to make various EA proposals informed by rationalist thought, especially relating to the cause area of “improving human intellectual efficiency”, e.g. maybe a sort of evidence-Wikipedia. Mind you, the seeds of this idea came before I discovered rationalism, and “Rationality A-Z” was, for me, just adding clarity to a worldview I had already developed vaguely in my head.
But yes, I’m finding that rationalism isn’t much use unless you can spread it to others. My former legal guardian recently died with Covid, but my anti-vax father believes he was killed by a stroke that coincidentally happened at the same time as his Covid infection. Sending him a copy of Scout Mindset was a notably ineffective tactic; in fact, it turned out that one month before I sent him the book, the host and founder of his favorite source of anti-vax information, Daystar, died of Covid. There’s conviction—and then there’s my dad. My skill at persuading this kind of person is virtually zero even though I have lots of experience attempting it, and I think the reason is that I have never been the kind of person that these people are, so my sense of empathy fails me, and I do not know how to model them. Evidence has no effect, and raising the discussion to the meta-level is extremely hard at best. Winnifred Louis suggests (among other things) that people need to hear messages from their own tribe, and obviously “family” is not the relevant tribe in this case! So one of the first things I sent him was pro-vax messaging from the big man himself, Donald Trump… I’m not sure he even read that email, though (he has trouble with modern technology, hence the paper copy of Scout Mindset).
Anyway, while human brain plasticity isn’t what we might like it to be, new generations are being born all the time, and I think on the whole, you and this community have been successful at spreading rationalist philosophy. It is starting to become clear that this is having an effect on the broader world, particularly on the EA side of things. This makes sense! LessWrong is focused on epistemic rationality and not so much on instrumental rationality, while the EA community is focused on action; drawing accurate maps of reality isn’t useful until somebody does something with those maps. And while the EA community is not focused on epistemic rationality, many of its leaders are familiar with the most common ideas from LessWrong, and so rationalism is indeed making its mark on the world.
I think a key problem with early rationalist thought is a lack of regard for coordination and communities. Single humans are small, slow, and intellectually limited, so there is little that a single human can do with rationalism all by zimself. Yudkowsky envisioned “rationality dojos” where individuals would strengthen their rationality—which is okay I guess—but he didn’t present a vision of how to solve coordination problems, or how to construct large new communities and systems guided in their design by nuanced rational thought. Are we starting to look at such things more seriously these days? I like to think so.