Many rationalists do follow something resembling the book’s advice.
CFAR started out with too much emphasis on lecturing people, but quickly noticed that wasn’t working, and pivoted to more emphasis on listening to people and making them feel comfortable. This is somewhat hard to see if you only know the rationalist movement via its online presence.
Eliezer is far from being the world's best listener, and that likely contributed to some failures in promoting rationality. But he did attract and encourage people who compensated for his shortcomings in CFAR's in-person promotion of rationality.
I consider it pretty likely that CFAR’s influence has caused OpenAI to act more reasonably than it otherwise would act, due to several OpenAI employees having attended CFAR workshops.
It seems premature to conclude that rationalists have failed, or that OpenAI’s existence is bad.
Sorry, it doesn’t look like the conservatives have caught on to this kind of approach yet.
That's not consistent with my experiences interacting with conservatives. (If you're evaluating conservatives via broadcast online messages, I wouldn't expect you to see anything more than tribal signaling.)
It may be uncommon for conservatives to use effective approaches to explicitly changing political beliefs. That's partly because politics is less central to conservatives' lives. You'd likely reach a more nuanced conclusion if you compared that with how Mormons persuade people to join their religion, which incidentally persuades them to become more conservative.
Any source you would recommend for learning more about the specific Mormon practices you're referring to?

No. I found a claim of good results here. Beyond that I'm relying on vague impressions from very indirect sources, plus fictional evidence such as the movie Latter Days.
Fair enough; I haven't interacted with CFAR at all. And the "rationalists have failed" framing is admittedly partly bait to keep you reading, partly me parroting/interpreting how Yudkowsky appears to see his efforts towards AI Safety, and partly me projecting my own AI anxieties out there.
The Overton window around AI has also been shifting so quickly that this article may already be kind of outdated. (Although I think the core message is still strong.)
Someone else in the comments pointed out the religious proselytization angle, and yeah, I hadn't thought about that, and apparently neither had David. That line was basically a throwaway joke lampshading how all the organizations discussed in the book are left-leaning; I don't endorse it very strongly.