Fair enough; I haven’t interacted with CFAR at all. And the “rationalists have failed” framing is admittedly partly bait to keep you reading, partly parroting (or interpreting) how Yudkowsky seems to see his own efforts toward AI safety, and partly me projecting my AI anxieties.
The Overton window around AI has also been shifting so quickly that this article may already be kind of outdated. (Although I think the core message is still strong.)
Someone else in the comments pointed out the religious-proselytization angle, and yeah, I hadn’t thought about that; apparently neither had David. That line was basically a throwaway joke lampshading the fact that all the organizations discussed in the book are left-leaning, and I don’t endorse it very strongly.
I don’t have an answer for you; you’ll have to chart your own path. I will say that I agree with your take on social media: it seems very peripheral-route-focused.
If you’re looking to do something practical on AI, consider a career-counseling organization like 80000 Hours. From what I’ve seen, they fall into some of the traps I mentioned here (they seem to think that trying to change people’s minds isn’t very valuable unless you actively want to become a political lobbyist), but overall they’re not bad at answering these kinds of questions.
Ultimately though, it’s your own life and you alone have to decide your path through it.