I wish they were just terrible at marketing. It’s worse. There’s an active anti-marketing, anti-rhetoric bias here. I think that in trying so hard to (rightfully!) escape the corrosive effect of rampant signalling on most people’s epistemology, LessWrong has swung the pendulum the other way and become afraid of signalling in general. I think nothing short of a vehement marketing campaign aimed directly and specifically at groups on all sides of the political and even religious spectrum, across multiple countries (and not just English-speaking ones: Europe, India, and China too), has any chance of improving the rationality of industrialized society enough to slow down AI advancement.
Also btw, when I say “this is an insular community” I mean the entire EA sphere. Most people on the planet with the resources to affect the world’s trajectory significantly have never heard of effective altruism or AI safety to begin with. That’s bad.