I don’t think it’s very insular as such. Many of the people here are highly active elsewhere as well. There are multiple organizations involving people here who are active in various communities, but this is not their hub for coordination. There are a few AI safety organizations, addressing existential risk is an EA cause area, and there are various other projects.
One major problem is that most people here, going by their posts, are terrible at marketing compared with their competence in other areas. This is also true in most other communities, but for various reasons the lack of that skill set is especially visible here. (I absolutely include myself in this.)
I wish they were just terrible at marketing. It’s worse: there’s an active anti-marketing, anti-rhetoric bias here. I think that in trying so hard to (rightly!) escape the corrosive effect of rampant signalling on most people’s epistemology, LessWrong has swung the pendulum the other way and become afraid of signalling in general. I think nothing short of a vehement marketing campaign aimed directly and specifically at groups on all sides of the political and even religious spectrum, across multiple countries (and not just English-speaking ones: Europe, India, and China too), has any chance of improving the rationality of industrialized society enough to slow down AI advancement.
Also, btw, when I say “this is an insular community” I mean the entire EA sphere. Most people on the planet with the resources to significantly affect the world’s trajectory have never heard of effective altruism or AI safety to begin with. That’s bad.