Here’s an opinion on this that I haven’t seen voiced yet:
I have trouble being excited about the ‘rationalist community’ because it turns out it’s actually the “AI doomsday cult”, and never seems to get very far away from that.
As a person who thinks we have far bigger fish to fry than impending existential AI risk—like problems with how irrational most people everywhere (including us) are, or how divorced rationality is from our political discussions / collective decision-making process, or how climate change or war might destroy our relatively-peaceful global state before AI even exists—I find that I have little desire to try to contribute here. Being a member of this community seems to require buying into the AI-thing, and I don’t, so I don’t feel like a member.
(I’m not saying that AI stuff shouldn’t be discussed. I’d like it to dominate the discussion a lot less.)
I think this community would have an easier time keeping members, not alienating potential members, and getting more useful discussion done, if the discussions centered more on rationality and effectiveness in general, instead of the esteemed founder’s pet obsession.
Being a member of this community seems to require buying into the AI-thing, and I don’t, so I don’t feel like a member.
I don’t think it’s true that you need to buy into the AI-thing to be a member of the community, and so I think the fact that it seems that way is a problem.
But I think you do need to be able to buy into the non-weirdness of caring about the AI-thing, and that we may need to be somewhat explicit about the difference between those two things.
[This isn’t specific to AI; I think this holds for lots of positions. Cryonics is probably an easy one to point at that disproportionately many LWers endorse but is seen as deeply weird by society at large.]
If you’re going to downvote this, at least say why.
(Hm, I just learned that LessWrong doesn’t let you delete comments? That’s strange.)