I think a point to remember is that, to a large extent, this dynamic is driven by a sort of winner-take-all effect: thanks to the internet, if your issue isn't getting attention, that can essentially be the death knell of your movement. To be a little blunt, AI safety on LessWrong was extraordinarily successful at getting attention, and combined with implicit/explicit claims that AI safety was far more important than any other issue, that meant other issues like AI ethics and AI bias, and their movements, lost a lot of their potency and oxygen. So AI safety is predictably getting criticism for distracting from their issues.
Cf. habryka’s observation that strong action/shouting is basically the only way to get heard; otherwise the system of interest neutralizes the concern and continues on much as it did before.
Do you think climate change has sucked all the oxygen from health care reform? What about vice versa? Do you think the BLM movement sucked all the oxygen from civil asset forfeiture? If not, why not?
No, not in these cases, mostly because they are independent movements, so we aren’t dealing with any potential conflict points. Very critically, no claim was made about the relative importance of each cause, which also reduces many of the frictions.
Even assuming AI safety people were right to imply that their cause was far more important than others, doing so, especially in public, would probably make other people in the AI space rankle at it and call it a distraction, because it means either that their projects actually are less important than our projects, or at the very least that their projects seem less important than our projects.
There are also more general cases where one movement’s bad decisions can sometimes break other movements, though.
As in the OP, I strongly agree with you that it’s a bad idea to go around saying “my cause is more important than your cause”. If anyone reading this right now is thinking “yeah, maybe it pisses people off, but it’s true, pffffft”, then I would note that rationalist-sphere people who are bought into AI x-risk are nevertheless generally quite capable of caring about things that are unimportant compared to AI x-risk from a Cause Prioritization perspective: YIMBY issues, how much the FDA sucks, the replication crisis, price gouging, etc. If they got judged harshly for caring about FDA insanity when the future of the galaxy is at stake from AI, it would be pretty annoying, so by the same token they shouldn’t judge others harshly for caring about whatever causes they happen to care about. (But I’m strongly in favor of people (like me) who think AI x-risk is real and high trying to convince other people of that.)