The argument that concern about future AI risks distracts from current AI problems does not hold up under direct analysis, since the two sets of concerns can complement each other rather than compete for attention.
The real motivation behind this argument may be an implicit competition over group status and political influence, with endorsements of certain advocates seen as wins or losses.
Advocates for AI safety and those addressing current harms are not necessarily opposed and could find common ground on areas such as interpretability.
AI safety advocates should avoid framing their work as more important than current problems or arguing that resources should shift away from them, as this antagonizes potential allies.
Both future risks and current harms deserve consideration and efforts to address them can occur simultaneously rather than as a false choice.
Concern over future AI risks comes from people across a diverse range of political ideologies, not just tech elites, showing it is not a partisan issue.
Cause prioritization, which aims to quantify and compare issues, can seem offensive but is intended to direct efforts where they will have the greatest positive impact.
Rationalists concerned with AI safety also care about other, less consequential issues, showing that people can support multiple related causes at once.
Framing debates as zero-sum competitions undermines potential for cooperation between groups with aligned interests.
Building understanding and alliances across different advocacy communities could help maximize progress on AI and its challenges.
https://www.lesswrong.com/posts/uA4Dmm4cWxcGyANAa/x-distracts-from-y-as-a-thinly-disguised-fight-over-group