Instead of creating new ingroups and outgroups and tribal signifiers to enforce them, we should focus on careful truth-seeking. Some mythologies and terms that engage our more primal instincts can be useful, as when Scott introduced “Moloch”, but others are much more likely to be harmful. “Orthodox vs Reform” seems like a purely harmful one, useful only for enforcing further division and tribal warfare.
To summarize, in this post Aaronson:
Reinforces the idea that AI safety is a religion.
Creates new terminology to try to split the field into two ideological groups.
Chooses terminology that paints one side as “old, conforming, traditional” and the other as, to quote Wiktionary, “the change of something that is defective, broken, inefficient or otherwise negative”.
Immediately adopts the tribal framing he has created, already identifying as “We Reform AI-riskers”, to quote him directly.
This seems like a powerful approach for waging ideological warfare. It is not constructive for truth-seeking.
This is wonderfully written, thank you.
I do worry that “further division and tribal warfare” seems the default, unless there’s an active effort at reconciliation.
Sam Altman (CEO of OpenAI) tweets things like:
LessWrong compares OpenAI with Philip Morris and, in general, seems very critical of OpenAI: “I’ve seen a lot more public criticism lately”.
I doubt that this strong division is actually good, and it might have positive EV to move in a more cooperative direction and try to lower the temperature and divisiveness.