I think I disagree with some of the claims in this post, and I’m mostly sympathetic to the points Akash raised in his comments. Relatedly, I’d like to see a more rigorous comparison between the AI safety community (especially its EA/Rationality parts) and movements in relevant reference classes, such as the climate movement.
That said, I think it’s reasonable to have a high prior that people taking ambitious actions in the world will end up seeking inappropriate levels of power, so it’s important to keep these concerns in mind.
In addition to your two “recommendations” of focusing on legitimacy and competence, I’d add two additional candidates:
1. Being careful about which de facto role models or spokespeople the AI safety community “selects”. It seems crucial to avoid another SBF.
2. Enabling currently underrepresented perspectives to contribute in well-informed, competent ways.