I mostly agree with this post, and I do think the AI safety movement should at least try to be more cooperative with other movements. However, I disagree with the claim in the comments section that AI safety shouldn't pick a political fight over open-source in the future.
(I agree it probably picked that fight too early.)
The reason is that there's a non-trivial chance that alignment is solvable for human-level AI systems via AI control, even if those systems are scheming, so long as the lab retains control over the AIs. A corollary is that you can't open-source/open-weight the model, since releasing the weights gives up exactly that control.
More prosaically, AI misuse can be a problem, and the key point here is that open-sourcing/open-weighting a model widens the set of people who can modify the AI, which unfortunately means the chance of misuse grows with each additional person able to do so.
So I do think there's a non-trivial chance that AI safety will eventually have to pay political costs to ban or severely restrict open-sourcing AI.