Given that a high-stakes, all-out arms race for frontier foundation AGI models is heating up between the major powers, and Meta's public models are trailing, it doesn't seem clear at all that open sourcing them is net safety-negative. One could argue that the benefits of wide access for safety research, along with tilting the world towards multi-polar scenarios, outweigh the (more minimal) risks.
I agree it is not clear whether it is net positive or negative that they open source the models. Here are the main arguments for and against that I could think of:
Pros of open sourcing models
- Gives AI alignment researchers access to smarter models to experiment on
- Decreases revenue for leading AI labs such as OpenAI and Google, since people can use open source models instead.
Cons of open sourcing models
- Capability researchers can run better experiments on how to improve capabilities.
- The open source community could develop code that makes training and inference faster, indirectly enhancing capability development.
- Better open source models could lead to more AI startups succeeding, which might lead to more AI research funding. This seems like a stretch to me.
- If Meta shared any meaningful improvements in how to train models, that would of course directly contribute to other labs' capabilities, but Llama doesn't seem that innovative to me. I'm happy to be corrected if I'm wrong on this point.
^ All good points, but I think the biggest concern here is the policy of sharing weights continuing into the future with more powerful models.