This overall seems like good news, with the exception of commitment 8) in the associated fact sheet:
8) Develop and deploy frontier AI systems to help address society’s greatest challenges
Companies making this commitment agree to support research and development of frontier AI systems that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats. Companies also commit to supporting initiatives that foster the education and training of students and workers to prosper from the benefits of AI, and to helping citizens understand the nature, capabilities, limitations, and impact of the technology.
This seems like a proactive commitment to build frontier models, which seems to me like the most relevant dimension (I don’t think the safety commitments will make a huge difference in whether systems of a certain capability level will actually kill everyone), and it actively commits these companies to develop things in this space.
My guess is it doesn’t make a huge difference, since the other commitments are fuzzy enough that any company that wanted to slow down could justify doing so by saying it needed to in order to meet the other 7 commitments. Still, the thing I most want to see is public commitments to slow down, and this does feel like a missed opportunity to make one.
I largely agree. But note that

frontier AI systems that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats

doesn’t necessarily mean scary general agents. E.g. “early cancer detection and prevention” is clearly non-scary, and in DeepMind’s recent blog post on AI for climate change mitigation the examples given are weather forecasting, animal behavior forecasting, and energy efficiency.