So, this isn’t exactly a critique of your work, more a meta-note about your choice of focus and the assumptions it implies. You have chosen to focus on leading AI companies and their behavior. This implicitly assumes that they are the only relevant force, and will remain so. I just want to point out that this isn’t my view.
I think the open source community is also a substantial source of risk, and one whose risk will grow faster than that of any responsible company. Indeed, a major risk from a company (e.g. Meta) is how much it contributes to progress in the open source community.
Why? Because misuse risk is a huge deal: think of AI being used to help create bioweapons. This is already a civilizational-scale risk, and it is getting rapidly worse. Eventually, we will also face AGI misalignment risks from broader and broader sets of actors. If all the major companies behave responsibly as individual actors, but don’t help slow the rate at which the open source community catches up, and if the government doesn’t figure out a way to regulate this… then we should expect all the same harms you might predict from an incautious AI lab. Just a few years later.
For more of my thoughts on this: https://www.lesswrong.com/posts/oJQnRDbgSS8i6DwNu/the-hopium-wars-the-agi-entente-delusion?commentId=8GSmaSiePJusFptLB