If the answer were obvious, a lot of other people would already be doing it. Your situation isn’t all that unique. (Congrats, tho.)
Probably the best thing you can do is raise awareness of the issues among your followers.
But beware of making things worse instead of better—not everyone agrees with me on this, but I think ham-handed regulation (state-driven regulation is almost always ham-handed) or fearmongering could induce reactions that drive leading-edge AI research underground or into military environments, where the necessary care and caution in development may be less than in relatively open organizations, especially those with reputations to lose.
The only things now incentivizing AI development in (existentially) safe ways are the scruples and awareness of those doing the work, and relatively public scrutiny of what they’re doing. That may be insufficient in the end, but it is better than if the work were driven to less scrupulous people working underground or in national-security-supremacy environments.
Have you elaborated this argument? I tend to think a military project would be a lot more cautious than move-fast-and-break-things Silicon Valley businesses.
The argument that orgs with reputations to lose might start being careful once AI becomes actually dangerous, or even just autonomous enough to be alarming, is important if true. Most folks seem to assume they'll just forge ahead until they succeed and a misaligned AGI gets loose.
I’ve made an argument that orgs will be careful to protect their reputations in System 2 Alignment. I think this will be helpful for alignment but not enough.
Government involvement early might also reduce proliferation, which could be crucial.
It’s complex. Whether governments will end up controlling AGI is an important and neglected question.
Advancing this discussion seems important.