Thanks for the post—I think there are some ways heavy regulation of AI could be very counterproductive or ineffective for safety:
If AI progress slows down enough in countries where safety-concerned people are especially influential, then these countries (and their companies) will fall behind in international AI development. This would eliminate much or most of safety-concerned people's opportunities to influence how AI goes.
If China "catches up" to the US in AI (due to US over-regulation) at a time when AI looks increasingly important economically and militarily, that could easily motivate US lawmakers to hit the gas on AI. At best this would undo some of the earlier slowdown; at worst it would spark an international race to the bottom on AI.
Also, you mention,
The community strategy (insofar as there even is one) is to bet everything on getting a couple of technical alignment folks onto the team at top research labs in the hopes that they will miraculously solve alignment before the mad scientists in the office next door turn on the doomsday machine.
From conversation, my understanding is that some governance/policy folks fortunately have (somewhat) more promising ideas than that. (This doesn't show up much on this site, partly because people here tend to be less interested in governance, these professionals tend to be busy, the ideas are fairly rough, and getting the optics right can matter more for governance ideas.) I hear there's some work aimed at posting about some of these ideas—until then, chatting with people (e.g., by reaching out at conferences) might be the best way to learn about them.