From my perspective, it’s more like the opposite; if alignment were to be solved tomorrow, that would give the AI policy people a fair shot at getting it implemented.
I’m unsure what the government can do that DeepMind or OpenAI (or someone else) couldn’t do on their own. Maybe you’re imagining a policy that forces all companies to build aligned AIs according to the solution, but enforcement won’t be perfect, and an unaligned AGI could still kill everyone (or it could be built somewhere else).
The first thing you do with a solution to alignment is build an aligned AGI to prevent all x-risks. I don’t see how routing through the government helps that process(?)