And furthermore, the tech companies have recently decided that they don’t know how to control the AI and begged the US government to take control. Rationalists don’t really expect the government to do well with this on its own, but I think there is hope that AI safety researchers could produce some tools that the US government could then mandate the use of? But that would also mean that “doomers” are giving the US government a lot of power?
I think you mischaracterise what’s happening a lot. Tech companies don’t “beg the US government to control AI”, they beg it to regulate the industry. That’s a very big difference. They haven’t yet said anything about the control of AI once it’s built and (presumably) “aligned”, apart from well-sounding PR phrases like “we will increasingly involve input from more people about where do we take AI”. In reality, they just don’t know yet who should “control” AI (if anyone), how, and whose “values” to align it with. They hope that the “alignment MVP”, a.k.a. an AGI that is an aligned AI safety and alignment scientist, will actually tell them the “right” answers to these questions (as per OpenAI’s “superalignment” agenda).