I think it’s reasonable to believe that the US government takes AI issues more seriously than it took COVID. For example, AI is seen as more of a national security issue (especially with respect to China), and it’s less politicized.
I’m not sure that’s helpful from a safety perspective. Does it really count as a win if the US unleashes the unfriendly self-improving monster first, in an effort to “beat” China?
From my reading and listening on the topic, the US government does not take AI safety seriously in the sense we use here on LessWrong. Its concerns around AI safety have more to do with things like ensuring that datasets aren’t biased, so that an AI doesn’t produce accidentally racist outcomes. But thinking about AI safety to ensure that a recursively self-improving optimizer doesn’t annihilate humanity on its way to some inscrutable goal? I don’t think that’s a big focus of the US government. If anything, that outcome seems to be treated as an acceptable risk of keeping the US ahead of China in some kind of imagined AI arms race.