As Katja points out in the OP: I would like to see the AI industry solve all disease, create novel art forms, and take over the world. I would like it to happen in a safe way that does not literally kill everyone. This is not the same as being in a zero-sum game with the industry.
I agree with this, of course. But the issue is that powerful people can jump to conclusions about AI safety on a six-hour timeline, whereas the AI industry converging on an understanding of alignment is more like a six-year timeline. If AI safety is the #1 public-opinion threat to the AI industry at any given time, or appears that way, then that could result in AI safety being marginalized for decades.
This system revolves around a very diverse mix of reasonable and unreasonable people. What I’m getting at is that it’s a very delicate game, and there’s no gentle way to approach “slowing down AI”: trying to impede the government’s and military’s top R&D priorities is basically hitting the problem with a sledgehammer. And it can hit back, orders of magnitude harder.
I didn’t realize the US military was secretly authoring all of the actually important R&D happening at DeepMind and OpenAI?