There are a lot of really good reasons why someone would avoid touching the concept of “suppressing AI research” with a ten-foot pole; depending on who they are and where they work, it’s tantamount to advocating for treason.
I agree this is an ongoing dynamic, and I’m glad you brought it up, but I have to disagree with “good reasons”. Something being suppressed by the state does not make it false. If anything it is good reason to believe it might be true.
something as radical as playing a zero-sum game against the entire AI industry
As Katja points out in the OP: I would like to see the AI industry solve all disease, create novel art forms, and take over the world. I would like it to happen in a safe way that does not literally kill everyone. This is not the same as being in a zero-sum game with the industry.
I agree on this, of course. But the issue is that powerful people can jump to conclusions on AI safety on a 6-hour timeline, whereas the AI industry converging on an understanding of alignment is more like a 6-year timeline. If AI safety is the #1 public-opinion threat to the AI industry at any given time, or appears that way, then that could result in AI safety being marginalized for decades.
This system revolves around a very diverse mix of reasonable and unreasonable people. What I'm getting at is that it's a very delicate game, and there's no gentle way to approach "slowing down AI": trying to impede the government and military's top R&D priorities is basically hitting the problem with a sledgehammer. And it can hit back, orders of magnitude harder.
I didn’t realize the US military was secretly authoring all of the actually important R&D happening at DeepMind and OpenAI?