Although it explicitly mentions the “long term risk of non-aligned Artificial General Intelligence”, the specific actions it recommends are even more vague and non-binding (“assess risks”, “work to understand”, etc.).
The fact that the governments of major countries are starting to address AI X-risk is both joyous and frightening:
at least something is being done at the national level to address the risk, which might be better than nothing
if even the comatose behemoth of the gov has noticed the risk, then AGI is indeed much closer than most people think
If even the comatose behemoth of the gov has noticed the risk, then AGI is indeed much closer than most people think.
Reasoning doesn’t work like that. The information flows almost entirely from subtle hints in reality, to people like MIRI, and then to the government. Maybe update toward the government being slightly less comatose, or toward MIRI having a really good PR team.
Once we assume that governments are less on the ball than MIRI, and we can see what MIRI says directly, the governments’ actions tell us almost nothing about AI.
It’s disappointing because China’s high degree of centralization and disregard for privacy, despite all their drawbacks, would at least offer some major advantages in combating AI risk. But from the wording of this document I don’t get the sense that China is seriously considering AI risk as a threat to its national security.
A serious attempt would look more like “put in place a review structure that identifies and freezes all AI research publications with potentially serious implications for AI risk, and turns them into state secrets if necessary”.
The fact that the governments of major countries are starting to address AI X-risk is both joyous and frightening
As far as I can tell, this is simply not true—this is not what it looks like for a government to be genuinely concerned with a problem, even if it’s just a small bit of concern. This is not how things get done in China. If you’ve delved into Chinese bureaucratic texts before, this is what their version of a politically correct, hollow fluff piece looks like.
This is not-even-wrong-level hollow, equivalent to “do good things, don’t do bad things”.
I agree, it could’ve been much better. But AFAIK it’s the least hollow governmental AI X-risk policy so far.
I would classify the British National AI Strategy as the second best.