Policymakers do not know this. They know that someone is telling them this. And they certainly do not know that supporting this particular project will deliver the economic promises of AGI on the timescales they care about.
I feel differently here. It seems that a lot of governments have woken up to AI in the past few years and are putting it at the forefront of national strategies (see, e.g., the headline here). In the past year there has been a lot of movement in the regulatory space, but I still pick up undertones of ‘we realise that AI is going to be huge, and we want to establish global leadership in this technology’.
So going back to your nuclear example, I think the relevant question is: ‘What allowed policymakers to gain the necessary support to push stringent nuclear regulation through, even though nuclear power offered huge economic benefits?’ I think two things mattered:
1. It takes a significant amount of time, ~6-8 years, for a nuclear power plant to be built and begin operating (and even longer for it to break even). So whilst nuclear plants are economically attractive in the long term, it can be hard to garner support for the huge initial investment. To make this clearer, imagine if it took ~1 year to build a nuclear power plant and 2 years for it to break even; in that case, I think it would have been harder to push stringent regulation through.
2. There was a lot of irrational public fear about anything nuclear, due to power plant accidents, the Cold War and memories of nuclear weapon use during WWII.
With respect to AI, I don’t think (1) holds. That is, the economic benefits of AI will be far easier to realise than those of nuclear (you can train and deploy an AI system within a year, and likely break even a few years after that), meaning that policymaker support for regulation will be harder to secure.
(2) might hold; this really depends on the nature of AI accidents over the next few years and their impact on public perception. I’d be interested in your thoughts here.
It seems to me that governments now believe that AI will be significant, but not extremely advantageous.
I don’t think that many policymakers believe that AI could cause GDP growth of 20+% within 10 years. Maybe they think that powerful AI would add 1% to GDP growth rates, which is definitely worth caring about. But that wouldn’t be enough for the country that developed it to become the most powerful country in the world within a few decades, and it would be an incentive in line with those offered by other technologies that have been rejected.
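To make the compounding gap concrete, here is a rough back-of-the-envelope sketch with illustrative numbers of my own, reading ‘20+%’ as an annual growth rate:

$$1.01^{30} \approx 1.35 \qquad \text{vs.} \qquad 1.20^{10} \approx 6.2$$

An extra percentage point of annual growth compounds to roughly a 35% relative gain over 30 years, whereas 20% annual growth would make an economy over six times larger within a single decade. The first is worth caring about; only the second looks decisive for national power.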
The UK has AI as one of its “priority areas of focus” in its International Technology Strategy, along with quantum technologies, engineering biology, semiconductors and future telecoms. In the UK’s overall strategy document, ‘AI’ is mentioned 15 times, compared to ‘cyber’ (45), ‘nuclear’ (43), ‘energy’ (37), ‘climate’ (30), ‘space’ (17), ‘health’ (15), ‘food’ (8), ‘quantum’ (7), ‘green’ (6), and ‘biology’ (5). AI is becoming part of countries’ strategies, but I don’t think it’s at the forefront, and the UK government is more involved in AI policy than most.