These arguments prove too much; you could apply them to pretty much any technology (e.g. self-driving cars, 3D printing, reusable rockets, smart phones, VR headsets...).
I suppose my argument has an implicit, “current forecasts are not taking these arguments into account.” If people actually were taking my arguments into account, and still concluding that we should have short timelines, then this would make sense. But, I made these arguments because I haven’t seen people talk about these considerations much. For example, I deliberately avoided the argument that according to the outside view, timelines might be expected to be long, since that’s an argument I’ve already seen many people make, and therefore we can expect a lot of people to take it into account when they make forecasts.
I agree that the things you say push in the direction of longer timelines, but there are other arguments one could make that push in the direction of shorter timelines
Sure. I think my post is akin to someone arguing for a scientific theory. I’m just contributing some evidence in favor of the theory, not conducting a full analysis for and against it. Others can point to evidence against it, and overall we’ll just have to sum over all these considerations to arrive at our answer.
I definitely agree that our timelines forecasts should take into account the three phenomena you mention, and I also agree that e.g. Ajeya’s doesn’t talk about this much. I disagree that the effect size of these phenomena is enough to get us to 50 years rather than, say, +5 years to whatever our opinion sans these phenomena was. I also disagree that overall Ajeya’s model is an underestimate of timelines, because while indeed the phenomena you mention should cause us to shade timelines upward, there is a long list of other phenomena I could mention which should cause us to shade timelines downward, and it’s unclear which list is overall more powerful.
On a separate note, would you be interested in a call sometime to discuss timelines? I’d love to share my overall argument with you and hear your thoughts, and I’d love to hear your overall timelines model if you have one.
Matthew, one general comment. Most models of AI adoption, once the enabling conditions are reached, are exponential, so your forecast model is flawed in this respect.
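For concreteness, the "exponential adoption" claim usually refers to something like a logistic curve, which grows exponentially in its early phase before saturating. A minimal sketch (the parameter values here are illustrative assumptions, not figures from the discussion):

```python
import math

def adoption(t, k=1.0, t_mid=10.0):
    """Logistic adoption curve: fraction of the market adopted at time t.
    k is the growth rate, t_mid the year of 50% adoption (both illustrative)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# Early on, adoption grows by a near-constant factor (~e**k) per year,
# i.e. exponentially, before flattening out near 100%:
early = [adoption(t) for t in range(4)]
ratios = [early[i + 1] / early[i] for i in range(3)]
print([round(r, 3) for r in ratios])  # each ratio is close to e ~= 2.718
```

The point of the sketch: once the curve starts moving, each year of delay costs a roughly constant multiplicative factor of market share, which is why a forecast that treats adoption as slow and linear understates the pressure on laggards.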
AI will take over in an area once it is (a) robust (mostly software robustness) and (b) able to solve the general task with few edge cases. Arguably it has already ‘taken over’ the space of board games, in the sense that if solving board games had economic value (the way loading a truck has economic value), all players would already be AI.
Once the conditions of (robustness, general task) are satisfied (that is, your arguments #1 and #3), there is a key fact you are missing:
Regulatory agencies and people don’t have a choice but to adopt. That is, it’s not a voluntary act: they either adopt or they go broke/cease to matter. More generally: if a country has a (robust, general) AI agent that can drive cars, it can immediately save the cost of paying several million people. This means that any nation that ‘slows down’ adoption via regulation becomes uncompetitive on the global scale, and any individual firm that ‘slows down’ adoption goes broke, because its competitors can sell services below its marginal cost.
Now, today there are problems. We don’t yet have a good framework to prove robustness. It’s actually a difficult software engineering task in itself. It may ultimately turn out to be a harder problem than solving the general AI problem itself...*
Point is, your argument reduces to: “I believe it will take more than 50 years for a (robust, general) TAI to be developed such that it exists in at least one place and is owned by an entity who intends to release it.”
And you might be right. But it all hinges on your second argument.
*arguably the entire “alignment” problem is really a subset of “robustness”.
Regulatory agencies and people don’t have a choice but to adopt. That is, it’s not a voluntary act: they either adopt or they go broke/cease to matter. More generally: if a country has a (robust, general) AI agent that can drive cars, it can immediately save the cost of paying several million people. This means that any nation that ‘slows down’ adoption via regulation becomes uncompetitive on the global scale, and any individual firm that ‘slows down’ adoption goes broke, because its competitors can sell services below its marginal cost.
This argument seems to prove too much. If regulators absolutely cannot regulate something because they will get wiped out by competitors, why does overregulation exist in any domain? Taking nuclear power as an example, it is almost certainly true that nuclear could be 10x cheaper than existing power sources with appropriate regulation, yet no country has done this.
The whole point is that regulators DO NOT respond to economic incentives, because the incentives apply to those being regulated, not to the regulators themselves.
Nuclear power is easily explained. It doesn’t fit the (robust, general) heuristic I mentioned above, as it isn’t robust. Nor does it fit a third implied parameter: economic gain. Implicitly, a robust and general AI system provides economic gain because the cost of the compute electronics and the energy to run them is far less than the cost of the upkeep of a human being. (Initially this would be true only in rich countries, but as compute electronics become commodified it would soon be true almost everywhere.)
Nuclear power, jetpacks, flying cars, Moon bases—most failed future predictions just fail the economic gain constraint.
Nuclear power is not 10x cheaper. It carries large risks, so some regulation cannot be skipped. I concur that there is some unnecessary regulation, but the evidence, such as the linked source, just doesn’t leave “room” for a 10x gain. Currently the data suggest nuclear doesn’t provide an economic gain over natural gas unless carbon emissions are priced in, and they are not in most countries.
The other items I mention also lack an economic gain. Jetpacks/flying cars are trivial: the value of the saved time is less than the combined cost of the fuel guzzled by a VTOL, the capital cost and wear and tear on a personal VTOL, and externalities like noise and crashes. Wealthy individuals whose time is that valuable do have VTOLs, namely helicopters, since they also value their continued existence, and a helicopter piloted by a professional is safer than a personal jetpack.
A Moon base is similar: the scientific knowledge about a dead rock doesn’t “pay rent” sufficient to justify the cost of sending humans there.
Nuclear power is not 10x cheaper. It carries large risks, so some regulation cannot be skipped. I concur that there is some unnecessary regulation, but the evidence, such as the linked source, just doesn’t leave “room” for a 10x gain. Currently the data suggest nuclear doesn’t provide an economic gain over natural gas unless carbon emissions are priced in, and they are not in most countries.
I recommend reading the Roots of Progress article I linked to in the post. Most of the reason why nuclear power is high cost is because of the burdensome regulations. And of course, regulation is not uniformly bad, but it seems from the chart Devanney Figure 7.11 in the article that we could have relatively safe nuclear energy for a fraction of its current price.
Ok, I looked at the chart. It seems to show that in the 1970s the cost per kW of nuclear capacity hit a trough at about $1 a watt (was this corrected for inflation?), and that the cost of new capacity has since soared to ridiculous levels.
We still have many of those reactors built in the 1970s. They are listed in the Lazard data above as ‘paid for’ reactors, at $29 a megawatt-hour. Solar goes as low as $31 a megawatt-hour, and natural gas $28 in the same ‘paid for’ case.
So it appears that no, actually, we cannot get energy for 10x lower than the current price. (I think as a rational agent you need to ‘update’ now or prove that this statement is false?)
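To make the arithmetic behind this explicit, here is a minimal sanity check. The per-MWh figures are the ones quoted above from the Lazard comparison; the "10x target" construction is my own illustration, not from the original discussion:

```python
# Sanity check: can deregulation alone make nuclear "10x cheaper than
# existing power sources"? Figures are the per-MWh numbers quoted above
# (Lazard 'paid for' comparison); the rest is illustrative.
paid_for_nuclear = 29.0  # $/MWh for reactors whose capital is written off
solar_low = 31.0         # $/MWh, low end for solar
gas_paid_for = 28.0      # $/MWh, natural gas in the same 'paid for' case

# "10x cheaper than existing power" measured against the cheapest rival:
target = min(solar_low, gas_paid_for) / 10.0

# A 'paid for' reactor has ~zero remaining capital cost, so $29/MWh is
# roughly an operating-cost floor. Removing construction regulation
# cannot push the price below that floor, let alone to the 10x target.
print(f"operating floor ${paid_for_nuclear}/MWh vs 10x target ${target}/MWh")
assert paid_for_nuclear > target
```

The design of the check matters: because the ‘paid for’ case strips out construction (and hence most regulation-driven) costs, whatever price remains is a lower bound that no regulatory reform can get under.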
My other point was that not all nations have the same regulatory scheme. If other nations could build reactors at a fraction of the price, they would benefit, and China has a strong incentive if this were true: it has a major pollution problem with coal. But “the industry has not broken ground on a new plant in China since late 2016”.
So this suggests one of two things. Either the inefficient and unproductive regulatory scheme you mention is so ‘viral’ that even a country that can push through other major changes overnight can’t help but make nuclear unproductive through overregulation. Or there isn’t actually an opportunity for 10x lower costs: a nuclear reactor is complicated and made of expensive parts produced in tiny quantities that are hard to reduce in cost; even after a reactor is paid for, it still requires a huge crew for its care and feeding; and the radiation fields, whenever there is a leak or a need to work on certain areas of the plant, make it far more difficult and expensive to work on, even though the risk to the general public may be very low. Oh, and the government has an incentive to carefully monitor every nuclear reactor just to make sure someone isn’t making plutonium.
Back to the original topic of AI timelines: with AI systems there isn’t a 10-year investment, or a need for specialized parts made only in Japan. It isn’t just software you need; AI systems do require specialized compute platforms, but there are multiple vendors for these, and most countries will be able to buy as many as they want. Therefore, if a country can be lax in regulating AI systems and get “10x lower labor costs” by having AI systems do labor instead of humans, it gets an economic benefit. So unless regulatory regimes are so “viral” that they take over every nation and prevent this everywhere, you will see AI grow like wildfire in certain places, and everyone else will be forced to relax their rules or be left behind.
As a simple example, if China banned near-term AI systems or regulated them too severely, but Australia allowed them, Australia could use them to mine resources in deep mines too dangerous for humans and then build self-replicating factories. Within 5-10 years of exponential growth, its industrial output would exceed all of China’s, with just 25 million people plus the few million AI specialists it might need to bring in if they can’t work remotely [due to regulations].
So either China has to allow them to ‘keep up’ or stop being a superpower.
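The 5-10 year figure above can be illustrated with a toy doubling-time calculation. All parameter values below are hypothetical assumptions for the sketch, not figures from the thread:

```python
import math

# Toy model: AI-run, self-replicating industry grows exponentially while
# the incumbent's output stays static. All numbers are hypothetical.
australia_output = 1.0  # normalized starting industrial output
china_output = 20.0     # assume China starts ~20x larger (illustrative)
doubling_time = 1.0     # assumed doubling time in years with AI labor

# Years until the exponentially growing output exceeds the static one:
years = math.log2(china_output / australia_output) * doubling_time
print(f"crossover after ~{years:.1f} years")
```

With these assumed inputs the crossover is about 4.3 years; with a more conservative 2-year doubling time it is about 8.6 years, which is where a 5-10 year window like the one claimed above comes from. The conclusion is sensitive mainly to the doubling time, not the 20x starting gap, since the gap enters only logarithmically.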
We still have many of those reactors built in the 1970s. They are listed in the Lazard data above as ‘paid for’ reactors, at $29 a megawatt-hour. Solar goes as low as $31 a megawatt-hour, and natural gas $28 in the same ‘paid for’ case.
Your claim here is that under optimal regulatory policy we could not possibly do better today than with 1970s technology?
My other point was that not all nations have the same regulatory scheme. If other nations could build reactors at a fraction of the price, they would benefit, and China has a strong incentive if this were true: it has a major pollution problem with coal. But “the industry has not broken ground on a new plant in China since late 2016”.
from the article you linked
The 2011 meltdown at Japan’s Fukushima Daiichi plant shocked Chinese officials and made a strong impression on many Chinese citizens. A government survey in August 2017 found that only 40% of the public supported nuclear power development.
It seems perfectly reasonable to believe China too can suffer from regulatory failure driven by public misconception. In fact, given its state-driven economy, wouldn’t we expect market forces to be even less effective at finding low-cost solutions than in Western countries? Malinvestment seems to be a hallmark of the current Chinese system.
Your claim here is that under optimal regulatory policy we could not possibly do better today than with 1970s technology?
Yes, I do claim that. Even if the reactors were ‘free’ they would still not be better than solar/wind. So even if the regulatory agencies raised the accepted radiation doses by many orders of magnitude and stopped requiring any protections at all (OK to build a nuclear reactor in a warehouse), I am saying it wouldn’t be cost-effective.
If only we had a real world example of such a regime. Oh wait, we do.