I think the headline and abstract for this article are misleading. As I read these predictions, one of the main reasons that “transformative AGI” is unlikely by 2043 is because of severe catastrophes such as war, pandemics, and other causes. The bar is high, and humanity is fragile.
For example, the headline 30% chance of “derailment from wars” is the estimate of wars so severe that they set back AI progress by multiple years, from the late 2030s to past 2043. One example would be a nuclear exchange between the USA and China. Presumably this would not set back progress on military AI, but it would derail civilian uses of AI such that “transformative AGI”, as defined, doesn’t come by 2043.
Humanity is on track to develop transformative AGI but before it does, World War III erupts, which kills off key researchers, snarls semiconductor supply chains, and reprioritizes resources to the war effort and to post-war rebuilding.
The authors squarely blame the development of AI technologies for much of the increased risk.
Progress in AGI will be destabilizing; therefore, conditional on successful progress in AGI, we expect the odds of war will rise.
Conditional on being on a trajectory to transformative AGI, we forecast a 40% chance of severe war erupting by 2042.
There’s also a 10% estimated risk of severe wars so destabilizing that they delay transformative AGI from the late 2030s to after 2100, or perhaps forever. An analogy would be if WW2 had been so severe that nobody was ever able to make tanks again.
Only if [wars] result in a durable inability to produce, or lack of interest in, transformative AGI will they matter on this timescale.
The discussions of regulation and pandemics are similarly terrifying. This is not a paper that should make anyone relax.
Using their “AGI Forecaster”: if there are no technical barriers, the risk of derailment makes the probability (of transformative AGI within 20 years) 37.7%; if there is no risk of derailment, the technical barriers make the probability 1.1%.
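(For concreteness, here is roughly where those two numbers come from in the paper’s multiplicative cascade. This is a sketch using approximately the paper’s headline conditional probabilities; the step names are paraphrased and the web app’s exact defaults may differ slightly.)

```python
import math

# Sketch of the paper's multiplicative cascade (step names paraphrased,
# probabilities approximately the paper's headline estimates).

# Technical/economic steps, each conditional on the previous ones succeeding
technical_steps = {
    "algorithms for transformative AGI": 0.60,
    "AGIs learn faster than humans": 0.40,
    "inference costs drop low enough": 0.16,
    "cheap, high-quality robots at scale": 0.60,
    "massively scaled chips and power": 0.46,
}

# "Avoid derailment" probabilities, conditional on otherwise being on track
derailment_avoidance = {
    "avoid derailment from regulation": 0.70,
    "avoid derailment from AI-caused delay": 0.90,
    "avoid derailment from wars": 0.70,
    "avoid derailment from pandemics": 0.90,
    "avoid derailment from severe depressions": 0.95,
}

p_technical = math.prod(technical_steps.values())       # ~1.1%  (the "no derailment risk" case)
p_no_derail = math.prod(derailment_avoidance.values())  # ~37.7% (the "no technical barriers" case)
p_overall = p_technical * p_no_derail                    # ~0.4%  (the paper's headline number)

print(f"technical barriers only: {p_technical:.1%}")
print(f"derailment risk only:    {p_no_derail:.1%}")
print(f"combined:                {p_overall:.2%}")
```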
I get the same numbers on the web app, but I don’t see how it relates to my comment. Can you elaborate?
If there are no technical barriers, they are estimating a 37.7% chance of transformative AGI (which they estimate is a 5 to 50% extinction risk once created) and a 62.3% chance of “derailment”. Some of the “derailments” are also extinction risks.
if there is no risk of derailment, the technical barriers make the probability 1.1%.
I don’t think we can use the paper’s probabilities this way, because technical barriers are not independent of derailments. For example, if there is no risk of severe war, then we should forecast higher production of chips and power. This means the 1.1% figure should increase.
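(A purely illustrative way to see the worry: if conditioning on “no severe war” also raised, say, the chips-and-power step, the technical-side product would rise with it. The bump from 0.46 to 0.60 below is invented for the sake of the example, not an estimate.)

```python
import math

# Hypothetical illustration of the dependence argument: bumping the
# chips-and-power step from 0.46 to 0.60 (an invented number) raises the
# "technical barriers only" product.
baseline_steps = [0.60, 0.40, 0.16, 0.60, 0.46]  # approx. the paper's technical steps
no_war_steps = [0.60, 0.40, 0.16, 0.60, 0.60]    # same steps with a hypothetical no-war bump

print(f"{math.prod(baseline_steps):.2%} -> {math.prod(no_war_steps):.2%}")  # ~1.06% -> ~1.38%
```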
Mostly I was responding to this:
As I read these predictions, one of the main reasons that “transformative AGI” is unlikely by 2043 is because of severe catastrophes such as war, pandemics, and other causes.
… in order to emphasize that, even without catastrophe, they say the technical barriers alone make “transformative AGI in the next 20 years” only 1% likely.
I don’t think we can use the paper’s probabilities this way, because technical barriers are not independent of derailments.
I disagree. The probabilities they give regarding the technical barriers (which include economic issues of development and deployment) are meant to convey how unlikely each of the necessary technical steps is, even in a world where technological and economic development are not subjected to catastrophic disruption.
On the other hand, the probabilities associated with various catastrophic scenarios are specifically estimates that war, pandemics, etc., occur and derail the rise of AI. The “derailment” probabilities are meant to be independent of the “technical barrier” probabilities. (@Ted Sanders should correct me if I’m wrong.)
+1. The derailment probabilities are somewhat independent of the technical barrier probabilities in that they are conditioned on the technical barriers otherwise being overcome (e.g., setting them all to 100%). That said, if you assign high probabilities to the technical barriers being overcome quickly, then the odds of derailment are probably lower, as there are fewer years for derailments to occur and derailments that cause delay by a few years may still be recovered from.