Using their “AGI Forecaster”: if there are no technical barriers, the risk of derailment makes the probability (of transformative AGI within 20 years) 37.7%; if there is no risk of derailment, the technical barriers make the probability 1.1%.
I get the same numbers on the web app, but I don’t see how it relates to my comment, can you elaborate?
If there are no technical barriers, they are estimating a 37.7% chance of transformative AGI (which, once created, they estimate carries a 5 to 50% extinction risk) and a 62.3% chance of “derailment”. Some of the “derailments” are themselves extinction risks.
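To make the arithmetic concrete, here is a minimal sketch of how those two numbers combine. The only inputs are the two app outputs quoted above; the per-step decomposition in the comments reflects the paper's general multiplicative structure, not its exact line items:

```python
# The forecast's multiplicative structure: the two numbers quoted above are
# the two factors of the headline estimate.

p_barriers = 0.011    # app output with all derailment risks set to zero
p_no_derail = 0.377   # app output with all technical barriers set to 100%

p_tagi = p_barriers * p_no_derail
print(f"P(transformative AGI by 2043) ≈ {p_tagi:.4f}")  # ≈ 0.0041, i.e. ~0.4%

# Each factor is itself a product of per-step estimates:
#   p_barriers  ≈ ∏ P(technical step i achieved)
#   p_no_derail ≈ ∏ (1 − P(derailment j))
# so setting one group to certainty in the app isolates the other group's product.
```

The product is roughly 0.4%, consistent with the paper's headline estimate.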
if there is no risk of derailment, the technical barriers make the probability 1.1%.
I don’t think we can use the paper’s probabilities this way, because technical barriers are not independent of derailments. For example, if there is no risk of severe war, then we should forecast higher production of chips and power. This means the 1.1% figure should increase.
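A toy calculation of the direction of this effect (every number below is hypothetical, chosen purely to illustrate the dependence):

```python
# Toy illustration of the dependence objection; all numbers are hypothetical.
# Suppose a per-step estimate like "enough chips and power" silently averages
# over worlds with and without a severe war:

p_war = 0.30                 # hypothetical chance of a severe war by 2043
p_chips_given_war = 0.05     # chip/power scale-up succeeds despite war
p_chips_given_peace = 0.40   # chip/power scale-up succeeds in peacetime

# An unconditional per-step estimate averages over both worlds:
p_chips = p_war * p_chips_given_war + (1 - p_war) * p_chips_given_peace
print(f"unconditional: {p_chips:.3f}")              # 0.295

# Conditioning on "no severe war" raises the factor, and with it the 1.1%:
print(f"given no war:  {p_chips_given_peace:.3f}")  # 0.400
```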
Mostly I was responding to this:

As I read these predictions, one of the main reasons that “transformative AGI” is unlikely by 2043 is the possibility of severe catastrophes (war, pandemics, and other causes).
… in order to emphasize that, even without catastrophe, they say the technical barriers alone make “transformative AGI in the next 20 years” only 1% likely.
I don’t think we can use the paper’s probabilities this way, because technical barriers are not independent of derailments.
I disagree. The probabilities they give for the technical barriers (which include economic issues of development and deployment) are meant to convey how unlikely each of the necessary technical steps is, even in a world where technological and economic development are not subject to catastrophic disruption.
On the other hand, the probabilities associated with the various catastrophic scenarios are specifically estimates that war, pandemics, etc., occur and derail the rise of AI. The “derailment” probabilities are meant to be independent of the “technical barrier” probabilities. (@Ted Sanders should correct me if I’m wrong.)
+1. The derailment probabilities are somewhat independent of the technical barrier probabilities in that they are conditioned on the technical barriers otherwise being overcome (e.g., setting them all to 100%). That said, if you assign high probabilities to the technical barriers being overcome quickly, then the odds of derailment are probably lower, as there are fewer years in which derailments can occur, and derailments that delay progress by a few years may still be recovered from.
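A rough way to see this timing interaction is to assume a constant annual derailment hazard; the hazard value below is hypothetical, back-solved so that a 20-year horizon reproduces the 62.3% figure above:

```python
# Timing interaction, sketched with a constant annual derailment hazard.
# The hazard is hypothetical, chosen so that 20 years gives ~62.3%.

ANNUAL_HAZARD = 0.0476

def p_derail(years: float, hazard: float = ANNUAL_HAZARD) -> float:
    """Probability of at least one derailment within the given horizon."""
    return 1 - (1 - hazard) ** years

print(f"{p_derail(20):.3f}")  # ≈ 0.623 over a 20-year race to TAGI
print(f"{p_derail(10):.3f}")  # ≈ 0.386 if the barriers fall in 10 years
```

This captures only the “fewer years at risk” half of the point; counting delay-only derailments as recoverable would lower the effective derailment odds further.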