I agree that reference class forecasting is reasonable here. I disagree that you can get anything like the 99.999% probability you claim from applying reference class forecasting to AI projects. Since rare events happen, well, rarely, it would take an exceedingly large data-set before an “outside view” or frequency-based analysis would imply that our actual expected rate should be placed as low as your stated 0.001%. (If I flip a coin with unknown weighting 20 times, and get no heads, I should conclude that heads are probably rare, but my notion of “rare” here should be on the order of 1 in 20, not of 1 in 100,000.)
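To make the coin example concrete, here is a quick numeric sketch (my own illustration, assuming only Python's standard library and a uniform prior over the coin's weighting):

```python
# Quick check of the coin intuition: 20 flips, zero heads, uniform prior over
# the coin's heads-probability. The posterior mean chance of heads on the next
# flip is (heads + 1) / (flips + 2) -- about 1 in 22, i.e. "rare" on the order
# of 1 in 20, nowhere near 1 in 100,000.
from fractions import Fraction

flips, heads = 20, 0
posterior_mean_heads = Fraction(heads + 1, flips + 2)
print(posterior_mean_heads, float(posterior_mean_heads))  # 1/22, ~0.045
```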
With more precision: let’s say that there’s a “true probability”, p, that any given project’s “AI will be created by us” claim is correct. And let’s model p as being identical for all projects and times. Then, if we assume a uniform prior over p, and if the n AI projects that have been tried to date have all failed to deliver, we should assign a probability of (n+1)/(n+2) to the chance that the next project from which AI is forecast will also fail to deliver. (You can work this out by an integral, or just plug into Laplace’s rule of succession.)
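Spelling out that integral, under the same uniform-prior assumption: the posterior after n failures is proportional to (1 − p)^n, and the chance the next project also fails is the posterior mean of 1 − p,

\[
P(\text{next fails} \mid n \text{ failures}) \;=\; \frac{\int_0^1 (1-p)^{\,n+1}\,dp}{\int_0^1 (1-p)^{\,n}\,dp} \;=\; \frac{1/(n+2)}{1/(n+1)} \;=\; \frac{n+1}{n+2}.
\]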
If people have been forecasting AI since about 1950, and if the rate of forecasts or AI projects per decade has been more or less unchanged, then treating each decade as a single trial, the above reference class forecasting model leaves us with something like a 1/[number of decades since 1950 + 2] = 1/8 probability that some “our project will make AI” forecast will be correct in the next decade.
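As a sanity check on that arithmetic, here is a minimal sketch (my own illustration, treating each decade since 1950 as one failed trial and assuming Python's standard library):

```python
# Decade-level version of the same calculation: each decade since 1950 is one
# failed "AI will arrive this decade" trial; ask for the chance the forecast
# comes true in the next decade.
from fractions import Fraction

def p_next_success(n_failures):
    # Rule of succession: chance the next trial succeeds after n straight
    # failures, under a uniform prior on the per-trial success probability.
    return Fraction(1, n_failures + 2)

decades_of_failed_forecasts = 6  # roughly the 1950s through the 2000s
print(p_next_success(decades_of_failed_forecasts))  # -> 1/8
```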