Then, as a necessary condition (leaving other risks out of the discussion for the moment), you either don't believe in the feasibility of AGI, or you believe in an objective morality that any AGI will "discover". Which is it?
I don't believe in the feasibility of any scenario like AGI foom.
First, I fail to see how anybody taking an outside view on AI research can think otherwise: it is a clear instance of the class of sciences with extraordinary claims and a very long history of failure to deliver in spite of unusually adequate funding. To me, assigning non-negligible probabilities to scenarios like that seems like an extreme case of insider bias. Virtually no science with these characteristics has delivered what it promised (even if it delivered something useful and vaguely related).
Even if AGI happens, it is extraordinarily unlikely to be any kind of foom, again based on the outside-view argument that virtually no disruptive technology has ever been foom-like.
Both of these extraordinarily unlikely events would have to occur before we were exposed to the risk of AGI-caused destruction of humanity, and even then that destruction is far from certain.
It seems like you’re reversing stupidity here. What correlation does a failed prediction have with the future?
It's not reverse stupidity; it's "reference class forecasting", a more specific instance of the generic "outside view" concept. I take AI research as an instance, look at other cases with similar characteristics (hyped, overpromised, and underdelivered over a very long time span), and estimate based on those. This approach has been shown to work better than the inside view of estimating from the details of the particular case.
http://en.wikipedia.org/wiki/Reference_class_forecasting
I agree that reference class forecasting is reasonable here. I disagree that you can get anything like the 99.999% probability you claim from applying reference class forecasting to AI projects. Since rare events happen, well, rarely, it would take an exceedingly large data-set before an “outside view” or frequency-based analysis would imply that our actual expected rate should be placed as low as your stated 0.001%. (If I flip a coin with unknown weighting 20 times, and get no heads, I should conclude that heads are probably rare, but my notion of “rare” here should be on the order of 1 in 20, not of 1 in 100,000.)
With more precision: let's say that there's a "true probability", p, that any given project's "AI will be created by us" claim is correct, and let's model p as identical for all projects and times. Then, if we assume a uniform prior over p, and if the n AI projects that have been tried to date have all failed to deliver, we should assign a probability of (n+1)/(n+2) to the chance that the next project from which AI is forecast will also fail to deliver. (You can work this out with an integral, or just plug into Laplace's rule of succession.)
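For what it's worth, here is a one-line sketch of the integral behind that (n+1)/(n+2) figure, under the same assumptions as above (uniform prior over p, n observed failures and no successes):

```latex
% Posterior over p after n failures and no successes (uniform prior): proportional to (1-p)^n.
% The probability that the next project also fails is the posterior mean of (1-p):
\[
\Pr(\text{next fails} \mid n \text{ failures})
  = \frac{\int_0^1 (1-p)^{\,n+1}\,dp}{\int_0^1 (1-p)^{\,n}\,dp}
  = \frac{1/(n+2)}{1/(n+1)}
  = \frac{n+1}{n+2}.
\]
```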
If people have been forecasting AI since about 1950, and if the rate of forecasts or AI projects per decade has been more or less unchanged, then treating each failed decade as one trial, the above reference class forecasting model leaves us with something like a 1/(number of decades since 1950 + 2) = 1/8 probability of some "our project will make AI" forecast being correct in the next decade.
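And a minimal Python sketch of the same arithmetic; the count of roughly six failed decades since 1950, and the treatment of coin flips and decades as exchangeable trials, are assumptions of this toy model rather than anything established:

```python
# Toy rule-of-succession calculator for the estimates discussed above.
# Assumes a uniform prior over the unknown success probability and treats
# each trial (a coin flip, a decade of AI forecasts) as exchangeable.

def laplace_next_success(successes: int, trials: int) -> float:
    """P(next trial succeeds) under Laplace's rule of succession."""
    return (successes + 1) / (trials + 2)

# Coin example: 20 flips, no heads -> about 1 in 22, i.e. "on the order of 1 in 20".
print(laplace_next_success(0, 20))  # 0.0454...

# AI forecasts: roughly 6 failed decades since 1950 -> 1/8 per coming decade.
print(laplace_next_success(0, 6))   # 0.125
```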
Oops. You’re totally right.
That said, I still take issue with reference class forecasting as support for this statement:
I don't believe in the feasibility of any scenario like AGI foom.
Considering that the general question "is the foom scenario feasible?" doesn't have any concrete timelines attached to it, the speed and direction of AI research don't bear heavily on it. All you can say about it based on reference class forecasting is that foom is a long way off if it is both possible and requires much further AI research progress.
Even if AGI happens, it is extraordinarily unlikely to be any kind of foom, again based on the outside-view argument that virtually no disruptive technology has ever been foom-like.
I'm not sure "disruptive technology" is the obvious reference class for AGI. The term basically dereferences to "engineered human-level intelligence", which more naturally suggests comparisons to various humans, hominids, primates, etc.
A reasonable position, so long as you remain truly ignorant of what AI is specifically about.
I don't know if inside view forecasting can ever be more reliable than outside view forecasting. It seems that insiders, as a general and very robust rule, tend to be strongly overconfident, and to see all kinds of reasons why their particular instance is different and will have a better outcome than the reference class.
http://www.overcomingbias.com/2007/07/beware-the-insi.html
http://en.wikipedia.org/wiki/Reference_class_forecasting
Try applying that to physics, engineering, biology, or any other technical field. In many cases, the outside view doesn’t stand a chance.