Yes, it is a biased sample. However, reality is not a democracy: some people have better ideas than others.
Personally, I think that the within-SIAI view of AI takeoff timelines will suffer from bias: it’ll be emotionally tempted into putting down timelines that are too near term. But I don’t know how much to correct for this.
A primitive outside view analysis that I did indicates a ~50% probability of superintelligent AI by 2100.
Could you elaborate a bit on this analysis? It’d be interesting to see how you arrived at that number.
Take a log-normal prior for when human-level AI will be developed, with t_0 at 1956. Choose the remaining two parameters to line up with the stated beliefs of the first AI researchers—i.e. they did not expect human-level AI to arrive within a year, but they seem to have assigned significant probability to it happening by 1970. Then update that prior on the fact that, in 2010, we still have no human-level AI.
This “outside view” model takes into account the evidence provided by the failure of the past 54 years of AI research, and I think it is a reasonable model.
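A minimal sketch of this calculation, assuming illustrative values for the 1956 beliefs (roughly 10% within one year, roughly 50% by 1970) rather than the exact numbers used above, might look like this:

```python
import numpy as np
from scipy import stats

# Illustrative sketch of the outside-view model described above.
# ASSUMED inputs (not from the original comment): the 1956 researchers
# gave ~50% probability to human-level AI by 1970 and ~10% to it
# arriving within one year. With a log-normal prior over
# T = years after 1956 until human-level AI, these two constraints
# pin down both free parameters.
p_by_1970   = 0.50   # assumed "significant probability" by 1970
p_within_1y = 0.10   # assumed small chance of success within a year

mu    = np.log(1970 - 1956)                   # 50% by 1970 -> median T = 14 years
sigma = (np.log(1) - mu) / stats.norm.ppf(p_within_1y)
prior = stats.lognorm(s=sigma, scale=np.exp(mu))

# Update on the observation "still no human-level AI in 2010",
# i.e. condition the prior on T > 2010 - 1956 = 54 years.
t_obs, t_2100 = 2010 - 1956, 2100 - 1956
posterior = (prior.cdf(t_2100) - prior.cdf(t_obs)) / prior.sf(t_obs)
print(f"P(human-level AI by 2100 | none by 2010) ~ {posterior:.2f}")
```

With these particular assumed 1956 probabilities the posterior happens to come out near 0.5, in line with the ~50%-by-2100 figure quoted above; different assumptions about what the first researchers believed would shift the result.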
Thanks, that was indeed interesting.
Now, the only point I do not understand yet is how the expectations of the original AI researchers are a factor in this. Do you have some reason to believe that their expectations were too optimistic by a factor of about 10 (1970 vs 2100) rather than some other number?
They are a factor because their opinions in 1956, before the data had been seen, form a basis for constructing a prior that was not causally affected by the data.