It seems fairly likely that the median expert in the relevant fields would assign a probability above 5% but below 50% to the Legg/Kurzweil timelines, particularly if you factored out mysticism- or religion-based skepticism.
This is helpful. One question, though: does this mean "For any given year, a relevant expert would assign only 1/20 to 1/2 the probability of FOOM by that year that Legg and Kurzweil do"? If not, what does it mean?
Shane Legg says that there is a 95% probability of human-level AI by 2045. Kurzweil doesn’t give probabilities, but claims high confidence in Turing-Test-passing AI by 2029 and a slow-takeoff Singularity over the following two decades. I would bet that a representative sample of experts would assign less than 50% probability to human-level AI by 2045, but more than 5%.
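To make the two readings in the question concrete, here is a minimal sketch in Python. Every distribution and parameter below is an illustrative assumption, not anyone's actual forecast: it models each forecast as a cumulative probability of human-level AI arriving by a given year, and shows that "the expert assigns 5-50% to the claim" (a single cumulative probability for 2045) is not the same as "the expert assigns 1/20 to 1/2 of Legg's probability for every year" (a per-year ratio, which varies with the year).

```python
# Minimal sketch: compare two hypothetical AI-timeline forecasts as
# cumulative distributions over arrival years. All parameters below
# are illustrative assumptions, not anyone's actual numbers.
from math import erf, sqrt

def arrival_cdf(year, mu, sigma):
    """Cumulative probability that human-level AI arrives by `year`,
    modeled (purely for illustration) as a normal distribution over years."""
    return 0.5 * (1 + erf((year - mu) / (sigma * sqrt(2))))

# A "Legg-like" forecast: roughly 95% cumulative probability by 2045.
legg = lambda year: arrival_cdf(year, mu=2028, sigma=10)

# A hypothetical median expert: between 5% and 50% by 2045.
expert = lambda year: arrival_cdf(year, mu=2065, sigma=25)

for year in (2029, 2045, 2060):
    p_l, p_e = legg(year), expert(year)
    print(f"{year}: legg={p_l:.2f}, expert={p_e:.2f}, ratio={p_e / p_l:.2f}")
# The expert's probability for 2045 lands between 0.05 and 0.50, but the
# expert/legg ratio drifts from year to year -- so the two readings in
# the question above genuinely come apart.
```

Under these made-up parameters the expert's 2045 probability sits in the 5-50% band while the per-year ratio moves from roughly 0.14 in 2029 to roughly 0.42 in 2060, which is why the two readings need to be distinguished.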
> Shane Legg says that there is a 95% probability of human-level AI by 2045.
I was surprised; his recent post didn’t leave me with this impression, and I didn’t remember his earlier statements well enough. But apparently this is correct: here’s the post, along with a visualization of the prediction endorsed by Legg.
Cool, thanks.