I agree with:
- Most people trying to figure out what's true should be mostly trying to develop views on the basis of public information, not giving too much weight to supposed secret information.
- It's good to react skeptically to someone claiming "we have secret information implying that what we are doing is super important."
- Understanding the sociopolitical situation seems like a worthwhile step in informing views about AI.
- It would be wild if 73% of tech executives thought AGI would be developed in the next 10 years. (And independent of the truth of that claim, people do have a lot of wild views about automation.)
I disagree with:
- Norms of discourse in the broader community are significantly biased towards short timelines. The actual evidence in this post seems thin and cherry-picked. I think the best evidence is the a priori argument that you'd expect the community to be biased towards short timelines, given that short timelines make our work seem more important. That's good as far as it goes, but the conclusion here is overstated.
- "Whistleblowers" about long timelines are ostracized or discredited. Again, the evidence in your post seems thin and cherry-picked, and your contemporary example seems wrong to me (I commented separately). It seems like most people criticizing deep learning or short timelines have a good time in the AI community, while people with the "AGI in 20 years" view are regarded much more poorly within academia and most parts of industry. This could be a matter of different fora and communities being in different equilibria, but I'm not sure how that's compatible with "ostracizing." (It feels like you are probably mistaken about the tenor of discussions in the AI community.)
- That 73% of tech executives thought AGI would be developed in the next 10 years. I'm willing to bet against the quoted survey: the white paper is thin on details and leaves lots of wiggle room for chicanery, while the project seems thoroughly optimized to make AI seem like a big deal soon. The claim also just doesn't match my experience with anyone who might be called a tech executive (though I don't know how they constructed the group).
Definitely agree that the AI community is not biased towards short timelines. Long timelines are the dominant view, while the short timelines view is associated with hype. Many researchers are concerned about the field losing credibility (and funding) if the hype bubble bursts, and this is especially true for those who experienced the AI winters. They see the long timelines view as appropriately skeptical and more scientifically respectable.
Some examples of high-profile AI researchers stating that AGI is far away:
- Geoffrey Hinton: https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis-hassabis-agi-is-nowhere-close-to-being-a-reality/
- Yann LeCun: https://www.facebook.com/yann.lecun/posts/10153426023477143, https://futurism.com/conscious-ai-decades-away, https://www.facebook.com/yann.lecun/posts/10153368458167143
- Yoshua Bengio: https://www.lesswrong.com/posts/4qPy8jwRxLg9qWLiG/yoshua-bengio-on-ai-progress-hype-and-risks
- Rodney Brooks: https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/, https://rodneybrooks.com/agi-has-been-delayed/