Do you know if Andrew Ng or Yann LeCun has made a specific prediction that AGI won’t arrive by some date? I couldn’t find one through a quick search. I don’t know which others to include.
In his AI Insight Forum statement, Andrew Ng puts 1% on “This rogue AI system gains the ability (perhaps access to nuclear weapons, or skill at manipulating people into using such weapons) to wipe out humanity” in the next 100 years (conditional on there existing a rogue AI system that doesn’t go unchecked by other AI systems). And overall, a 1 in 10 million chance of AI causing extinction in the next 100 years.
Thanks, added.
I don’t know. But here’s an example of the sort of thing I’m talking about: “Transformative AGI by 2043 is <1% likely” (LessWrong).
More generally, you can probably find people expressing strong disagreement with, or outright dismissal of, various short-timelines predictions.
Ok, I added this prediction.