I am not sure how anyone would say that “[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI” unless he hasn’t really followed the breakthroughs of the past few months or has read only bad secondhand reports.
Strong AI, yes. True AI, probably not (that’s just my guess). I started following this fairly recently; can you (or someone) provide some links to articles/posts with updated timeline predictions that factor in recent breakthroughs? How far are we from true AI?
I think there were some not-insignificant updates to the Metaculus aggregate predictions for AGI timelines in the past few months: https://www.lesswrong.com/posts/CiYSFaQvtwj98csqG/metaculus-predicts-weak-agi-in-2-years-and-agi-in-10#comments
What do you mean by true AI?
Y’know, True AI, like True Scotsmen.
I guess what I’m calling ‘true AI’ is not unlike the stated goal of general intelligence, or AGI. As opposed to narrow AI (also called weak AI), true AI is what the average sci-fi fan thinks of as AI (as in movies such as ‘Ex Machina’, ‘2001: A Space Odyssey’ or ‘Zoe’): something that seems conscious, exercises free will and demonstrates human-like cognitive degrees of freedom.
With recent breakthroughs it may be useful to separate those terms: we may have AGI soon, but it will still be narrow in a lot of ways. True AI is still far off, in my opinion. I don’t think it will emerge directly from large language models; more likely it will come from a new substrate that’s more dynamic than current computer chips, circuit boards, semiconductors, etc. The invention/discovery of that new substrate will be the biggest bottleneck to true AI.