I find it interesting that he says there is no such thing as AGI, yet acknowledges that machines will “eventually surpass human intelligence in all domains where humans are intelligent”, since that would meet most people’s definition of AGI.
The somewhat-reasonable-position-adjacent-to-what-Yann-believes would be: “I don’t like the term ‘AGI’. It gives the wrong idea. We should use a different term instead. I like ‘human-level AI’.”
I.e., it’s a purely terminological complaint. And it’s not a crazy one! Lots of reasonable people think that “AGI” was a poorly-chosen term, although I still think it’s possibly the least-bad option.
Yann’s actual rhetorical approach tends to be:
Step 1: (re)-define the term “AGI” in his own idiosyncratic and completely insane way;
Step 2: say there’s no such thing as “AGI” (as so defined), and that anyone who talks about AGI is a moron.
I talk about it in much more detail here.