I find it interesting how he says that there is no such thing as AGI, but acknowledges that machines will “eventually surpass human intelligence in all domains where humans are intelligent” as that would meet most people’s definition of AGI.
I don’t see how saying that machines will “eventually surpass human intelligence in all domains where humans are intelligent” implies the G in AGI.
Oh, so you’re suggesting that he thinks they’ll be separate AIs?
That’s what I understood when I read that sentence, yes.