This seems like way too high a bar. You can clearly have transformative or risky AI systems that are still worse than humans at some tasks; in fact, that strikes me as the most likely outcome.
I think this is what Yudkowsky thinks also? (As for why it was relevant to bring up, Yudkowsky was answering the host’s question of “How is superintelligence different than general intelligence?”)