which shows how incoherent and contradictory people are: they expect superintelligence before human-level AI. What question are they actually answering here?
“the road to superintelligence goes not via human equivalence, but around it”
so, yes, it’s reasonable to expect to have wildly superintelligent AI systems (e.g. clearly superintelligent AI researchers and software engineers) before all important AI deficits compared to human abilities are patched
A visual representation of what you mean (imagine the red border doesn’t strictly dominate the blue one), from an AI Impacts blog post by Katja Grace: