Yeah, human-level is supposed to mean not strongly superhuman at anything important, while also not being strongly subhuman at anything important.
I think that’s roughly the concept Nick Bostrom used in Superintelligence when discussing takeoff dynamics. (The usage of that concept is my only major disagreement with that book.) IMO it would be very surprising if the first ML system that is not strongly subhuman at anything important would not be strongly superhuman at anything important (assuming this property is not optimized for).
The most capable humans are often much more capable than the average and thus not superhuman. I remember the example of a hacker who gave a talk at the CCC about how he was on vacation in Taiwan and hacked their electronic payment system on the side. If you could scale him up 10,000 or 100,000 times, the kind of cyberwar you could wage would be enormous.
OK, fair enough.
Yeah, I think I agree with that. Nice.