I’d like to make explicit the connection of this idea to hard takeoff, since it’s something I’ve thought about before but is rarely stated outright. Namely, this gives some reason to think that by the time an AGI is human-level at the things humans have evolved to do, it will be far superhuman at the things humans find harder, like math and engineering.