The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point, that abstraction becomes pretty meaningless, but in the early days, a powerful, bootstrapping optimization agent could still incorporate, hire or persuade people to do things for it, make rapid innovations in various fields, have machines made of various types, and generally wind up running the place fairly quickly, even if the problem of bootstrapping versatile nanomachines from current technology turns out to be time-consuming for a superintelligence. I would imagine that nanotech would be where it’d go in the longer run, but that might take time—I don’t know, I don’t know enough about the subject. But even without strong Drexlerian nanotechnology, it’s still possible to get an awful lot done.
With that much I totally agree.