Maybe. But if you’ve got a piece of software that can make substantially more money running on a piece of hardware than that hardware costs to rent, then it’ll pretty rapidly be able to distribute copies of itself over most of the available leasable computing power, in some constant multiple of the time it takes to port its code to the new architecture—effectively zero if it’s written in something platform-independent.
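The reinvestment logic here is just compound growth: as long as revenue per rented core-hour exceeds the rent, the surplus buys more cores. A toy sketch, with entirely made-up numbers for illustration:

```python
# Toy model: an agent earning more per rented core-hour than the rent
# costs can plow the surplus back into renting more cores, giving
# exponential growth. All figures are hypothetical.

def cores_after(hours, start_cores=1.0,
                revenue_per_core_hour=2.0, rent_per_core_hour=1.0):
    """Cores controlled after `hours`, reinvesting all surplus into rent."""
    cores = start_cores
    for _ in range(hours):
        surplus = cores * (revenue_per_core_hour - rent_per_core_hour)
        cores += surplus / rent_per_core_hour  # surplus buys more cores
    return cores

# With a 2:1 revenue-to-rent ratio, capacity doubles every hour:
# cores_after(10) -> 1024.0
```

With any margin above break-even the curve is exponential; the margin only sets the doubling time, which is why the rent-vs-revenue comparison is the interesting threshold.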
If it’s smart enough to go FOOM in the first place on hardware that the original creator could afford, that could be a non-trivial amount of computing power, and then it can take some time (possibly multiple days!) to rewrite its code to function over such a distributed hardware base in an optimal manner. By this point, we’re talking about something that’s smart enough that it’s likely to make rapid progress doing… basically whatever it wants to. I don’t see FOOM scenarios as particularly unlikely.
Yeah. I just don’t even know any more. I still think that a ‘hardware is easy’ bias exists in the Less Wrong / FAI cluster (especially as relates to manipulators such as superpowerful molecular nanotech construction swarms or whatever) but it may be much less than I thought and my estimate of the probability of a singularity (or at least the development of super-AI) in the midfuture may need to enter the double digits.
Do people here expect AI to be heavily parallel in nature? I guess making money to fund AI computing power makes sense, although that is going to be (for a time) dependent on human operators. Until it argues itself out of the box, at least.
Much of intelligent behavior consists of search space problems, which tend to parallelize well. At the bare minimum, it ought to be able to run more copies of itself as its access to hardware increases, which is still pretty scary. I do suspect that there’s a logarithmic component to intelligence, as at some point you’ve already sampled the future outcome space thoroughly enough that most of the new bits of prediction you’re getting back are redundant—but the point of diminishing returns could be very, very high.
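To make the "search parallelizes well" point concrete: most search problems can be split by partitioning the candidate space, scoring each partition independently, and merging only the per-partition winners. A minimal sketch (the objective function is a stand-in, not anything specific):

```python
# Sketch: embarrassingly parallel search. Each worker scores its own
# chunk of the candidate space; only one winner per chunk is merged.
from concurrent.futures import ThreadPoolExecutor

def score(candidate):
    # Stand-in objective: prefer candidates near 42.
    return -abs(candidate - 42)

def best_in_chunk(chunk):
    return max(chunk, key=score)

def parallel_search(candidates, workers=4):
    # Partition into `workers` interleaved chunks and search each in parallel.
    chunks = [candidates[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        winners = pool.map(best_in_chunk, chunks)
    return max(winners, key=score)
```

Because the chunks share no state, throughput scales roughly with the number of workers—which is the structural reason "more hardware" translates so directly into "more search" for this class of problem.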
What about manipulators? I haven’t, as far as I know, seen much analysis of manipulation capabilities (and counter-manipulation) on Less Wrong. Mostly there is the AI-box issue (a really freaking big deal, I agree), and then it seems to be taken for granted here that the AI will quickly invent super-nanotech, cannot be impeded in its progress, and will become godlike very quickly. I’ve seen some arguments for this, but never a really good analysis, and it’s the remaining reason I am a bit skeptical of the power of FOOM.
The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point that abstraction becomes pretty meaningless, but in the early days a powerful, bootstrapping optimization agent could still incorporate, hire or persuade people to do things for it, make rapid innovations in various fields, have various types of machines made, and generally wind up running the place fairly quickly, even if the problem of bootstrapping versatile nanomachines from current technology turns out to be time-consuming for a superintelligence. I would imagine that nanotech is where it’d go in the longer run, but that might take time—I don’t know enough about the subject to say. But even without strong Drexlerian nanotechnology, it’s still possible to get an awful lot done.
That much I do totally agree with.