This seems overconfident and somewhat misinformed to me.
First of all, it seems reasonable to wager that Google servers already do a lot of work when traffic is light: log aggregation, data mining, etc.
Secondly, you assume that the AI team would run the AI on Google’s “empty space”. That rests on a host of unspoken assumptions. I’d expect a Google AI team to have a (huge) allocation of resources that they utilize at all times.
Thirdly, it’s quite a leap to jump from “there’s slightly less server load at these hours” to “therefore an AI would go super-intelligent in these hours”. To make such a statement at your level of expressed confidence (with little to no support) strikes me as brazen and arrogant.
Finally, I don’t see how it decreases the risk of a foom. If you already believed that a small AI could foom given a large portion of the world’s resources, then it seems like an AI that starts out with massive computing power should foom even faster.
(The “fooming” of a brute-force AI with a huge portion of the world’s resources involves sharply reducing resource usage while maintaining or expanding resource control.)
If you’re already afraid of small kludge AIs, shouldn’t you be even more afraid of large kludge AIs? If you believe that small-AI is both possible and dangerous, then surely you should be even more afraid of large-AI searching for small-AI with a sizable portion of the world’s resources already in hand. It seems to me like an AI with all of Google’s servers available is likely to find the small-AI faster than a team of human researchers: it already has extraordinary computing power, and it’s likely to have insights that humans are incapable of.
If that’s the case, then a monolithic Google AI is bad news.
(Disclosure: I write software at Google.)
First of all, it seems reasonable to wager that Google servers already do a lot of work when traffic is light: log aggregation, data mining, etc.
That’s why I said “one or two orders of magnitude”.
Thirdly, it’s quite a leap to jump from “there’s slightly less server load at these hours” to “therefore an AI would go super-intelligent in these hours”. To make such a statement at your level of expressed confidence (with little to no support) strikes me as brazen and arrogant.
Thank you. What, you think I believe what I said? I’m a Bayesian. Show me where I expressed a confidence level in that post.
If you already believed that a small AI could foom given a large portion of the world’s resources, then it seems like an AI that starts out with massive computing power should foom even faster.
One variant of the “foom” argument is that software that is “about as intelligent as a human” and runs on a desktop can escape into the Internet and augment its intelligence not by having insights into how to recode itself, but just by getting orders of magnitude more processing power. That then enables it to improve its own code, starting from software no smarter than a human.
If the software can’t grab many more computational resources than it was meant to run with, because those resources don’t exist, that means it has to foom on raw intelligence. That raises the minimum intelligence needed for FOOM to the superhuman level.
If you believe that small-AI is both possible and dangerous, then surely you should be even more afraid of large-AI searching for small-AI with a sizable portion of the world’s resources already in hand.
No. That’s the point of the article! “AI” indicates a program of roughly human intelligence. The intelligence needed to count as AI, and to start an intelligence explosion, is constant. Small AI and large AI have the same level of effective intelligence. A small AI needs to be written in a much more clever manner, to get the same performance out of a desktop as out of the Google data centers. When it grabs a million times more computational power, it will be much more intelligent than a Google AI that started out with the same intelligence when running on a million servers.
Well, log2(24/5) = 2.26. You offered 2.3 bits of further information. It seems like a bit more than 100% confidence… ;-)
Yes, I said 2.3 bits. You got me. I am not really offering 2.3 bits. Just my two cents.
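(For the curious, here is a minimal sketch of that arithmetic; it assumes a uniform prior over the hour of the day, which is one reading of the post rather than something it states.)

```python
import math

hours_per_day = 24
window_hours = 5  # midnight to 5AM Pacific

# Narrowing a uniform prior over the day down to a 5-hour window conveys
# log2(24/5) bits of information.
print(round(math.log2(hours_per_day / window_hours), 2))  # 2.26
```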
That’s why I said “one or two orders of magnitude”.
That’s not the part of your post I was criticizing. I was criticizing this:
And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
Which doesn’t seem to be a good model of how Google servers work.
Show me where I expressed a confidence level in that post.
Confidence in English can be expressed non-numerically. Here are a few sentences that seemed brazenly overconfident to me:
I know when the singularity will occur
(Sensationalized title.)
I can give you 2.3 bits of further information on when the Singularity will occur
(Quoting your measure of transmitted information to two significant digits implies a confidence that I don’t think you should possess.)
So the first bootstrapping AI will be created at Google. It will be designed to use Google’s massive distributed server system. And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
(I understand that among Bayesians there is no certainty, and that a statement of fact should be taken as a statement of high confidence. I did not take this paragraph to express certainty; however, it surely seems to express higher confidence than your arguments merit.)
One variant of the “foom” argument is that software that is “about as intelligent as a human” and runs on a desktop can escape … If the software can’t grab many more computational resources than it was meant to run with, because those resources don’t exist, that means it has to foom on raw intelligence … A small AI needs to be written in a much more clever manner …
Did you even read my counter-argument?
It seems to me like an AI with all of Google’s servers available is likely to find the small-AI faster than a team of human researchers: it already has extraordinary computing power, and it’s likely to have insights that humans are incapable of.
I concede that a large-AI could foom slower than a small-AI, if decreasing resource usage is harder than resource acquisition. You haven’t supported this (rather bold) claim. Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on fewer resources. Fooming is hard no matter how you do it. Your argument hinges upon resource-usage-reduction being far more difficult than scaling, which doesn’t seem obvious to me.
But suppose that I accept it: The Google AI still brings about a foom earlier than it would have come otherwise. A large-AI seems more capable of finding a small-AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans.
A more important implication is that this scenario decreases the possibility of FOOM
I don’t buy it. At best, it doesn’t foom as fast as a small-AI could. Even then, it still seems to drastically increase the probability of a foom.
The confidence I expressed linguistically was to avoid making the article boring. It shouldn’t matter to you how confident I am anyway. Take the ideas and come up with your own probabilities.
The key point, as far as I’m concerned, is that an AI built by a large corporation for a large computational grid doesn’t have this easy FOOM path open to it: Stupidly add orders of magnitude of resources; get smart; THEN redesign self. So the size of the entity that builds the first AI is a crucial variable in thinking about foom scenarios.
I consider it very possible that dollars-that-will-be-spent-to-build-the-first-AI follow a power-law distribution, and hence that total spending is dominated by large corporations, so that scenarios involving them should have more weight in your estimations than scenarios involving lone wolf hackers, no matter how many of those hackers there are.
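As a rough illustration of why a heavy tail concentrates the spending, here is a toy simulation; the Pareto shape and the number of teams are made-up values chosen for illustration, not estimates of anything:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 100,000 would-be AI builders. Budgets are drawn either from a
# heavy-tailed Pareto distribution (shape alpha=1.1, an arbitrary choice) or
# from a thin-tailed exponential distribution with the same mean.
n, alpha = 100_000, 1.1
pareto_budgets = rng.pareto(alpha, n) + 1   # classical Pareto with minimum 1
exp_budgets = rng.exponential(pareto_budgets.mean(), n)

def top_share(budgets, k=100):
    """Fraction of all dollars held by the k biggest spenders."""
    return np.sort(budgets)[-k:].sum() / budgets.sum()

print(f"power law: top 100 hold {top_share(pareto_budgets):.0%} of the money")
print(f"thin tail: top 100 hold {top_share(exp_budgets):.0%} of the money")
```

The exact percentages depend on the made-up parameters; the point is only that a heavy tail puts most of the dollars in very few hands, which is what privileges the corporate scenario.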
Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on fewer resources. Fooming is hard no matter how you do it. Your argument hinges upon resource-usage-reduction being far more difficult than scaling
I do think resource-usage reduction is far more difficult than scaling. The former requires radically new application-specific algorithms; the latter uses general solutions that Google is already familiar with. In fact, I’ll go out on a limb here and say I know (for Bayesian values of the word “know”) that resource-usage reduction is far more difficult than scaling. Scaling is pretty routine and goes on continually at every major website & web application. Reducing the order of complexity of an algorithm is something that happens every 10 years or so, and is considered publication-worthy (which scaling is not).
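To make the contrast concrete, here is a minimal sketch on a made-up task (counting primes below a limit); it is not Google code, it just shows why one kind of improvement is routine while the other takes a problem-specific insight:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes_naive(lo, hi):
    """Trial division over [lo, hi): roughly O((hi - lo) * sqrt(hi)).
    The brute-force baseline."""
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

def count_primes_scaled(limit, workers=8):
    """'Scaling': the same brute-force algorithm, sharded across processes.
    Generic and routine; it buys roughly a constant factor of `workers`."""
    step = limit // workers + 1
    shards = [(lo, min(lo + step, limit)) for lo in range(0, limit, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes_naive, *zip(*shards)))

def count_primes_clever(limit):
    """'Resource-usage reduction': a sieve of Eratosthenes, O(limit log log limit).
    A problem-specific algorithmic insight, not a generic recipe."""
    sieve = bytearray([1]) * limit
    for i in range(min(limit, 2)):
        sieve[i] = 0
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = bytearray(len(range(n * n, limit, n)))
    return sum(sieve)
```

Both count_primes_scaled(10**5) and count_primes_clever(10**5) return 9592, but only the sieve changes the asymptotics; adding workers to the brute force is the analogue of adding servers, and it never buys more than a constant factor.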
My argument has larger consequences (a greater FOOM delay) if resource-usage reduction really is that much harder, but it doesn’t depend on that to imply some delay. The big AI has to scale itself down a very great deal simply to be as resource-efficient as the small AI. After doing so, it is then in exactly the same starting position as the small AI. So foom is delayed by however long it takes a big AI to scale itself down to a small AI.
But suppose that I accept it: The Google AI still brings about a foom earlier than it would have come otherwise.
Yes, foom at an earlier date. But a foom with more advance warning, at least to someone.
A large-AI seems more capable of finding a small-AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans.
No; the large AI is the first AI built, and is therefore roughly as smart as a human, whether it is big or small.