Why can’t we just make a CPU as large as a dump truck, that can store a thousand petabytes, then run an AI and try to evolve intelligence? I can’t imagine that this is beyond the technology of 2015.
(Not that this would be a good idea, I’m just saying that it seems possible.)
Lots of reasons, some of which Vaniver and ShardPhoenix have already given, but one of the big ones is that CPUs dissipate a truly enormous amount of heat for their size. Your average laptop i7 consumes about thirty watts, essentially all of which ends up as heat one way or another, and it’s about a centimeter square (the chip you see on the motherboard is bigger, but a lot of that is connections and housing). Let’s call that about the size of a penny. That’s an overestimate, but as we’ll see, it won’t matter much.
Now, a quick Google tells me that a dump truck can hold about 20 cubic meters (= 20,000 liters), and that a liter holds about 2,000 closely packed pennies. So if we assume something with around the same packing and thermal efficiency, our dump-truck-sized CPU will be putting out about 30 × 2,000 × 20,000 W = 1.2 gigawatts of heat, or a bit more than the combined peak output of the two nuclear reactors powering a Nimitz-class aircraft carrier.
This poses certain design issues.
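The back-of-envelope arithmetic above can be checked in a few lines. All three inputs are the rough guesses from the comment, not measured values:

```python
# Rough heat estimate for a dump-truck-sized pile of CPUs.
# Every figure here is the comment's guess, not a measured value.
watts_per_chip = 30          # one laptop i7, roughly penny-sized
chips_per_liter = 2000       # closely packed pennies per liter
truck_liters = 20 * 1000     # ~20 cubic meters of cargo volume

total_watts = watts_per_chip * chips_per_liter * truck_liters
print(total_watts / 1e9, "GW")  # 1.2 GW
```

Getting 1.2 GW of heat out of a truck-sized volume is the design issue: cooling capacity scales roughly with surface area, while this heat load scales with volume.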
There’s a limit to how large we can scale computers at any given tech level. What you’re talking about is basically what a supercomputer is (they have many CPUs rather than one huge one), but there’s still a limit to what’s practical with them.
What do you mean by “evolve intelligence”? Run evolutionary algorithms on random bits of code? How do you evaluate the results? Before you can use a search algorithm you have to be able to define the target, which is most of the problem in this case; besides, search is likely to be impractically slow in something as big as “the space of all programs”.
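To make the point concrete, here is a minimal toy sketch of such an evolutionary loop (the `mutate` and `fitness` callables are hypothetical stand-ins). The loop itself is trivial; all the difficulty hides in `fitness`, which for “evolving intelligence” nobody knows how to write:

```python
import random

def evolve(population, generations, mutate, fitness):
    """Generic evolutionary search: keep the better half, refill by mutation."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(len(ranked) - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

# Works fine when the target is trivial to score, e.g. "maximize a number":
random.seed(0)
best = evolve([0.0] * 10, 100,
              mutate=lambda x: x + random.uniform(-1, 1),
              fitness=lambda x: x)
# For "intelligence" there is no such scoring function to plug in.
```

The toy example converges only because scoring a candidate costs nothing and the search space is one-dimensional; neither holds for “the space of all programs”.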
Having 1000+ petabytes of storage is not impossible with our level of technology. Focusing on the physical absurdity of a house-sized computer seems somewhat nitpicky.
Run Watson, select the Watsons that can solve problems better.
1000 petabytes of what? RAM? How do you know that’s enough to do what you want anyway? My point at any rate is that we can’t grab a billion dollars and make some computer that is “fast enough to ‘evolve an AI’” just by throwing money at the problem—universities, companies and governments are spending money right now on supercomputers, and they still have limitations due to underlying technical issues like cooling and inter-processor communication (as the other commenters pointed out).
Watson is a big, complex program, not some small DNA-like seed that can easily be mutated and iterated on automatically. There’s no known small seed that generates anything like a generally intelligent agent (except, of course, DNA itself and the resulting biology, which can’t be efficiently simulated even on a supercomputer).
If you, personally, were given a zillion dollars and told to implement this plan yourself, how would you do it?
No idea. What relevance does that have?
You’re assuming that someone, given a zillion dollars, could implement your plan, but if you don’t even know where to begin implementing it yourself, what reason do you have to believe someone else would?
Put another way, if “I can’t imagine we can’t [X] given the technology of 2015” works when X is “evolve artificial intelligence”, why wouldn’t it work for any other X you care to imagine?
For example, because Eitan Zohar is not an expert in that.
I don’t know where I would start if I had to send a manned spaceship to Mars, but that doesn’t mean I expect nobody to know.
Where does your confidence that somebody (or some distributed group of people) knows how to send a manned spacecraft to Mars come from? It’s not like anyone’s ever exhibited this knowledge before.
Something must make you think “hey, sending people to Mars is possible”. The important question as far as I am concerned is whether that’s a good-something or a bad-something. In the case of “evolving artificial intelligence with a computer the size of a dump truck must be possible”, I think it’s a bad-something.
People are working on going to Mars. AFAIK, the main barrier is the cost.
Back to the original question, I can imagine where to start with evolving intelligence, but I’d need much more than a petabyte. (Although, actually, FLOPS matter more than bytes here, I think.)
I think the relevance is that no presently living human being knows how to program an AI, whether with an evolutionary algorithm or in any other way, no matter how powerful the hardware they may have.
The AI problem is a software problem, and no one has yet solved it.
A thousand petabytes is probably enough to run one human-equivalent brain. In order to evolve intelligence, I’m guessing you would need to run thousands of brains for millions of generations.
I doubt it, since our actual brains run on less than a hundred terabytes (I’m not sure whether gray matter or CPU hardware is more efficient). Our brains also use a huge amount of that for things like emotion or body processes. We’re just looking for an AI that can create something more intelligent than itself.
10^11 neurons × 10^4 synapses per neuron: even if each synapse can be represented as a single 8-bit number (very optimistic), that’s a petabyte of storage needed. Bostrom puts a hundred terabytes as the lowest estimate, with a spiking neural network model at 10 petabytes. If metabolism has to be simulated too, that pushes the estimate to an exabyte, and the more pessimistic (but less plausible) models go beyond this.
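The arithmetic behind that petabyte figure, using the comment’s own numbers:

```python
# Storage estimate for representing one human brain's synapses,
# using the optimistic assumption of one byte per synapse.
neurons = 10**11
synapses_per_neuron = 10**4
bytes_per_synapse = 1  # a single 8-bit number per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 10**15, "petabytes")  # 1.0 petabytes
```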
And yes, an AI might be more efficient than the brain, but if it’s being created by evolution then I don’t think it especially likely that it will be more efficient than brains created by evolution.