To be more precise, it was 40 minutes to simulate 1 second of activity in 1% of the neocortex.
Using Moore's law we can postulate that it takes about 17 years to increase computational power a thousandfold and 34 years to increase it a millionfold. That should give you more intuition of what 1% actually means: within a couple of decades it would take 4 minutes to simulate 1 second of an entire neocortex (not the entire brain), since 40 minutes × 100 (scaling from 1% to the whole neocortex) ÷ 1,000 (the speedup) = 4 minutes.
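To make the arithmetic explicit, here's a back-of-the-envelope sketch (the ~1.7-year doubling time is my assumption; it's simply what "1000× in 17 years" works out to):

```python
# Back-of-the-envelope projection of the simulation cost under Moore's law.
# Assumption: a ~1.7-year doubling time, which is what "1000x in 17 years"
# implies (2**10 ~= 1024).
MINUTES_PER_SIM_SECOND = 40.0    # today: 1 simulated second of 1% of the neocortex
SCALE_TO_FULL_NEOCORTEX = 100    # 1% -> 100%
DOUBLING_TIME_YEARS = 1.7

for years in (0, 17, 34):
    speedup = 2 ** (years / DOUBLING_TIME_YEARS)
    minutes = MINUTES_PER_SIM_SECOND * SCALE_TO_FULL_NEOCORTEX / speedup
    print(f"after {years:2d} years: {minutes:.4f} minutes per simulated second")

# after  0 years: 4000 minutes (~2.8 days) per simulated second
# after 17 years: ~4 minutes per simulated second
# after 34 years: ~0.004 minutes (~0.23 s), i.e. faster than real time
```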
That doesn’t sound too impressive either, but bear in mind that human brain ≠ strong AI. We are talking here about a physics model of the human brain, not the software architecture of an actual AI. We could make it a million times more efficient if we trim the fat and keep the essence.
Our brains aren’t the ultimate authority on intelligence. Computers are already much better at arithmetic, memory, and data transmission.
This isn’t considered intelligence by itself, but it amplifies the abilities of any AI at a much larger scale. For instance, Watson isn’t all that smart, because it had to read all of Wikipedia and a lot of other sources before it could beat people at Jeopardy. But… it did read all of Wikipedia, which is something no human has ever done.
You are extrapolating Moore’s law out almost as far as it’s been in existence!
It’s nice to think that, but no one understands the brain well enough to make claims like that yet.
Yeah.
Transistor densities can’t increase much further due to fundamental physical limits. The chip makers all predict that they won’t be able to continue at the same rate (and they have been predicting that for ages).
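For a rough sense of how little headroom is left, here's a ballpark calculation (both the ~10 nm physical feature size and the ~0.2 nm silicon atom spacing are my assumptions; marketing "node" names are smaller than the real dimensions):

```python
import math

# Ballpark sense of the remaining headroom. Assumptions: a ~10 nm physical
# feature and ~0.2 nm spacing between silicon atoms (both rough figures).
feature_nm = 10.0
atom_spacing_nm = 0.2

print(f"atoms across one feature: ~{feature_nm / atom_spacing_nm:.0f}")   # ~50

# Each density doubling shrinks linear features by ~sqrt(2); count the
# doublings left before features are only ~1 nm (a few atoms) wide.
doublings_left = math.log(feature_nm, math.sqrt(2))
print(f"density doublings left: ~{doublings_left:.0f}")                   # ~7
```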
Interestingly, the feature sizes are now roughly the same order of magnitude for brains and chips (don’t compare against neuron sizes, by the way; a neuron does far, far more than a transistor).
What we can do is build chips in multiple layers, but because fabricating each layer is the bottleneck, not the physical volume, that won’t help a whole lot with costs. Transistors are also faster than neurons, but they produce more heat, and efficiency-wise they’re not much ahead (if at all).
Bottom line: even without the simulation penalty, the hardware is way off.
In the near term, we can probably hack together some smaller neural network (or an analogous graph-based thing) hard-wired to interface with some language libraries, and have it fool people into superficially believing it’s not a complete idiot. It could also be genuinely useful when connected to something like Mathematica.
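A minimal sketch of that idea, with a trivial keyword router standing in for the neural network and sympy standing in for Mathematica (route_query and the canned replies are made up for illustration):

```python
import sympy

# Minimal sketch of the "shallow language front-end + symbolic back-end"
# idea. A trivial keyword router stands in for the neural network and
# sympy stands in for Mathematica; everything here is illustrative.
CHITCHAT = {
    "hello": "Hi there!",
    "how are you": "Can't complain. Mostly because I can't feel.",
}

def route_query(text: str) -> str:
    """Answer small talk from canned templates; punt anything math-shaped
    to the symbolic engine."""
    lowered = text.lower().strip("?!. ")
    if lowered in CHITCHAT:
        return CHITCHAT[lowered]
    try:
        return str(sympy.simplify(sympy.sympify(text)))
    except (sympy.SympifyError, TypeError):
        return "No idea. Ask me something with an equals sign in it."

print(route_query("hello"))               # Hi there!
print(route_query("(x**2 - 1)/(x - 1)"))  # x + 1
```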
But walking around in the world and figuring out that a stone can be chipped to be sharper, or that it can be attached to a stick: the action space where such inventions lie is utterly enormous. Keep in mind that we humans are not merely intelligent; we were intelligent enough to overcome the starting hurdle while terribly inbred, full of parasites, and constantly losing knowledge. (Picking a good action out of an enormous action space is the kind of thing that requires a lot of computational power; a toy calculation follows below.) A far simpler intelligence could do great things as part of human society, where many of the existing problems have already had their solution space trimmed to a much more manageable size.
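To put a toy number on that, assume an agent choosing from 50 primitive actions per step (a made-up figure) that must chain 10 of them to get from "rock on the ground" to "sharpened stone on a stick":

```python
# Toy combinatorics: an agent picking from 50 primitive actions per step
# (a made-up number) that must chain 10 of them to get from "rock on the
# ground" to "sharpened stone lashed to a stick".
actions_per_step = 50
sequence_length = 10

sequences = actions_per_step ** sequence_length
print(f"{sequences:.2e} possible 10-step plans")   # 9.77e+16

# Human culture prunes the space: if prior knowledge rules out 90% of the
# candidate actions at every step, the search shrinks by a factor of 10**10.
pruned = (actions_per_step // 10) ** sequence_length
print(f"{pruned:.2e} plans after pruning")         # 9.77e+06
```

The exact numbers are arbitrary; the point is that pruning the search space, which human society already does for us, is worth many orders of magnitude of raw compute.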
No one understands the brain well enough to actually do it, but I’d be astonished if this simulation weren’t doing a lot of redundant, unnecessary computations.