You are extrapolating Moore’s law out almost as far as it’s been in existence!
It’s nice to think that, but no one understands the brain well enough to make claims like that yet.
Yeah.
Transistor densities can't increase much further due to fundamental physical limits. The chip makers all predict that they will not be able to continue at the same rate (and have been predicting that for ages).
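To put rough numbers on that (a back-of-envelope sketch; the 5 nm starting node, 0.5 nm atomic floor, and two-year doubling cadence are my own illustrative assumptions, not anyone's roadmap):

    import math

    # Back-of-envelope: how many density doublings are left before
    # feature sizes hit atomic scale? All numbers are assumptions.
    current_feature_nm = 5.0     # assumed current process feature size
    atomic_limit_nm = 0.5        # roughly a few silicon lattice constants
    doubling_period_years = 2.0  # classic Moore's-law cadence

    # Halving the feature size quadruples density (area scales as size^2),
    # so each linear halving is worth two density doublings.
    linear_halvings = math.log2(current_feature_nm / atomic_limit_nm)
    density_doublings = 2 * linear_halvings

    print(f"linear halvings left:   {linear_halvings:.1f}")    # ~3.3
    print(f"density doublings left: {density_doublings:.1f}")  # ~6.6
    print(f"years at that pace:     {density_doublings * doubling_period_years:.0f}")  # ~13

A handful of remaining doublings is nothing next to the many orders of magnitude the extrapolation above would need.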
Interestingly, feature sizes are now roughly the same order of magnitude for brains and chips (don't compare neuron sizes directly, by the way; a neuron does far, far more than a transistor).
What we can do is build chips in multiple layers, but since making a layer is the bottleneck, not the physical volume, that won't help much with costs. Transistors are also faster than neurons, but they produce more heat, and efficiency-wise they're not much ahead (if at all).
Bottom line: even without the simulation penalty, it's a long way off.
In the near term, we can probably hack together some smaller neural network (or a homologous graph-based thing) hard-wired to interface with some language libraries, and have it fool people into superficially believing it's not a complete idiot. It could also be very useful when connected to something like Mathematica.
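As a toy sketch of what I mean (completely made up: a hand-weighted bag-of-words "network" that routes questions either to canned small talk or to SymPy, which I'm using here as a stand-in for Mathematica):

    # Everything here -- vocabulary, weights, routing rule -- is invented
    # for illustration; a real system would learn the router.
    import numpy as np
    import sympy

    VOCAB = ["solve", "integrate", "derivative", "hello", "weather", "x"]

    def featurize(text: str) -> np.ndarray:
        words = text.lower().split()
        return np.array([float(w in words) for w in VOCAB])

    # Hand-picked weights: math-ish words push toward the symbolic engine.
    weights = np.array([2.0, 2.0, 2.0, -2.0, -2.0, 1.0])
    bias = -1.0

    def route(question: str, expr: str) -> str:
        score = weights @ featurize(question) + bias
        if score > 0:  # looks like math: hand off to the symbolic engine
            x = sympy.symbols("x")
            return str(sympy.solve(sympy.sympify(expr), x))
        return "Sounds nice!"  # canned small talk

    print(route("solve for x", "x**2 - 4"))        # -> [-2, 2]
    print(route("hello how is the weather", "x"))  # -> Sounds nice!

The point isn't the classifier, which is trivial; it's that the symbolic engine does the heavy lifting while the network only learns when to hand off.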
But walking around in the world and figuring out that a stone can be chipped to be sharper, or that it can be attached to a stick: the action space where such inventions lie is utterly enormous. Keep in mind that we humans are not merely intelligent; we were intelligent enough to overcome the starting hurdle while terribly inbred, full of parasites, and constantly losing knowledge. (Picking a good action out of an enormous action space is the kind of thing that requires a lot of computational power.) A far simpler intelligence could still do great things as part of human society, where many of the existing problems have already had their solution spaces trimmed to a much more manageable size, as the toy numbers below illustrate.
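Toy arithmetic for why that trimming matters (the branching factors and horizon are purely illustrative):

    # Action sequences grow exponentially with the planning horizon,
    # so blind search is hopeless while a pre-trimmed space is tractable.
    raw_actions_per_step = 1000    # "hit the stone against anything, anywhere"
    trimmed_actions_per_step = 10  # options already narrowed by human society
    horizon = 5                    # steps in a plan (e.g., making a hafted axe)

    raw = raw_actions_per_step ** horizon
    trimmed = trimmed_actions_per_step ** horizon
    print(f"raw sequences:     {raw:.1e}")       # 1.0e+15
    print(f"trimmed sequences: {trimmed:.1e}")   # 1.0e+05
    print(f"reduction factor:  {raw // trimmed:.1e}")  # 1.0e+10

Because the growth is exponential in the horizon, even a modest cut to the per-step branching factor buys many orders of magnitude.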
No one understands the brain well enough to actually do it, but I’d be astonished if this simulation weren’t doing a lot of redundant, unnecessary computations.