Memristors are not at all essential for building an artificial cortex. If they work out as expected, they will give perhaps one order of magnitude performance improvement over using just transistors/capacitors.
The functionality of circuits is not a function of the substrate (the building blocks), it’s a function of the circuit organization itself.
This is exactly my point. If memristors are cool but nonessential, why are they mentioned so prominently? You made it seem like they were more important than they are.
A bandwidth breakthrough such as optical interconnects is more important for the massive million-fold speed advantage.
How confident are we that this is close? What if there isn’t physically enough room to connect everything using the known methods?
Vaguely, kind of—the Blue Brain project is an early precursor to building an artificial cortex. However, it is far too detailed and too slow; it’s not a practical approach, not even close. It’s a learning project.
Well yes, obviously not in its current state. It might not be too detailed though; we don’t know how much detail is necessary.
How confident are we that this is close? What if there isn’t physically enough room to connect everything using the known methods?
Using state of the art interconnect available today you’d probably be limited to something much more modest like 100-1000x max speedup. Of course I find it highly likely that interconnect will continue to improve.
It might not be too detailed though; we don’t know how much detail is necessary.
It’s rather obviously too detailed from the perspective of functionality. Blue Brain is equivalent to simulating a current CPU at the molecular level. We don’t want to do that, we just want to rebuild the CPU’s algorithms in a new equivalent circuit. Massive difference.
Using state of the art interconnect available today you’d probably be limited to something much more modest like 100-1000x max speedup. Of course I find it highly likely that interconnect will continue to improve.
That’s really interesting! Is there a prototype of this?
It’s rather obviously too detailed from the perspective of functionality. Blue Brain is equivalent to simulating a current CPU at the molecular level. We don’t want to do that, we just want to rebuild the CPU’s algorithms in a new equivalent circuit. Massive difference.
There is a difference. We know that the molecular-scale workings of a CPU don’t matter because it was designed by humans who wouldn’t be able to get the thing to work if they needed molecular precision. Evolution faces very different requirements. Intuitively, I think it is likely that some things can be optimized out, but I think it is very easy to overestimate here.
Using state of the art interconnect available today you’d probably be limited to something much more modest like 100-1000x max speedup…
That’s really interesting! Is there a prototype of this?
A prototype of what? The cortex has roughly 20 billion neurons organized into perhaps a million columns. The connections follow the typical inverse power law with distance, and most of the connectivity is fairly local. Assuming about 5% of the connections are long-distance inter-regional, an average firing rate of about one spike per second, and efficient encoding, you get on the order of one GB/s of aggregate inter-regional bandwidth. This isn’t that much. It looks like a ludicrous amount of wiring when you open up the brain—all the white matter—but each connection is very slow.
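That back-of-envelope estimate can be sketched numerically. The bytes-per-spike figure below is an assumption standing in for "efficient encoding"; the neuron count, long-distance fraction, and firing rate are the ones stated above:

```python
# Back-of-envelope estimate of aggregate inter-regional cortical bandwidth.
neurons = 20e9                 # ~20 billion cortical neurons
long_distance_fraction = 0.05  # ~5% of connections are inter-regional
firing_rate_hz = 1.0           # average ~1 spike per second per neuron
bytes_per_spike = 1            # assumed: efficient (compressed/delta) spike encoding

spikes_per_s = neurons * firing_rate_hz * long_distance_fraction  # 1e9 spikes/s
bandwidth_gb_s = spikes_per_s * bytes_per_spike / 1e9
print(f"aggregate inter-regional bandwidth ~ {bandwidth_gb_s:.0f} GB/s")
```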
So this isn’t a limiting factor for real-time simulation. The bigger limiting factor there is the memory bottleneck of just getting massive quantities of synaptic data into each GPU’s local memory.
But if memristors or other techniques surmount that principal memory bandwidth limitation, the interconnect eventually becomes a limitation. A 1000x speedup would equate to 1 TB/s of aggregate interconnect bandwidth. This is still reasonable for a few hundred computers connected via the fastest current point-to-point links such as 100 Gb Ethernet (roughly 10 GB/s each × 100 node-to-node edges).
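Scaling the ~1 GB/s real-time figure by the speedup gives the interconnect budget, and dividing by the per-link rate gives the number of point-to-point edges needed; both numbers are the ones stated above:

```python
# Interconnect budget for a 1000x real-time cortex simulation.
realtime_bw_gb_s = 1.0   # ~1 GB/s aggregate inter-regional bandwidth at real time
speedup = 1000

required_tb_s = realtime_bw_gb_s * speedup / 1000   # aggregate, in TB/s
link_gb_s = 10                                      # ~10 GB/s per 100 Gb Ethernet link
links_needed = required_tb_s * 1000 / link_gb_s     # node-to-node edges

print(f"{required_tb_s:.0f} TB/s over ~{links_needed:.0f} links")
```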
It’s rather obviously too detailed from the perspective of functionality. Blue Brain is equivalent to simulating a current CPU at the molecular level. We don’t want to do that, we just want to rebuild the CPU’s algorithms in a new equivalent circuit. Massive difference.
We know that the molecular-scale workings of a CPU don’t matter because it was designed by humans who wouldn’t be able to get the thing to work if they needed molecular precision.
Intuitively, I think it is likely that some things can be optimized out, but I think it is very easy to overestimate here.
If you are reverse engineering a circuit, you may initially simulate it at a really low level, perhaps even the molecular level, to get a good understanding of how its logic family and low-level dynamics work. But the only point of that is to figure out what the circuit is doing. Once you figure that out, you can apply those principles to build something similar.
I’d say at this point we know what the cortex does at the abstract level: hierarchical Bayesian inference. We even know the specific types of computations it uses to approximate this—for example, see the work of Poggio’s CBCL group at MIT. They have built computational models of the visual cortex that are closing in on completeness in terms of the main computations the canonical cortical circuit can perform.
So we do know the principal functionality and underlying math of the cortex now. The network-level organization above that is still less understood, but understanding the base level allows us to estimate the computational demands of creating a full cortex, and it’s basically just what you’d expect (very roughly one low-precision MAD per synapse weight per update).
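As a rough sketch of that demand estimate: the synapse count per neuron and the update rate below are illustrative assumptions (the text only fixes the neuron count and the ~1 MAD per synapse per update figure):

```python
# Rough compute estimate for a full cortex at ~1 low-precision MAD
# (multiply-add) per synapse weight per update.
neurons = 20e9
synapses_per_neuron = 5_000   # assumed order-of-magnitude average
update_hz = 100               # assumed network update rate
mads_per_synapse_update = 1

total_synapses = neurons * synapses_per_neuron                   # ~1e14
mads_per_s = total_synapses * update_hz * mads_per_synapse_update
print(f"~{mads_per_s:.0e} low-precision MAD/s")
```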
A computer that uses the most advanced interconnects available today to be more parallel than normal computers.
If you are reverse engineering a circuit, you may initially simulate it at a really low level, perhaps even the molecular level, to get a good understanding of how its logic family and low-level dynamics work. But the only point of that is to figure out what the circuit is doing. Once you figure that out, you can apply those principles to build something similar.
The only reason this works is because humans built circuits. If their behaviour was too complex, we would not be able to design them to do what we want. A neuron can use arbitrarily complex calculations, because evolution’s only requirement is that it works.
The only reason this works is because humans built circuits. If their behaviour was too complex, we would not be able to design them to do what we want.
Quite so, but…
A neuron can use arbitrarily complex calculations, because evolution’s only requirement is that it works.
Ultimately this is all we care about as well.
We do simulate circuits at the lowest level now to understand functionality before we try to build them, and as our simulation capacity expands we will be able to handle increasingly complex designs and move into the space of analog circuits. Digital ASICs for AGI would probably come well before that, of course.
Really it’s a question of funding. Our current designs have tens of billions of dollars of industry momentum supporting them.
A neuron can use arbitrarily complex calculations, because evolution’s only requirement is that it works.
Ultimately this is all we care about as well.
No, we have another requirement: the state of the system must separate into relevant and irrelevant variables, so that we can speed up the process by relying only on the relevant variables. Nature does not need to work this way. It might, but we only have experience with human-made computers, so we cannot be sure how much of the information can be disregarded.