Depends on what organizational principles you are talking about. At the very generic level, the brain’s extremely distributed architecture is already the direction computers are steadily moving towards out of necessity (with supercomputers farther ahead).
As for memristors, they will probably have many generic uses. They just also happen to be very powerful for cortex-like AGI applications.
The cortical model for an analog AGI circuit would be incredibly fast but it would be specific for AGI applications (which of course is still quite broad in scope). For regular computation you’d still use digital programmable chips.
Have the distributed architecture trends and memristor applications followed the rough path you expected when you wrote this 12 years ago? Is this or this the sort of thing you were gesturing at? Do you have other links or keywords I could search for?
The distributed architecture prediction (with supercomputers farther ahead) was correct: Nvidia grew from a niche gaming company to eclipse Intel and is on a road to stock market dominance, all because it puts what used to be a parallel supercomputer onto a single chip.
Neuromorphic computing in various forms is slowly making progress: there is IBM's TrueNorth research chip, for example, and a few others. Memristors were overhyped and crashed, but they are still being researched and may yet pan out.
So instead we got big GPU clusters, which for the reasons explained in the article can't run large brain-like RNNs at high speeds, but can run smaller transformer models (which sacrifice recurrence and thus aren't as universal, but are still pretty general) at very high speeds (perhaps 10000x), and that is what gave us GPT-4. The other main limitation of transformers vs brain-like RNNs is that GPUs only massively accelerate transformer training, not inference. Some combination of those two limitations seems to be the main blocker for AGI at the current training compute regime, but that probably won't last long.
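To make the parallelism point concrete, here is a toy numpy sketch (my own illustration under simplified assumptions, not code from the article): an RNN's hidden state forces a serial loop over timesteps, while causal self-attention computes every training position from the inputs in one batched matmul, and autoregressive generation falls back to one token at a time.

```python
# Illustrative toy sketch: why transformer training parallelizes over the
# sequence while RNN computation (and transformer generation) does not.
# Shapes and names are hypothetical toy choices.
import numpy as np

T, d = 8, 16                      # sequence length, hidden width
x = np.random.randn(T, d)         # toy input sequence

# RNN: the hidden state at step t depends on step t-1, so the T steps form
# a serial chain that a GPU cannot run concurrently.
W_h, W_x = np.random.randn(d, d), np.random.randn(d, d)
h = np.zeros(d)
rnn_states = []
for t in range(T):                # unavoidable sequential loop
    h = np.tanh(h @ W_h + x[t] @ W_x)
    rnn_states.append(h)

# Transformer-style causal self-attention: every position is computed from
# the inputs alone, so all T positions go through one batched matmul, which
# is exactly the workload GPUs accelerate during training.
W_q, W_k, W_v = (np.random.randn(d, d) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)
mask = np.triu(np.ones((T, T)), k=1).astype(bool)   # causal mask
scores[mask] = -np.inf
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ V                    # all T outputs computed at once

# Autoregressive generation, however, is again one token at a time: each new
# token's attention needs the previously generated tokens, so inference loses
# most of that parallelism.
```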
This story did largely get one aspect of AGI correct, and for the right reasons: that its early large economic advantage would be in text generation and related fields, and that perhaps the greatest early risk is via human influence.