I still think FOOM is underspecified. One of the few attempts to say what it means is here. This still seems terribly unsatisfactory.
The phrase “at some point in the development of Artificial Intelligence” is vague and unfalsifiable. The phrase “capable of delivering in short time periods technological advancements that would take humans decades” describes a capability, not actual progress. Sure, some future intelligent agent will likely be very smart, and capable of doing a lot of development quickly, even if deprived of its tech tools. But that doesn’t really seem to say anything terribly interesting or controversial.
More specific claims seem desirable.
At the moment, some metrics (such as serial performance) are showing signs of slowing down. That should give pause to those who foresee an exponential climb into the cloud.
There is no reason we cannot massively parallelize algorithms on silicon; it just requires more advanced computer science than most people use. Brains have a direct-connect topology, while silicon uses a switch-fabric topology. An algorithm that parallelizes on the former may look nothing like one that parallelizes on the latter. Most computer science people never learn how to do parallelism on a switch fabric, and it is rarely taught.
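To make the topology point concrete, here is a minimal sketch (my own illustration, not anything from a specific system) of the same global sum organized two ways: a neighbour-only ring reduction, where values can only hop one physical link per step, versus recursive doubling, a classic pattern on switch fabrics where any pair of nodes can exchange a message in a step. The communication structure, and hence the algorithm, looks completely different.

```python
# Sketch: step counts for a global sum under two network topologies.

def ring_sum_steps(n):
    """Direct-connect style: values travel one neighbour link per step,
    so the farthest value needs n-1 hops to reach node 0."""
    return n - 1

def recursive_doubling_steps(n):
    """Switch-fabric style: each step, every node exchanges with a
    partner twice as far away, so the span doubles each step."""
    steps, span = 0, 1
    while span < n:
        span *= 2
        steps += 1
    return steps  # ceil(log2(n))

for n in (8, 1024):
    print(n, "nodes: ring =", ring_sum_steps(n),
          "steps, recursive doubling =", recursive_doubling_steps(n), "steps")
```

The gap (linear versus logarithmic steps) is exactly why an algorithm tuned for one topology can be hopeless on the other.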
Tangentially, this is why whole brain emulation on silicon is a poor way of doing things. While you can map the wetware, the algorithm implemented in the wetware probably won’t parallelize on silicon due to the fundamental topological differences.
While computer science has focused almost solely on algorithms that require a directly connected network topology to scale, there are a few organizations that know how to implement parallelism on switch fabrics in general. Most people mistake their own ignorance for a fundamental limitation; what parallelism on a switch fabric actually requires is a computational model that takes the topology into account.
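One published example of such a model is Valiant’s Bulk Synchronous Parallel (BSP) model, which charges for the network instead of assuming communication is free. A rough sketch, with made-up parameter values purely for illustration:

```python
# Sketch of a BSP-style superstep cost: w + g*h + l, where
#   w = max local computation on any processor,
#   h = max messages sent or received by any processor,
#   g = the fabric's per-message cost,
#   l = barrier/synchronization latency.
# The numbers below are invented for illustration.

def superstep_cost(w, h, g, l):
    return w + g * h + l

g, l = 4, 100  # hypothetical fabric parameters
print(superstep_cost(10_000, 10, g, l))   # compute-heavy step: local work dominates
print(superstep_cost(100, 5_000, g, l))   # communication-heavy step: the fabric dominates
```

Under a model like this, whether an algorithm scales depends on how its message pattern interacts with g and l, which is precisely the topology awareness the paragraph above is pointing at.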
However, that does not address the issue of “foom”. There are other, topology-invariant reasons to believe it is not realistic on any kind of conventional computing substrate, even if everyone were using massively parallel switch-fabric algorithms.