From the conclusion of that section:

Many modern AI tasks, although narrow, are comparable to narrow capacities of neural systems in the human brain. Given an empirical value for the fraction of computational resources required to perform that task with humanlike throughput on a 1 PFLOP/s machine, and an inherently uncertain and ambiguous—yet bounded—estimate of the fraction of brain resources required to perform “the equivalent” of that machine task, we can estimate the ratio of PFLOP/s machine capacity to brain capacity. What are in the author’s judgment plausible estimates for each task are consistent in suggesting that this ratio is ~10 or more. Machine learning and human learning differ in their relationship to costs, but even large machine learning costs can be amortized over an indefinitely large number of task-performing systems and application events.
In light of these considerations, we should expect that substantially superhuman computational capacity will accompany the eventual emergence of software with broad functional competencies. On present evidence, scenarios that assume otherwise seem unlikely.
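As a minimal sketch of the arithmetic the passage describes (the example inputs below are hypothetical, chosen only to illustrate the method behind the “~10 or more” claim):

```python
# Sketch of the capacity-ratio estimate described in the quoted passage.
# The function mirrors the stated method; the example inputs are assumptions.

MACHINE_FLOPS = 1e15  # the 1 PFLOP/s reference machine

def capacity_ratio(task_flops, brain_fraction):
    """Ratio of machine capacity to brain capacity implied by one task.

    task_flops:     FLOP/s the task empirically needs on the machine
    brain_fraction: estimated fraction of brain resources the
                    "equivalent" human task consumes
    """
    machine_fraction = task_flops / MACHINE_FLOPS
    return brain_fraction / machine_fraction

# Hypothetical task: 1e12 FLOP/s on the machine (1e-3 of its capacity),
# estimated to occupy ~1e-2 of the brain.
print(capacity_ratio(task_flops=1e12, brain_fraction=1e-2))  # ~10
```

The claim, then, is that plausible (task_flops, brain_fraction) pairs for real tasks consistently give ratios of roughly this size or larger.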
I’m not completely sure I’m understanding the first paragraph correctly.
With the bit about “this ratio is ~10 or more,” it sounds like he’s saying, roughly: “When we use modern AI systems to complete tasks that humans also do, it appears to take 10+ PFLOP/s per human brain.”
(Or, since you’re not using your whole brain for a given task, maybe a better translation is: “If a task uses 10% of your brain, then a modern AI system will need to use 1+ PFLOP/s to achieve human-level performance.”)
Does that match other readers’ interpretations?
Late to the party, but I’m pretty confident he’s saying the opposite: that a 1 PFLOP/s system is likely to have 10 or more times the computational capacity of the human brain, which is rather terrifying.
He gives the example of Baidu’s Deep Speech 2, which requires around 1 GFLOP/s to run and produces human-comparable results. That is a factor of 10^6 below the 1 PFLOP/s machine’s capacity. He estimates that the equivalent process in humans takes around 10^-3 of the brain’s resources, which yields the estimate that a 1 PFLOP/s system has 10^3 times the capacity of the brain. His other examples give similar results.
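A quick check of that arithmetic, using the figures from this comment (1 GFLOP/s to run Deep Speech 2, ~10^-3 of the brain for speech recognition, and the 1 PFLOP/s reference machine):

```python
MACHINE_FLOPS = 1e15   # 1 PFLOP/s reference machine
TASK_FLOPS = 1e9       # ~1 GFLOP/s to run Deep Speech 2, per the comment
BRAIN_FRACTION = 1e-3  # estimated fraction of the brain doing speech recognition

speedup_over_task = MACHINE_FLOPS / TASK_FLOPS  # machine is 1e6x the task's need
ratio = BRAIN_FRACTION * speedup_over_task      # 1e-3 * 1e6 = 1e3
print(ratio)  # 1000.0 -> the machine has ~10^3 times the brain's capacity
```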
Yes, though I’m fairly sure he’s talking about using trained neural networks to e.g. classify an image, which is known to be fairly cheap, rather than training them. In other words, he’s talking about using an AI service rather than creating one.
He also says that “Machine learning and human learning differ in their relationship to costs,” which is also evidence for my interpretation: training is expensive; running inference on a single example is very cheap.
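To make the amortization point concrete, here is a toy calculation; every number in it is a made-up assumption, not something from the text:

```python
# Toy amortization sketch: a large one-time training cost spread over
# many inference events. All numbers here are hypothetical.
TRAINING_FLOP = 1e21   # assumed one-time training cost
INFERENCE_FLOP = 1e9   # assumed cost to process a single example

for queries in (1e6, 1e9, 1e12):
    overhead = (TRAINING_FLOP / queries) / INFERENCE_FLOP
    print(f"{queries:.0e} queries -> training overhead = {overhead:.0e}x inference")
# 1e+06 queries -> training overhead = 1e+06x inference
# 1e+09 queries -> training overhead = 1e+03x inference
# 1e+12 queries -> training overhead = 1e+00x inference
```

At a large enough query volume, the one-time training cost adds only about one inference’s worth of compute per query, which is why it can be amortized away in the capacity comparison.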