Huang's law is marketing, but Moore's law is also marketing. However, the fact that the DGX-2 outperforms the previous DGX-1 by 10 times in performance and 4 times in cost efficiency on deep learning applications, after only 1 year, implies that there is some real substance here.
The GPU performance graph you posted is from 2013 and is probably obsolete; in any case, I don't claim that GPUs are better than CPUs in terms of FLOPS. My point is that GPUs, and later TPUs (which didn't exist in 2013), are more capable at much simpler operations on large matrices, which, however, is exactly what we need for AI.
The AI Impacts article probably suffers from the same problem: it still counts FLOPS, but we need TOPS to estimate actual performance on neural-net tasks.
Update: for example, NVIDIA's top cards in 2008 delivered around 400 gigaflops, while in 2018 they reached 110,592 gigaflops in tensor operations (wiki), which implies roughly 275x growth in 10 years, not "an order of magnitude about every 10-16 years". (This may not square with the previous claim of 4-10x growth, but that claim applies not to GPUs but to TPUs, that is, ASICs specialised for neural-net calculations, which appeared only 2-3 years ago; the field is growing very quickly, most visibly in the form of Google's TPU.)
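A quick sanity check of this arithmetic (a minimal sketch in Python; the 2008 and 2018 figures are simply the ones quoted above, taken at face value and not independently verified):

```python
import math

# Figures as quoted above (taken at face value, not independently verified).
gflops_2008 = 400        # claimed top-card performance in 2008, GFLOPS
gflops_2018 = 110_592    # claimed 2018 figure for tensor operations, GFLOPS
years = 10

growth = gflops_2018 / gflops_2008             # ~276x overall
years_per_10x = years / math.log10(growth)     # ~4.1 years per order of magnitude
years_per_2x = years / math.log2(growth)       # ~1.2 years per doubling

print(f"{growth:.0f}x in {years} years; one order of magnitude every "
      f"{years_per_10x:.1f} years (doubling every {years_per_2x:.1f} years)")
```

On these numbers, that is roughly one order of magnitude every 4 years, which is the point of contrast with the 10-16 year figure.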
Yes, both are marketing and the reality is that GPUs are improving significantly slower than Moore’s law.
The time period I looked at in the graph is 2008-2016.
I don't see how TOPS are relevant if GPUs still have the best price performance. I expect FLOPS and TOPS to scale linearly with each other for GPUs, i.e. if FLOPS increases by 2x in some time period then TOPS will too. (I could be wrong, but this is the null hypothesis.)
Regarding DGX-1 and DGX-2: You can't extrapolate a medium-term trend from 2 data points 1 year apart like that; that's completely absurd. Because the DGX-2 has only 2 times the FLOPS of the DGX-1 (while being 3x as expensive), I assume the 10x improvement is due to a discontinuous improvement in tensorization (similar to TPUs) that can't be extrapolated.
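To make the decomposition explicit, a minimal sketch using only the ratios quoted above (assumed, not independently verified):

```python
# Ratios as quoted above for the DGX-1 -> DGX-2 comparison.
flops_ratio = 2.0     # DGX-2 has ~2x the raw FLOPS of DGX-1
price_ratio = 3.0     # ...at ~3x the price
app_speedup = 10.0    # claimed deep-learning speedup

# Raw-FLOPS price performance actually got worse, not better:
flops_per_dollar_ratio = flops_ratio / price_ratio   # ~0.67x

# So most of the claimed speedup would be a one-off architectural factor
# (tensorization etc.), not a trend one can extrapolate:
one_off_factor = app_speedup / flops_ratio           # ~5x

print(f"raw price performance: {flops_per_dollar_ratio:.2f}x; "
      f"implied one-off factor: {one_off_factor:.0f}x")
```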
The GTX 280 (a 2008 card) is 620 GFLOPS, not 400. It cost $650 on release, and the card to compare it to (the 2017 Titan V; I think you meant the 2018 one, but it doesn't change things significantly) costs $3000. The difference in price performance is 110,000/620 × 650/3000 ≈ 38x over 9 years, slower than Moore's law. We are talking about price performance, not absolute performance, since that is what this thread is about (economic/material constraints on the growth of compute).
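Spelled out (a sketch using the quoted figures; reading "Moore's law" as doubling every 18 months is my assumption, one common interpretation):

```python
import math

# Quoted figures (not independently verified).
gflops_280, price_280 = 620, 650            # GTX 280, 2008
gflops_titan, price_titan = 110_000, 3000   # Titan V tensor GFLOPS, 2017
years = 9

improvement = (gflops_titan / price_titan) / (gflops_280 / price_280)  # ~38x
doubling_time = years / math.log2(improvement)                         # ~1.7 years
moore_18mo = 2 ** (years / 1.5)                                        # 64x benchmark

print(f"~{improvement:.0f}x price performance over {years} years "
      f"(doubling every {doubling_time:.1f} years); an 18-month-doubling "
      f"Moore's law would give {moore_18mo:.0f}x")
```

On the 18-month-doubling reading, Moore's law predicts 64x over 9 years, so ~38x is indeed slower.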
The signal-to-noise ratio in the numbers in your comments is so low that I'm not trusting anything you're saying, and engaging further is probably not worth it.
[EDIT: fixed the GTX 280 vs Titan V calculation]
Thanks for participating in an interesting conversation, which helped me clarify my position.
As I now see it, the accelerated growth above the Moore's-law level started only around 2016, and it comes not from GPUs, which improved rather slowly, but from specialised hardware for neural nets, such as Tensor Cores, Google's TPU, and neuromorphic chips like TrueNorth and Akida. Neuromorphic chips could accelerate NNs more than Tensor Cores, but they have not yet hit the market.