I like your idea. Nonetheless, it’s pretty hard to make estimates of “total available compute capacity”. If you have any pointers, I’d love to see them.
Somewhat connected is the question of what share of this progress is due to improvements in computing hardware versus increased spending. To get more insight into this, we’re currently looking into computing-power trends, in particular the development of FLOPS/$ over time.
> Nonetheless, it’s pretty hard to make estimates of “total available compute capacity”. If you have any pointers, I’d love to see them.
I tend to fall on the side of “too many ideas”, not “too few ideas”. (The trick is sorting out which ones are actually worth the time...) A few metrics, all hilariously inaccurate:
“Total number of transistors ever produced”. (Note that this is necessarily monotonically non-decreasing.)
“Total number of transistors currently operational” (or an estimate thereof.)
“Integral over time of (FLOPs per transistor) × (rate of transistor production)”, i.e. ∫ f(t)·r(t) dt, where f is FLOPs/transistor and r is transistors produced per unit time.
One of the above, but with DRAM & Flash (or, in general, storage) removed from # of transistors produced.
One of the above, but using FLOPs/$ and total spending (or estimate of current total market assets) as a function of time instead of using FLOPs/transistor and # transistors as a function of time.
Total BOINC capacity as a function of time (of course, doesn’t go back all that far...)
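The integral-style metric above can be sketched numerically. Everything below uses made-up placeholder numbers, purely to show the shape of the calculation, not real data:

```python
# Sketch of the integral metric: cumulative compute capacity as the integral of
# (FLOPs per transistor) x (transistors produced per year).
# All numbers below are hypothetical placeholders, not real data.

years = [1990, 1995, 2000, 2005]                  # sample points (hypothetical)
flops_per_transistor = [0.01, 0.02, 0.04, 0.08]   # hypothetical f(t)
transistors_per_year = [1e15, 1e17, 1e19, 1e21]   # hypothetical r(t)

# Trapezoidal integration of f(t) * r(t) over the sampled years.
total_flops_capacity = 0.0
for i in range(len(years) - 1):
    dt = years[i + 1] - years[i]
    a = flops_per_transistor[i] * transistors_per_year[i]
    b = flops_per_transistor[i + 1] * transistors_per_year[i + 1]
    total_flops_capacity += 0.5 * (a + b) * dt

print(f"integrated capacity: {total_flops_capacity:.3g} FLOPs-equivalent")
```

With real production and efficiency curves substituted in, the same trapezoid sum would give a (crude) lower-bound-style estimate of total compute ever produced.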
That being said, estimates of global compute capacity over time do exist, see e.g. https://ijoc.org/index.php/ijoc/article/view/1562/742 and https://ijoc.org/index.php/ijoc/article/view/1563/741. These together show (as of 2012, unfortunately, with data only extending to 2007) that total MIPS on general-purpose computers grew from ~5×10^8 MIPS in 1986 to ~9×10^12 MIPS in 2007. (Fair warning: that’s MIPS, i.e. a somewhat flawed integer benchmark, not floating-point.) That works out to about a doubling every ~1.5 years or so.
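For what it’s worth, those two endpoints are enough to back out the implied doubling time:

```python
import math

# Back-of-envelope check of the doubling time implied by the estimates
# cited above: ~5e8 MIPS (1986) growing to ~9e12 MIPS (2007).
mips_1986, mips_2007 = 5e8, 9e12
years = 2007 - 1986                         # 21 years
doublings = math.log2(mips_2007 / mips_1986)
doubling_time = years / doublings

print(f"{doublings:.1f} doublings, one every {doubling_time:.2f} years")
```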
> Somewhat connected is the question of what share of this progress is due to improvements in computing hardware versus increased spending. To get more insight into this, we’re currently looking into computing-power trends, in particular the development of FLOPS/$ over time.
As long as we’re talking about extrapolations, be aware that I’ve seen rumblings that we’re now at a plateau: the latest generation of process nodes costs about the same per transistor as (or even more than) the previous generation. I don’t know how accurate those rumblings are, however. (This is “always” the case very early in a node’s life; the difference here is that it’s still the case even as these nodes enter volume production...)
A related metric that would be interesting: total theoretical fab output (wafers per unit time × transistors per wafer × fab lifetime), or better yet actual total fab output, divided by the cost of the fab. Cf. Rock’s law. Unfortunately, this is inherently a lagging indicator...
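A toy version of that fab metric, with purely illustrative numbers (none of these are real fab figures):

```python
# Sketch of the fab-output-per-dollar metric: theoretical lifetime output
# (wafers/month x transistors/wafer x lifetime) divided by fab cost.
# All figures below are illustrative assumptions, not real fab data.

wafers_per_month = 50_000         # hypothetical fab throughput
transistors_per_wafer = 5e13      # hypothetical (dies/wafer * transistors/die)
lifetime_months = 10 * 12         # assume a 10-year useful life
fab_cost_dollars = 15e9           # hypothetical fab construction cost

lifetime_output = wafers_per_month * transistors_per_wafer * lifetime_months
transistors_per_dollar = lifetime_output / fab_cost_dollars

print(f"~{transistors_per_dollar:.3g} transistors per fab dollar")
```

Tracking that ratio across fab generations would be one way to see whether Rock’s law (fab cost doubling every ~4 years) is outrunning the per-fab gains in output.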
co-author here
Thanks, appreciate the pointers!