There are differing degrees of bottlenecks. Many, if not most, of the large software projects I have worked on have been at least partially bottlenecked by compile time, which is the equivalent of the simulation and logic-verification steps in hardware design. If I thought and wrote code much faster, that would be a speedup, but only up to a saturation point where I am simply waiting on compile-test cycles.
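To make that saturation point concrete, here is a rough sketch (my own illustration, with assumed numbers) of how much a whole edit-compile-test cycle speeds up when only the thinking-and-writing portion gets faster:

    # Amdahl-style bound: how much faster one edit-compile-test cycle gets
    # when only the coding portion speeds up. All numbers are assumptions.
    compile_fraction = 0.4              # assumed share of a cycle spent compiling/testing
    code_fraction = 1.0 - compile_fraction

    def cycle_speedup(coding_speedup):
        # Overall speedup when coding runs `coding_speedup` times faster
        # and compile/test time stays fixed.
        return 1.0 / (code_fraction / coding_speedup + compile_fraction)

    for s in (2, 10, 100, 1_000_000):
        print(f"code {s}x faster -> cycle {cycle_speedup(s):.2f}x faster")
    # Saturates near 1 / compile_fraction = 2.5x, no matter how fast you think.

However fast the coding side gets, the cycle never improves past 1 / compile_fraction, which is the sense in which compile time is a partial bottleneck.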
If it takes a year of computer time to design and test the next year’s processor, that would explain the exponential nature of Moore’s law.
Yes. Keep in mind this is a moving target, and that is the key relation to Moore’s Law. A computer from 1980 would take months or years to compile Windows 8 or simulate a 2012 processor.
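A quick back-of-the-envelope check (my own numbers, not from the thread): assuming single-machine throughput doubled roughly every two years between 1980 and 2012, a job that takes a few hours today really does stretch into years on 1980 hardware.

    # Rough scaling check: how long would a modern multi-hour build or
    # simulation take on 1980 hardware? All figures are assumptions.
    years = 2012 - 1980                      # 32 years
    doubling_period = 2.0                    # assumed years per doubling of throughput
    speedup = 2 ** (years / doubling_period)     # ~65,000x

    modern_runtime_hours = 3.0               # assumed runtime of the job today
    hours_in_1980 = modern_runtime_hours * speedup
    print(f"~{speedup:,.0f}x slower -> about {hours_in_1980 / (24 * 365):.0f} years on a 1980 machine")

If the modern job takes minutes rather than hours, the same arithmetic gives months instead of years, which is roughly the claim above.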
The model only makes sense if “computer time” means single-threaded clock cycles.
I don’t understand how the number of threads matters. Compilers, simulators, and logic verifiers all made the parallel transition when they had to.
Moreover, he seems to say that improvements in parallelism have perfectly kept pace with the stalling of clock-speed increases, so that Moore’s law has continued smoothly. This seems like too much of a coincidence to believe.
Right, it’s not a coincidence; it’s a causal relation. Moore’s Law is not a law of nature; it’s a shared business plan of the industry. When clock speed started to run out of steam, chip designers started going parallel, and software developers followed suit. You have to understand that chip designs are planned many years in advance; this wasn’t an entirely unplanned, unanticipated event.
As for the details of what kind of simulation software Intel uses, I’m not sure. Jed’s last posts are also 4 years old at this point, so much has probably changed.
I do know that Nvidia uses big, expensive dedicated emulators from a company called Cadence (google “Cadence Nvidia”), and this really is a big deal for their hardware cycle.
Thus I reject Jed’s apparent claim that physics simulations are the bottleneck in Moore’s law.
Well, you seem to agree that they are some degree of bottleneck, so it may be good to narrow in on what level of bottleneck, or taboo the word.
If simulations could be parallelized, why didn’t they invest in parallelism 20 years ago?
It was unnecessary, because the fast, easy path (faster serial speed) was still bearing fruit.
(by “parallelism” I mean making their simulations parallel, running on clusters of computers) What does “unnecessary” mean? If physical simulations were the bottleneck and they could be made faster by parallelism, why didn’t they do it 20 years ago? They aren’t any easier to make parallel today than they were then. The obvious interpretation of “unnecessary” is that it was not necessary to use parallel simulations to keep up with Moore’s law, but that it was an option. If it was an option that would have helped then as it helps now, would it have allowed going beyond Moore’s law? You seem to be endorsing the self-fulfilling-prophecy explanation of Moore’s law, which implies no bottleneck.
(by “parallelism” I mean making their simulations parallel, running on clusters of computers)
Ahh, the usual term is “distributed” when referring to pure software parallelization across a cluster. I know little offhand about the history of simulation and verification software, but I’d guess that there was at least a modest investment in distributed simulation even a while ago.
The consideration is cost. Spending your IT budget on one big distributed computer is often wasteful compared to each employee having their own workstation.
They sped up their simulations just enough to minimize schedule risk (staying on Moore’s law) while minimizing cost. Spending a huge amount of money on a pile of machines and complex distributed simulation software just to speed up a partial bottleneck is simply not worthwhile. If the typical engineer spends, say, 30% of his time waiting on simulation software, that caps what you should spend to reduce that time.
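A crude version of that budget limit (my own sketch, reusing the hypothetical 30% figure): even a cluster that eliminated the wait entirely only buys back a bounded amount of engineer throughput, which caps what it is worth paying for.

    # Upper bound on what faster simulation is worth, given engineers spend
    # some fraction of their time waiting on it. All figures are hypothetical.
    wait_fraction = 0.30                    # assumed share of engineer time spent waiting
    max_throughput_gain = 1.0 / (1.0 - wait_fraction)   # ~1.43x if the wait vanishes

    engineers = 50                          # hypothetical team size
    cost_per_engineer = 200_000             # hypothetical fully loaded annual cost
    max_annual_value = engineers * cost_per_engineer * wait_fraction
    print(f"max throughput gain: {max_throughput_gain:.2f}x")
    print(f"upper bound on annual value of removing the wait: ${max_annual_value:,.0f}")

Anything you would spend on machines and distributed-simulation software has to clear a bar no higher than that.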
And of course the big consideration is that in a year or two Moore’s law will allow you to purchase new IT equipment that is twice as fast. Eventually you have to do that anyway, just to keep up.
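To put the refresh-cycle point in the same back-of-the-envelope terms (again with assumed numbers): if off-the-shelf hardware doubles in speed every couple of years, a fixed simulation workload shrinks on its own just by replacing machines on schedule.

    # How a fixed simulation workload shrinks under routine hardware refreshes.
    # Doubling period and runtime are assumptions for illustration.
    doubling_period_years = 2.0
    runtime_hours_today = 48.0              # hypothetical full-chip simulation run
    for year in range(0, 9, 2):
        speed = 2 ** (year / doubling_period_years)
        print(f"year {year}: ~{runtime_hours_today / speed:.0f} h per run on then-current hardware")

Which is why the cheapest way to “speed up the simulations” is often just to wait for the next generation of machines and buy those.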