Maybe you’re thinking of this comment and others in that thread by Jed Harris (aka).
Jed’s point #2 is more plausible, but you are talking about point #1, which I find unbelievable for reasons that were given before he answered it. If clock speed mattered, why didn’t the failure of exponential clock speed shut down the rest of Moore’s law? If computation but not clock speed mattered, then Intel should be able to get ahead of Moore’s law by investing in software parallelism. Jed seems to endorse that position, but says that parallelism is hard. But hard to exactly the extent needed to allow Moore’s law to continue? Why hasn’t Intel monopolized parallelism researchers? Anyhow, I think his final conclusion is the opposite of yours: he says that intelligence could lead to parallelism and to getting ahead of Moore’s law.
Yes, thanks. My model of Jed’s internal model of Moore’s law is similar to my own.
He said:
The short answer is that more computing power leads to more rapid progress. Probably the relationship is close to linear, and the multiplier is not small.
He then lists two examples. By ‘points’ I assume you are referring to his examples in the first comment you linked.
What exactly do you find unbelievable about his first example? He is claiming that the achievable speed of a chip is dependent on physical simulations, and thus on current computing power.
If clock speed mattered, why didn’t the failure of exponential clock speed shut down the rest of Moore’s law?
Computing power is not clock speed, and Moore’s Law is not directly about either clock speed or computing power.
Jed makes a number of points in his posts. In my comment on the earlier point 1 (in this thread), I was referring to one specific point Jed made: that each new hardware generation requires complex and lengthy simulation on the current hardware generation, regardless of the amount of ‘intelligence’ one throws at the problem.
There are two questions here: would computer simulations of the physics of new chips be a bottleneck for an AI trying to foom*? And are they a bottleneck that explains Moore’s law? If you just replace the humans by software, then the human time gets reduced with each cycle of Moore’s law, leaving the physics simulations, so for an AI the simulations probably are the bottleneck. But Intel has real-time people, so saying that it’s a bottleneck for Intel is a much stronger claim than saying it is a bottleneck for a foom.
First, foom: If each year of Moore’s law requires a solid month of computer time on state-of-the-art processors, then eliminating the humans speeds it up by a factor of 12. That’s not a “hard takeoff,” but it’s pretty fast.
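To make that arithmetic explicit, here is a minimal Python sketch; the 11-month/1-month split is the hypothetical from the sentence above, not a measured figure.

    # Back-of-envelope for the foom case; all numbers are illustrative assumptions.
    human_months = 11          # design work currently done by people
    simulation_months = 1      # unavoidable physics-simulation compute
    current_cycle = human_months + simulation_months   # 12 months per generation
    ai_cycle = simulation_months                        # humans removed, simulations remain
    print(current_cycle / ai_cycle)                     # 12.0x faster, per the argument above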
Moore’s Law: Jed seems to say that the computational requirements of physics simulations actually determine Moore’s law, and that if Intel had access to more computer resources, it could move faster. If it takes a year of computer time to design and test the next year’s processor, that would explain the exponential nature of Moore’s law. But if it only takes a month, computer time probably isn’t the bottleneck. However, this model seems to predict a lot of things that aren’t true.
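As an aside, here is a toy numeric illustration of why a fixed “year of computer time per generation” would produce an exponential. This is my own construction of the reasoning, not Jed’s model.

    # Toy model: each new chip doubles the simulation workload, but the workload
    # always runs on the previous generation's newly doubled hardware, so the
    # wall-clock time per generation stays constant and density doubles on a
    # fixed period -- i.e. exponential growth.
    transistors, workload, speed = 1.0, 1.0, 1.0
    for generation in range(5):
        years = workload / speed
        print(f"generation {generation}: {transistors:.0f}x transistors, "
              f"{years:.1f} year(s) of simulation")
        transistors *= 2
        workload *= 2   # next design is twice as complex to simulate
        speed *= 2      # ...but it is simulated on hardware that is twice as fast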
Jed’s model only makes sense if “computer time” means single threaded clock cycles. If simulations require an exponentially increasing number of ordered clock cycles, there’s nothing you can do but get a top-of-the-line machine and run it continuously. You can’t buy more time. But clock speed stopped increasing exponentially, so if this were the bottleneck, Intel’s ability to design new chips should have slowed down and Moore’s law should have stopped. This didn’t happen, so the bottleneck is not linearly ordered clock cycles. So the simulation must parallelize. But if it parallelizes, Intel could just throw money at the problem. For this to be the bottleneck, Intel would have to be spending a lot of money on computer time, which I do not think is true. Jed says that writing parallel software is hard and that it isn’t Intel’s specialty. Moreover, he seems to say that improvements in parallelism have perfectly kept pace with the failure of increasing clock speed, so that Moore’s law has continued smoothly. This seems like too much of a coincidence to believe.
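The dichotomy above can be put in rough numbers (everything here is hypothetical; the point is the structure, not the figures): if the workload is strictly serial, only a faster clock helps, while if it parallelizes, money helps.

    total_cycles = 1e15             # hypothetical simulation workload
    clock_hz = 3e9                  # clock speeds stalled at a few GHz
    machines = 1000                 # something money can buy, if the work parallelizes

    serial_days = total_cycles / clock_hz / 86400             # cannot be bought down
    parallel_days = total_cycles / (clock_hz * machines) / 86400

    print(f"{serial_days:.1f} days if the cycles are strictly ordered")
    print(f"{parallel_days:.3f} days if it parallelizes across {machines} machines")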
Thus I reject Jed’s apparent claim that physics simulations are the bottleneck in Moore’s law. If simulations could be parallelized, why didn’t they invest in parallelism 20 years ago? Maybe it’s not worth it for them to be any farther ahead of their competitors than they are. Or maybe there is some other bottleneck.
* actually, I think that an AI speeding up Moore’s law is not very relevant to anything, but it’s a simple example that many people like.
There are differing degrees of bottlenecks. Many, if not most, of the large software projects I have worked on have been at least partially bottlenecked by compile time, which is the equivalent of the simulation and logic verification steps in hardware design. If I thought and wrote code much faster, this would be a speedup, but only up to a saturation point where I am waiting on compile-test cycles.
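That saturation point is essentially Amdahl’s law; a small sketch with made-up numbers:

    # If a fixed fraction of each edit-compile-test cycle is compile/simulation
    # time, speeding up only the human part has a hard ceiling.
    def overall_speedup(fixed_fraction, human_speedup):
        human_fraction = 1.0 - fixed_fraction
        return 1.0 / (fixed_fraction + human_fraction / human_speedup)

    for s in (2, 10, 1000):
        print(s, round(overall_speedup(fixed_fraction=0.3, human_speedup=s), 2))
    # With 30% of the cycle spent waiting on builds/simulations, even an
    # arbitrarily faster developer gains at most about 3.3x.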
If it takes a year of computer time to design and test the next year’s processor, that would explain the exponential nature of Moore’s law.
Yes. Keep in mind this is a moving target, and that is the key relation to Moore’s Law. A computer from 1980 would take months or years to compile Windows 8 or simulate a 2012 processor.
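A rough way to see that, where the doubling period and job size are my assumptions rather than measurements:

    # Assume roughly one doubling of effective performance every 2 years, 1980-2012.
    doublings = (2012 - 1980) / 2          # ~16 doublings
    speedup = 2 ** doublings               # ~65,000x
    modern_job_hours = 1                   # a hypothetical one-hour build or simulation today
    years_on_1980_hardware = modern_job_hours * speedup / (24 * 365)
    print(f"{years_on_1980_hardware:.1f} years")   # ~7.5 years for a job that takes an hour now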
Jed’s model only makes sense if “computer time” means single threaded clock cycles.
I don’t understand how the number of threads matters. Compilers, simulators, and logic verifiers all made the parallel transition when they had to.
Moreover, he seems to say that improvements in parallelism have perfectly kept pace with the failure of increasing clock speed, so that Moore’s law has continued smoothly. This seems like too much of a coincidence to believe.
Right, it’s not a coincidence; it’s a causal relation. Moore’s Law is not a law of nature, it’s a shared business plan of the industry. When clock speed started to run out of steam, chip designers started going parallel, and software developers followed suit. You have to understand that chip designs are planned many years in advance; this wasn’t an entirely unplanned, unanticipated event.
As for the details of what kind of simulation software Intel uses, I’m not sure. Jed’s last posts are also 4 years old at this point, so much has probably changed.
I do know that Nvidia uses big expensive dedicated emulators from a company called Cadence (google “Cadence Nvidia”) and this really is a big deal for their hardware cycle.
Thus I reject Jed’s apparent claim that physics simulations are the bottleneck in Moore’s law.
Well, you seem to agree that they are some degree of bottleneck, so it may be good to narrow in on what level of bottleneck, or taboo the word.
If simulations could be parallelized, why didn’t they invest in parallelism 20 years ago?
It was unnecessary, because the fast, easy path (faster serial speed) was still bearing fruit.
If simulations could be parallelized, why didn’t they invest in parallelism 20 years ago?
It was unnecessary, because the fast, easy path (faster serial speed) was still bearing fruit.
(by “parallelism” I mean making their simulations parallel, running on clusters of computers) What does “unnecessary” mean? If physical simulations were the bottleneck and they could be made faster by parallelism, why didn’t they do it 20 years ago? They aren’t any easier to make parallel today than they were then. The obvious interpretation of “unnecessary” is that it was not necessary to use parallel simulations to keep up with Moore’s law, but that it was an option. If it was an option that would have helped then as it helps now, would it have allowed going beyond Moore’s law? You seem to be endorsing the self-fulfilling prophecy explanation of Moore’s law, which implies no bottleneck.
(by “parallelism” I mean making their simulations parallel, running on clusters of computers)
Ahhh, usually the term is “distributed” when referring to pure software parallelization. I know little offhand about the history of simulation and verification software, but I’d guess that there was at least a modest investment in distributed simulation even a while ago.
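For concreteness, here is a minimal sketch of what “distributed” simulation means in this sense: independent test cases farmed out to a pool of workers. Real EDA flows are far more sophisticated; the function name and workload below are made up for illustration.

    from multiprocessing import Pool

    def run_testcase(seed):
        # Stand-in for one independent simulation/verification job; independent
        # jobs like this parallelize trivially, while a single long simulation may not.
        state = seed
        for _ in range(10**6):
            state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
        return seed, state

    if __name__ == "__main__":
        with Pool(processes=8) as pool:
            results = pool.map(run_testcase, range(64))
        print(len(results), "independent testcases finished")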
The consideration is cost. Spending your IT budget on one big distributed computer is often wasteful compared to each employee having their own workstation.
They sped up their simulations the right amount to minimize schedule risk (staying on Moore’s law) while minimizing cost. Spending a huge amount of money to buy a bunch of computers and complex distributed simulation software just to speed up a partial bottleneck is not worthwhile. If the typical engineer spends, say, 30% of his time waiting on simulation software, that limits what you should spend in order to reduce that time.
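Illustrative arithmetic for that 30% figure, with made-up headcount and salary: the value of the waiting time itself bounds what the extra hardware could possibly be worth.

    engineers = 100
    loaded_cost = 300_000        # $/year per engineer, hypothetical
    waiting_fraction = 0.30      # time spent waiting on simulations, per the estimate above

    upper_bound = engineers * loaded_cost * waiting_fraction
    print(f"${upper_bound:,.0f} per year")   # ~$9M/year, even if the extra compute
                                             # eliminated the waiting entirely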
And of course the big consideration is that in a year or two Moore’s law will allow you to purchase new IT equipment that is twice as fast. Eventually you have to do that to keep up.