Regarding serial vs. parallel:
The effect on progress is indirect and as a result hard to figure out with confidence.
We have gradually learned how to get nearly linear speedups from large numbers of cores. We can now manage linear speedups over dozens of cores for fairly structured computations, and linear speedups over hundreds of cores are possible in many cases. That is well beyond the number of cores per chip we will see in the near future. For the purposes of this analysis I think we can assume that Intel can get linear speedups from increasing processors per chip, say for the next ten years.
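As a rough illustration (not Intel’s code; the per-element `work` function and the array sizes are made-up stand-ins), here is the kind of “fairly structured” computation that scales almost linearly: independent chunks of a large array handed to one explicit thread per core, with no shared writes between threads.

```cpp
// Hypothetical sketch of a structured, embarrassingly parallel computation.
// Build with a C++11 compiler and link the thread library (e.g. -pthread).
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

static double work(double x) {
    // Stand-in for an independent per-element computation.
    return x * x + 1.0;
}

int main() {
    const std::size_t n = 1 << 24;
    std::vector<double> data(n, 2.0), out(n);

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;  // fallback if the core count is unknown

    std::vector<std::thread> threads;
    const std::size_t chunk = n / cores;
    for (unsigned t = 0; t < cores; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == cores) ? n : begin + chunk;
        threads.emplace_back([&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                out[i] = work(data[i]);  // each thread writes a disjoint range
        });
    }
    for (auto& th : threads) th.join();

    std::printf("done on %u threads, out[0] = %f\n", cores, out[0]);
    return 0;
}
```

Because each chunk touches disjoint elements, adding cores mostly just divides the work, which is why speedups stay close to linear until something like memory bandwidth becomes the bottleneck.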
But there are other issues.
More complicated or difficult programming models may not slow down a given program, but they do make changing programs harder.
Over time our ability to create malleable, highly parallel programs has improved. In special cases a serial program can be “automatically” parallelized (compilation with hints), but mostly parallelization still requires explicit design. The abstractions, though, have gotten much easier to use and revise.
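For contrast, a minimal sketch of “compilation with hints”: the loop below is ordinary serial C++, and a single OpenMP pragma asks the compiler to parallelize it when built with `-fopenmp`; without that flag the same code compiles and runs serially. The loop itself is a made-up example, not a real chip-simulation kernel.

```cpp
// Hypothetical sketch of parallelization by compiler hint (OpenMP).
// Compile with -fopenmp (or equivalent) to enable the parallel version.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 24;
    std::vector<double> data(n, 2.0);
    double sum = 0.0;

    // The hint: without this pragma the loop is plain serial code.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += data[i] * data[i];

    std::printf("sum = %f\n", sum);
    return 0;
}
```

This is the sense in which the parallelization is “automatic”: the design decision is reduced to a hint. Anything less regular than this loop still needs explicit parallel design, as noted above.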
(In my earlier analysis I was assuming, I think correctly, that this improvement was a function of human thought without much computational assist. The relevant experiments aren’t computationally expensive. Intel has been building massively parallel systems since the mid-80s, but that work didn’t produce most of the major improvements; the parallel programming ideas accreted slowly from a very broad community.)
So I guess I’d say that, with current software technology and trends, Intel can probably maintain most of its computational curve-riding. Certainly simulations with a known software architecture can be parallelized quite effectively, and can be maintained as requirements evolve.
The limitation will be on changes that violate the current pervasive assumptions of the simulation design. I don’t know what those assumptions are these days, and if I did I probably couldn’t say. However, they reflect properties that are common to all the “processor-like” chips Intel designs, across all the processes it can easily imagine.
Changes to software that involve revising pervasive assumptions have always been difficult, of course. Parallelization just increases the difficulty by some significant constant factor. Not really constant, though: it has been slowly decreasing over time, as noted above.
So the types of improvement that will slow down are the ones that involve major new ways to simulate chips, or major new design approaches that don’t fit Intel’s current assumptions about chip micro-architecture or processes.
While these could be significant, unfortunately I can’t predict how or when. I can’t even come up with a list of examples where such improvements were made. They are pretty infrequent and hard to categorize.
I hope this helps.