I’ll try to estimate as requested, but substituting fixed computing power for “riding the curve” (as Intel does now) is a bit of an apples to fruit cocktail comparison, so I’m not sure how useful it is. A more direct comparison would be with always having a computing infrastructure from 10 years in the future or past.
Even with this amendment, the (necessary) changes to design, test, and debugging processes make this hard to answer...
I’ll think out loud a bit.
Here’s the first quick guess I can make that I’m moderately sure of: The length of time to go through a design cycle (including shrinks and transitions to new processes) would scale pretty closely with computing power, keeping the other constraints pretty much constant. (Same designers, same number of bugs acceptable, etc.) So if we assume the power follows Moore’s law (probably too simple as others have pointed out) cycles would run hundreds of times faster with computing power from 10 years in the future.
This more or less fits the reality, in that design cycles have stayed about the same length while chips have gotten hundreds of times more complex, and also much faster, both of which soak up computing power.
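To put a rough number on "hundreds of times": if we assume an 18-month doubling time (an assumption; the real figure has varied between roughly one and two years), a 10-year jump along the curve works out to about a hundredfold, and several hundredfold if the effective doubling time is closer to a year. A back-of-the-envelope check:

```python
# Back-of-the-envelope: speedup from jumping 10 years along a Moore's-law curve.
# The 18-month doubling time is an assumption; the real figure has varied.
doubling_time = 1.5            # years per doubling (assumed)
years_ahead = 10.0
speedup = 2 ** (years_ahead / doubling_time)
print(f"~{speedup:.0f}x more computing power")   # ~101x with these numbers
```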
Probably more computing power would have also allowed faster process evolution (basically meaning smaller feature sizes) but I was never a process designer so I can’t really generate a firm opinion on that. A lot of physical experimentation is required and much of that wouldn’t go faster. So I’m going to assume very conservatively that the increased or decreased computing power would have no effect on process development.
The number of transistors on a chip is limited by process considerations, so adding computing power doesn’t directly enable more complex chips. Leaving the number of devices the same and just cycling the design of chips with more or less the same architecture hundreds of times faster doesn’t make much economic sense. Maybe instead Intel would create hundreds of times as many chip designs, but that implies a completely different corporate strategy so I won’t pursue that.
In this scenario, experimentation via computing gets hundreds of times “cheaper” than in our world, so it would get used much more heavily. Given these cheap experiments, I’d guess Intel would have adopted much more radical designs.
Examples of more radical approaches would be self-clocked chips, much more internal parallelism (right now only about 1⁄10 of the devices change state on any clock), chips that directly use more of the quantum properties of the material, chips that work with values other than 0 and 1, direct use of probabilistic computing, etc. In other words, designers would have pushed much further out into the micro-architectural design space, to squeeze more function out of the devices. Some of this (e.g. probabilistic or quantum-enhanced computing) could propagate up to the instruction set level.
(This kind of weird design is exactly what we get when evolutionary search is applied directly to a gate array, which roughly approximates the situation Intel would be in.)
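For concreteness, here is a minimal sketch (entirely my own toy construction, not anything Intel would run) of what evolutionary search applied directly to a gate array looks like: a simple mutate-and-select loop that rewires a small feed-forward array of NAND gates toward computing XOR. The circuits such searches find tend to be unstructured and hard to interpret, which is the flavor of "weird design" I mean.

```python
# Toy illustration: evolutionary search over the wiring of a small feed-forward
# array of 2-input NAND gates, selecting for circuits whose last gate computes XOR.
# Real evolved-hardware experiments target actual gate arrays/FPGAs, but the
# search loop has this same shape.
import random

N_GATES = 8
TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # XOR truth table

def random_genome():
    # Gate i reads two sources: 0-1 are the primary inputs, 2..i+1 are earlier gates.
    return [(random.randrange(i + 2), random.randrange(i + 2)) for i in range(N_GATES)]

def output(genome, a, b):
    signals = [a, b]
    for s0, s1 in genome:
        signals.append(1 - (signals[s0] & signals[s1]))   # NAND of the two sources
    return signals[-1]                                    # last gate drives the output

def fitness(genome):
    return sum(output(genome, a, b) == want for (a, b), want in TARGET.items())

def mutate(genome):
    child = list(genome)
    i = random.randrange(N_GATES)
    child[i] = (random.randrange(i + 2), random.randrange(i + 2))  # rewire one gate
    return child

population = [random_genome() for _ in range(50)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 4:          # perfect match on all four rows
        break
    # Keep the 10 fittest, refill the population with mutated copies of them.
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(40)]

best = max(population, key=fitness)
print(f"best circuit after {generation + 1} generations matches "
      f"{fitness(best)}/4 rows of the XOR truth table")
```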
Conversely, if Intel had hundreds of times less computing power, they'd have to be extremely conservative. Designs would have to stay further from any possible timing bugs, new designs would appear much more slowly, they'd probably make the transition to multiple cores much sooner because scaling processor designs to large numbers of transistors would be intractable, there'd be less fine-grained internal parallelism, etc.
If we assumed that progress in process design was also more or less proportional to the computing power available, then in effect we'd just be changing the exponent on the curve; to a first approximation we could assume no qualitative changes in design. However, as I say, this is a very big "if".
Now, however, we have to contend with an interesting feedback issue. Suppose we start importing computing from ten years in the future in the mid-1980s. If it speeds everything up proportionally, the curve gets a lot steeper, because the future we're importing from is itself improving faster than our present is. Conversely, if Intel had to run on ten-year-old technology, the curve would be a lot flatter.
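One way to see how much the exponent could move (a toy calculation of my own, using the same 18-month doubling assumption as above and treating design-cycle speed as strictly proportional to the available computing power):

```python
# Toy calculation: a fixed 10-year lead or lag on the Moore's-law curve multiplies
# or divides the rate of progress by ~100x, which shows up as a much shorter or
# much longer effective doubling time. All of this assumes a 1.5-year doubling
# and strict proportionality between compute and design-cycle speed.
doubling_time = 1.5   # years per doubling (assumed)
for offset in (+10, 0, -10):
    rate = 2 ** (offset / doubling_time)   # multiplier on the rate of progress
    effective = doubling_time / rate       # effective doubling time, in years
    unit = f"{effective * 365:.0f} days" if effective < 1 else f"{effective:.1f} years"
    print(f"{offset:+3d}-year compute offset: {rate:8.2f}x baseline rate, "
          f"doubling time ~{unit}")
```

This treats the imported compute as a fixed offset on the baseline curve; if the imported future is itself running on the accelerated curve, the feedback compounds even further, which is the point about the curve getting steeper.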
On the other hand if there is skew between different aspects of the development process (as above with chip design vs. process design) we could go somewhere else entirely. For example if Intel develops some way to use quantum effects in 2000 due to faster simulations from 1985 on, and then that gets imported (in a black box) back to 1990, things could get pretty crazy.
I think that’s all for now. Maybe I’ll have more later. Further questions welcome.