I did work at Intel, and two years of that was in the process engineering area (running the AI lab, perhaps ironically).
The short answer is that more computing power leads to more rapid progress. Probably the relationship is close to linear, and the multiplier is not small.
Two examples:
The speed of a chip is limited by its critical paths. Finding these and verifying fixes depends on physically realistic simulations (though they make simplifying assumptions, which sometimes fail). Generally, the better the simulation, the tighter one can cut corners. The limit on simulation quality is typically the computing power available (though it can also be understanding the physics well enough to cheat correctly).
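To make the "critical path" idea concrete, here is a toy sketch, not anything resembling Intel's actual tooling: if you model a circuit as a DAG of gate delays, the critical path is just the longest-delay path through it, and that path sets the minimum clock period. Real timing analysis rests on detailed physical simulation; this only shows the structural idea.

```python
# Toy sketch (illustrative only): the critical path of a combinational
# circuit modeled as a DAG of gate delays is the longest-delay path,
# and it bounds how fast the chip can be clocked.
from collections import defaultdict

def critical_path(gates, edges):
    """gates: {name: delay}, edges: list of (src, dst) wires."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for src, dst in edges:
        succ[src].append(dst)
        indeg[dst] += 1
    # Longest arrival time at each gate, computed in topological order.
    arrival = {g: gates[g] for g in gates}
    ready = [g for g in gates if indeg[g] == 0]
    while ready:
        g = ready.pop()
        for nxt in succ[g]:
            arrival[nxt] = max(arrival[nxt], arrival[g] + gates[nxt])
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return max(arrival.values())  # slowest path limits the clock period

# Made-up example: a three-gate chain plus a faster side path.
print(critical_path({"a": 1.0, "b": 2.5, "c": 0.8},
                    [("a", "b"), ("b", "c"), ("a", "c")]))
# -> 4.3, the a -> b -> c path; the clock can't run faster than that allows.
```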
Specifically with reference to Phil Goetz’s comment about scaling, the physics is not invariant under scaling (obviously) and the critical paths change in not entirely predictable ways. So again optimal “shrinks” are hostage to simulation performance.
The second example is more exotic. Shortly before I arrived in the process world, one of the guys who ended up working for me figured out how to watch the dynamics of a chip using a scanning electron microscope, since the charges in the chip modulate the electron beam. However, integrating scanning control, imaging, chip control, etc. was non-trivial, and he wrote a lot of the code in Lisp. Using this tool he found the source of some serious process issues that no one had been able to diagnose.
This is a special case of the general pattern that progress in making the process better and the chips faster typically depends on modeling, analyzing, and collecting data in new ways, and the limits are often how quickly humans can try out and evolve computer-mediated tools. Scaling to larger data sets, using less efficient but more easily modified software, running simulations faster, etc. all pay big dividends.
Intel can’t in general substitute more processors in a cluster for faster processors, since writing software that gets good speedups on large numbers of processors is hard, and changing such software is much harder than changing single-processor software. The pool of people who can do this kind of development is also small and can’t easily be increased.
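A rough way to see why more processors are a poor substitute for faster ones is Amdahl's law (my illustration, not part of the original argument): even a small serial fraction of a workload caps the speedup from parallelism, while a faster processor speeds up every part of the job. The numbers below are made up purely for illustration.

```python
# Amdahl's law: with serial fraction s, the speedup on n processors is
# 1 / (s + (1 - s) / n). Illustrative numbers only.
def amdahl_speedup(serial_fraction, n_processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 1))
# With just 5% serial work, 1024 processors yield only ~20x speedup,
# whereas a 2x faster processor helps the serial part too.
```

And this is before counting the cost of rewriting and maintaining the parallel software itself, which is the harder constraint in practice.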
So I don’t really know what difference it makes, but I think Eliezer’s specific claim here is incorrect.