Briefly: with arbitrarily good methods, we could train human-level AI with very little hardware. Claims about hardware are only meaningful relative to a given level of algorithmic progress.
Or: nothing depends on whether sufficient hardware for human-level AI already exists given arbitrarily good methods.
(Also note that what’s relevant for forecasting or decision-making is how much hardware is actually being used, and how much a lab could use if it wanted to — not the global supply of hardware.)
That seems like a useful concept to me. What’s your argument it isn’t?