This seems non-responsive to arguments already in my post:
If you are referring to this:
If we institute a pause, we should expect to see (counterfactually) reduced R&D investment in improving hardware capabilities, reduced investment in scaling hardware production, reduced hardware production, reduced investment in research, reduced investment in supporting infrastructure, and fewer people entering the field.
This seems an extreme claim to me (if these effects are argued to be meaningful), especially “fewer people entering the field”! Just how long do you think a pause would need to last to make fewer people enter the field? I would expect that not only would the pause have to have lasted, say, 5+ years, but there would also have to be a worldwide expectation that it would continue for longer still to actually put people off.
Because of flow-on effects and existing commitments, reduced hardware R&D investment wouldn’t start for a few years either. It’s not clear that it will meaningfully happen at all if we want to deploy existing LLMs everywhere as well. For example, in robotics I expect there will be substantial demand for hardware even without AI advances, as our current capabilities haven’t been deployed there yet.
As I have said here, and probably in other places, I am quite a bit more in favor of directly going for a hardware pause specifically targeting the most advanced hardware. I think it is achievable and impactful, and has clearer positive consequences (and fewer unintended negative ones) than targeting training runs of an architecture that already seems to be showing diminishing returns.
If you must go after FLOPs for training, then build in large factors of safety for architectures/systems that are substantially different from what is currently done. I am not worried about unlimited FLOPs on GPT-X, but I could be worried about >100× fewer on something that clearly looks like it has very different scaling laws.
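To make that concrete, here is a minimal sketch of what such a tiered cap could look like; the baseline cap value, the 100× factor applied as a hard divisor, and all of the names are my own illustrative assumptions rather than a specific proposal:

```python
# Hypothetical illustration only: a training-compute cap that applies a large
# safety factor to architectures that differ substantially from the current,
# well-characterised paradigm. Numbers and names are assumptions.

BASELINE_CAP_FLOP = 1e26      # assumed cap for familiar, GPT-X-like training runs
NOVELTY_SAFETY_FACTOR = 100   # the ">100x" margin for very different scaling laws

def allowed_training_flop(is_novel_architecture: bool) -> float:
    """Return the permitted total training compute for a proposed run."""
    if is_novel_architecture:
        # Systems that clearly look like they have different scaling laws
        # get a much lower cap until their behaviour is better understood.
        return BASELINE_CAP_FLOP / NOVELTY_SAFETY_FACTOR
    return BASELINE_CAP_FLOP

print(allowed_training_flop(False))  # 1e26 for familiar architectures
print(allowed_training_flop(True))   # 1e24 for substantially different systems
```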