I’m curious how good current robots are compared to where they’d need to be to automate the biggest bottlenecks in further robot production. You say we start from 10,000/year, but is it plausible that all current robots are too clumsy or incapable for many of the key bottleneck tasks, and that getting to 10,000 sufficiently capable robots produced per year would be a long path (e.g. a decade or more at human speed)? Or are current robots close to sufficient given good enough software?
I also imagine that, even taking current robot production processes as given, the gap between a WW2-era car factory and a WW2-era combat airplane factory might be much smaller than the gap between a car factory and a modern frontier robotics factory; the latter seems like a big step up in complexity.
Maybe distracting technicality:
This seems to make the simplifying assumption that the R&D automation is applied to a large fraction of all the compute that was previously driving algorithmic progress, right?
If we imagine that a company only owns 10% of the compute being used to drive algorithmic progress pre-automation (and is only responsible for, say, 30% of its own algorithmic progress, with the rest coming from other labs/academia/open-source), and this company is the only one automating its AI R&D, then the effect on overall progress might be reduced, since the 15X multiplier only applies to 30% of the relevant algorithmic progress (quick arithmetic sketch below).
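To make that concrete, here is a minimal sketch assuming algorithmic progress is just a weighted sum of contributions (a strong simplification, and the numbers are only the illustrative ones from this comment):

```python
# Minimal sketch: overall speedup when only the company's own share of
# algorithmic progress gets the automation multiplier, under a simple
# additive model of progress contributions (an assumption, not a claim
# about how progress actually composes).

def effective_speedup(own_share: float, own_multiplier: float) -> float:
    """Overall multiplier on a company's algorithmic progress when only its
    own research share is accelerated and everyone else keeps their pace."""
    return own_share * own_multiplier + (1 - own_share) * 1.0

# The company drives 30% of its own algorithmic progress and gets the 15X
# multiplier only on that share:
print(effective_speedup(own_share=0.3, own_multiplier=15))  # 5.2, i.e. ~5X rather than 15X
```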
In practice I would guess that either the leading actor has enough of a lead that they are already responsible for most of their own algorithmic progress, or other groups are close behind and will automate their own AI R&D around the same time anyway. But I could imagine this reducing the impact of initial AI R&D automation a little bit (and it might make a big difference for questions like “how much would it accelerate a non-frontier lab that stole the model weights and tried to do RSI?”).