I mean, I don’t think AI R&D is a particularly hard field per se, but I do think it involves lots of tricky stuff and isn’t much easier than automating some other plausibly-important-to-takeover field (e.g., robotics). (I could imagine that the AIs have a harder time automating philosophy even if they were trying to work on this, but it’s more confusing to reason about because human work on this is so dysfunctional.) The main reason I focused on AI R&D is that I think it is much more likely to be fully automated first, and it seems likely to be fully automated prior to AI takeover.
Ok, I think I see what you’re saying. To check part of my understanding: when you say “AI R&D is fully automated”, I think you mean something like:
Most major AI companies have fired almost all of their SWEs. They still have staff to physically build datacenters, do business, etc.; and they have a few overseers / coordinators / strategizers of the fleet of AI R&D research gippities; but the overseers are acknowledged to basically not be doing much, and not clearly even helping; and the overall output of the research group is “as good or better” than in 2025--measured… somehow.
I could imagine the capability occurring but not playing out that way, because the SWEs won’t necessarily be fired even after becoming useless—so it won’t be completely obvious from the outside. But this is a sociological point about when companies fire people, not a prediction about AI capabilities.
SWEs won’t necessarily be fired even after becoming useless
I’m actually surprised at how eager/willing big tech is to fire SWEs once they’re sure those SWEs won’t be economically valuable. I think a lot of the priors about SWE jobs being stable come from the ZIRP era. Now, these companies have quite frequent layoffs, silent layoffs, and performance firings. Companies becoming leaner will be a good litmus test for a lot of these claims.