Thanks… but wait, this is among the most impressive things you expect to see? (You know more than I do about that distribution of tasks, so you could justifiably find it more impressive than I do.)
No, sorry, I was mostly focused on “such that if you didn’t see them within 3 or 5 years, you’d majorly update about time to the type of AGI that might kill everyone”. I didn’t actually pick up on “most impressive”, and instead tried to focus on something that occurs substantially before things get crazy.
Most impressive would probably be stuff like “automate all of AI R&D and greatly accelerate the pace of research at AI companies”. (This seems about 35% likely to me within 5 years, so I’d update by at least that much.) But this hardly seems that interesting? I think we can agree that once the AIs are automating whole companies, stuff is very near.
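A minimal sketch of the update arithmetic being gestured at here, with entirely made-up numbers (the prior and both conditional probabilities are illustrative assumptions, not figures from this exchange):

```python
# Hedged illustration: how failing to see full AI R&D automation within
# 5 years could shift credence in short timelines. All numbers are
# placeholder assumptions, not anyone's stated estimates.

def posterior_short_timelines(prior_short: float,
                              p_automation_given_short: float,
                              p_automation_given_long: float) -> float:
    """P(short timelines | no full AI R&D automation observed within 5 years)."""
    p_no_short = 1.0 - p_automation_given_short  # miss rate under short timelines
    p_no_long = 1.0 - p_automation_given_long    # miss rate under long timelines
    numerator = prior_short * p_no_short
    return numerator / (numerator + (1.0 - prior_short) * p_no_long)

# With a 50% prior on short timelines, full automation 90% likely under
# short timelines and 10% likely otherwise, not seeing it drops the
# posterior to about 0.10, i.e. a large downward update.
print(posterior_short_timelines(0.5, 0.9, 0.1))
```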
Ok. So I take it you’re very impressed with the difficulty of the research that is going on in AI R&D.
we can agree that once the AIs are automating whole companies, stuff is very near
(FWIW I don’t agree with that; I don’t know what companies are up to; some of them might not be doing much difficult stuff, and/or the managers might not be able to, or care to, tell the difference.)
I mean, I don’t think AI R&D is a particularly hard field per se, but I do think it involves lots of tricky stuff, and automating it isn’t much easier than automating some other plausibly-important-to-takeover field (e.g., robotics). (I could imagine the AIs having a harder time automating philosophy even if they were trying to work on it, but that’s more confusing to reason about because human work on philosophy is so dysfunctional.) The main reason I focused on AI R&D is that I think it is much more likely to be fully automated first, and will probably be fully automated prior to AI takeover.
Ok, I think I see what you’re saying. To check part of my understanding: when you say “AI R&D is fully automated”, I think you mean something like:
Most major AI companies have fired almost all of their SWEs. They still have staff to physically build datacenters, do business, etc.; and they have a few overseers / coordinators / strategizers of the fleet of AI R&D research gippities; but the overseers are acknowledged to basically not be doing much, and not even clearly to be helping; and the overall output of the research group is “as good or better” than in 2025, measured… somehow.
I could imagine the capability occurring but not playing out that way, because the SWEs won’t necessarily be fired even after becoming useless—so it won’t be completely obvious from the outside. But this is a sociological point about when companies fire people, not a prediction about AI capabilities.
SWEs won’t necessarily be fired even after becoming useless
I’m actually surprised at how eager/willing big tech is to fire SWEs once they’re sure those SWEs won’t be economically valuable. I think a lot of the prior that these jobs are stable comes from the ZIRP era. Now, these companies have quite frequent layoffs, silent layoffs, and performance firings. Companies becoming leaner will be a good litmus test for a lot of these claims.