Thanks, I think I should distinguish more carefully between automating AI (safety) R&D within labs and automating the entire economy. (Johannes also asked about ability vs actual automation here but somehow your comment made it click).
It seems much more likely to me that AI R&D would actually be automated than that a bunch of random, unrelated things would all actually be automated. I'd agree that if only AI R&D got automated, that would make takeoff pretty discontinuous in many ways, though there are also some consequences of fast vs. slow takeoff that seem to hinge more on AI or AI safety research than on the economy as a whole.
For AI R&D, actual automation seems pretty likely to me (though I’m making a lot of this up on the spot):
- It's going to be on the easier side of things to actually automate, partly because it doesn't require aggressive external deployment, but also because there's no regulation in the way (unlike for automating strictly licensed professions).
- It's the thing AI labs will have the strongest incentive to automate (and would be well positioned to automate themselves).
- Training runs keep getting more expensive, but I'd expect the schlep needed to actually use systems to stay roughly constant, so at some point it'd just be worth doing the schlep to actually use your AIs a lot (and thus be able to try way more ideas, get algorithmic improvements, and make the giant training runs a bit more efficient).
- There might also be additional reasons to get as much out of your current AI as you can instead of scaling more: safety concerns, regulation making scaling hard, or scaling starting to work less well. These feel less cruxy to me, but combined they move me a little bit.
I think these arguments mostly apply to whatever else AI labs might want to do themselves, but I'm pretty unsure what that is. Like, if they had AI that could make hundreds of billions to trillions of dollars by automating a bunch of jobs, would they go for that? Or just ignore it in favor of scaling more? I don't know, and this question is pretty cruxy for me regarding how much the economy as a whole gets impacted.
It does seem to me like labs are currently spending non-trivial effort on products, presumably for some mix of making money and attracting investment, and both of those seem like they'd still be important in the future. But maybe the case for investment will just be really obvious at some point even without further products. And overall I assume you have a better sense than I do of what AI labs will want to do in the future.