There are reasons to believe that the AI/ML R&D process will be particularly inefficient due to the actors funding the work.
You haven’t argued that the balance of factors is particularly bad for AI/ML R&D compared to other sectors. Lots of R&D sectors involve multiple competing corporations being secretive about their stuff, in multiple different countries. Lots of R&D sectors involve lots of investors who have no idea what they are doing. (Indeed I’d argue that’s the norm.)
Hey Daniel, thanks for your comments here. The concern you raise is really important. I went back through this section with an eye toward “uniqueness.”
Looking back, I agree that the argument would be strengthened by an explicit comparison to other R&D sectors. My thought is still that both (1) national-security technologies and (2) especially lucrative technologies see additional parallelization of effort, since every actor has a strong incentive to chase and win them. But you’re right to point out that these factors aren’t wholly unique. I’d love to see more research on this. (I have more to look into as well; it might already be out there!)
One factor that I do think survives this concern, though, is investors.
Lots of R&D sectors involve lots of investors who have no idea what they are doing. (Indeed I’d argue that’s the norm.)
While there are always unsophisticated investors, I still think there are many more of them in a particularly lucrative hype cycle. Larger potential rewards attract more investors with no domain expertise. The signal from expert sources also gets weaker, as the hype cycle attracts more unsophisticated authors and information sources who also want a piece of the pie. (Think about how many more people have become “AI experts” or started writing AI newsletters over the last few years than have done the same in, say, agriculture.) These factors are compounded by the overvaluation that occurs at the top of bubbles, as some investors try to “get in on the wave” even at prices too high for others.
Thanks! I agree that there’s more hype around AI right now than around, say, semiconductors, batteries, or solar panels. And more importantly, right around the time of AGI there’ll plausibly be more hype around AI than around anything ever. So I’ll concede that in this one way, we have reason to think R&D resources will be allocated less efficiently than usual in the case of AGI. I don’t think this will significantly change the bottom line, though; in fact, if it wildly changed the results in Tom’s model, I’d take that as a reason to doubt Tom’s model.