I disagree with this update—I think the update should be “it takes a lot of schlep and time for the kinks to be worked out and for products to find market fit” rather than “the systems aren’t actually capable of this.” Like, I bet if AI progress stopped now, but people continued to make apps and widgets using fine-tunes of various GPTs, there would be OOMs more economic value being produced by AI in 2030 than today.
As a personal aside: Man, what a good world that would be. We would get a lot of the benefits of the early singularity, but not the risks.
Maybe the ideal would be one additional generation of AI progress before the great stop? And the thing that I’m saddest about is that GPTs don’t give us much leverage over biotech, so we don’t get the life-saving and quality-of-life-improving medical technologies that seem nearby on the AI tech tree.
But if we could hit some level of AI tech, stop, and just exploit / do interpretability on our current systems for 20 years, that sounds so good.