Another large piece of what I mean is that (STEM-level) general intelligence is a very high-impact sort of thing to automate because STEM-level AGI is likely to blow human intelligence out of the water immediately, or very soon after its invention.
I don’t understand your reasoning for this conclusion. Unless I’m misunderstanding something, almost all your points in support of this thesis appear to be arguments that the upper bound of intelligence is high. But the thesis was about the rate of improvement, not the upper bound.
There are many things in the real world that have a very high upper bound but grow relatively slowly nonetheless. For example, the maximum possible height of a building is way higher than anything we’ve built on Earth so far, but that doesn’t imply that skyscrapers will suddenly jump from their current heights of ~500 meters to ~50,000 meters at some point. Maybe we’d expect sudden, fast growth in skyscraper heights after some crazy new material is developed, like a carbon nanotube composite that’s way stronger than steel. That doesn’t seem super implausible to me, and maybe that type of thing has happened before. But notice that this is an additional assumption in the argument, not something that follows immediately from the premise that physical limits permit extremely tall buildings.
I think the best reason to think that AI intelligence could grow rapidly is that the inputs to machine intelligence could grow quickly. For instance, if the total supply of compute began growing at 2 OOMs per year (much faster than its current rate), then we could scale up the size of the largest AI training runs at about 2 OOMs per year, which might imply that systems would grow in intelligence roughly as quickly as the jump from GPT-3 --> GPT-4 every single year. But if the supply of compute were growing that quickly, the most likely reason is just that economic growth more generally had been accelerated by AI. And that seems to me a more general scenario than the one you’ve described, and one without immediate implications of any local intelligence explosion.
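To make that scaling arithmetic concrete, here’s a minimal sketch in Python. The specific compute figures are my own assumptions, not anything claimed above: GPT-3’s ~3.1e23 training FLOP comes from its paper, while GPT-4’s ~2e25 FLOP is an unofficial outside estimate, since OpenAI hasn’t published the real number.

```python
import math

# Rough training-compute figures (FLOP). GPT-3's is from the original paper;
# GPT-4's is an unofficial outside estimate -- OpenAI has not published it.
GPT3_FLOP = 3.1e23
GPT4_FLOP_EST = 2e25

# Size of the GPT-3 --> GPT-4 jump in orders of magnitude (OOMs).
jump_ooms = math.log10(GPT4_FLOP_EST / GPT3_FLOP)  # ~1.8 OOMs

# Hypothetical growth rate of the total compute supply, as in the
# scenario above.
supply_growth_ooms_per_year = 2.0

# If the largest training runs scale in proportion to supply, this is how
# many GPT-3 --> GPT-4-sized jumps you'd get per year.
jumps_per_year = supply_growth_ooms_per_year / jump_ooms

print(f"GPT-3 --> GPT-4 jump: ~{jump_ooms:.1f} OOMs of training compute")
print(f"At {supply_growth_ooms_per_year:.0f} OOMs/year of supply growth: "
      f"~{jumps_per_year:.1f} such jumps per year")
```

On these assumed figures the GPT-3 --> GPT-4 jump is a bit under 2 OOMs, so 2 OOMs/year of compute growth buys roughly one such jump per year, which is the equivalence the scenario above relies on.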