Pushing ML toward, and especially past, the top 0.1% of human intelligence (an IQ of 160 or so?) may require some secret sauce we have not yet discovered, or don't even realize needs to be discovered. Without it, we would be stuck with ML emulating humans, but not really discovering new math, physics, chemistry, CS algorithms, or whatever.
For what it’s worth, I think the more likely possibility is that blowing past top-human level will require more expensive data than catching up to top-human level does. Right now ML models are essentially bootstrapping off human data. Rapid progress is to be expected while you’re catching up to the frontier, but it gets harder once you’re already there. Ultimately I don’t expect this effect to slow AI down much, because I think that at roughly the same time it becomes harder to make AI smarter, AI will also be accelerating growth in its own data and compute inputs.
Right, the dearth of new training data might be another blocker, as discussed in various places. Whether non-redundant data will keep growing, or whether something like “GSTT”-style self-training will prove useful, who knows.
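(For readers unfamiliar with the term: “self-training” in the classic semi-supervised sense means a model labeling unlabeled data with its own high-confidence predictions and retraining on the result. A minimal sketch using scikit-learn’s SelfTrainingClassifier follows; the toy dataset, model choice, and confidence threshold are purely illustrative assumptions, and this is not a claim about what “GSTT” specifically refers to.)

```python
# Minimal sketch of classic self-training (semi-supervised learning).
# All choices here (toy data, SVC base model, 0.95 threshold) are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Toy data: hide 90% of labels to simulate a shortage of labeled (human) data.
X, y = make_classification(n_samples=1000, random_state=0)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.9] = -1  # -1 marks a sample as unlabeled for sklearn

# The wrapper repeatedly labels the unlabeled pool with the base model's
# high-confidence predictions and retrains on the enlarged labeled set.
base = SVC(probability=True, gamma="auto")
model = SelfTrainingClassifier(base, threshold=0.95).fit(X, y_partial)
print("accuracy on all data:", model.score(X, y))
```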