Hey, thanks for reading and for the thoughtful comment!
100% agree with this: “AI should be able to push at least somewhat beyond the limits of what humans have ever concluded from available data, in every field, before needing to obtain any additional, new data.”
Current methods can get us to AGI, and full AGI would result in a mind that is practically superhuman, since no single human mind combines all of these abilities to such a degree. I say as much in the full post: “Models may even recombine known reasoning methods to uncover new breakthroughs, but they remain bound to known human reasoning patterns.”
Also agree that simulation is a viable path to exploration / feedback beyond what humans can explicitly provide: “There are many ways we might achieve this, whether in physically embodied intelligence, complex simulations grounded in scientific constraints, or predicting real world outcomes.”
I’m mostly pointing out that at some point we will hit a bottleneck between AGI and ASI, which will require breaking free from human labels and learning new things via exploration / real-world feedback.
Got it. Then I agree with that. I’m curious if you’ve thought about where you’d put lower and upper bound estimates on capabilities before hitting that bottleneck?
That’s a good question. I don’t think I have a great idea of the lower / upper bounds on capabilities from each approach, but I also don’t think it matters much; I suspect we’ll be doing both well before we hit AGI’s upper bound.
There’s likely plenty of “low-hanging fruit” for AGI to uncover just working with human data and human labels, but I also suspect there are pretty easy ways to let AI start generating / testing hypotheses about the real world, and there are additional advantages of scale and automation to taking humans out of the loop.