Agreed, the typical machine learning paper is not AGI progress; it suffices for a tiny fraction of such papers to be AGI progress.
Can you name some papers that you think constitute AGI progress? (Not a rhetorical question.)
I want to note that the general idea being investigated is that you can have a billion successive self-modifications with no significant statistically independent chance of critical failure. Doing proofs from axioms, so that the theorems are not perfectly strong but at least as strong as the axioms, with conditionally independent failure probabilities that do not significantly lower the strength of the conclusions below that of the axioms as they stack, is an obvious entry point into this kind of lasting guarantee.
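As a rough sketch of the kind of guarantee gestured at here (the notation is mine, not from the discussion): suppose the axioms fail with probability at most $\epsilon_0$ and each of $n$ proof steps has a conditionally independent failure probability $\epsilon_i$. A union bound then keeps the conclusion nearly as strong as the axioms, provided the $\epsilon_i$ stay small as they stack:

```latex
% Hedged sketch, my notation: axioms fail with probability at most \epsilon_0,
% each of n conditionally independent proof steps fails with probability \epsilon_i.
\Pr[\text{conclusion fails}]
  \;\le\; \epsilon_0 + \sum_{i=1}^{n} \epsilon_i ,
\qquad \text{which stays close to } \epsilon_0
\text{ whenever } \sum_{i=1}^{n} \epsilon_i \ll 1 .
```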
I’m not sure if I parse this correctly, and may be responding to something that you don’t intend to claim, but I want to remark that if the probabilities of critical failure at each stage are
0.01, 0.001, 0.0001, 0.00001, etc.
then the total probability of critical failure is less than 2%. You don’t need the probability of failure at each stage to be infinitesimal; you only need the failure probabilities to drop off fast enough.
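Spelling out the arithmetic behind that bound (independent stages assumed, as above):

```latex
% Worked example for per-stage failure probabilities 10^{-2}, 10^{-3}, 10^{-4}, ...
\Pr[\text{some stage fails}]
  = 1 - \prod_{k=2}^{\infty} \bigl(1 - 10^{-k}\bigr)
  \;\le\; \sum_{k=2}^{\infty} 10^{-k}
  = \frac{10^{-2}}{1 - 10^{-1}}
  = \frac{1}{90}
  \approx 0.011 \;<\; 0.02 .
```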
How would they drop off if they’re “statistically independent”? In principle this could happen, given a wide separation in time, if humanity or lesser AIs somehow solve a host of problems for the self-modifier. But both the amount of help from outside and the time-frame seem implausible to me, for somewhat different reasons. (And the idea that we could know both of them well enough to have those subjective probabilities seems absurd.)
The Chinese economy was stagnant for a long time, but is now much closer to continually increasing GDP (on average) with high probability, and I expect that the “goal” of increasing GDP will become progressively more stable over time.
The situation may be similar with AI, and I would expect it to be by default.