I think your timeline is on point regarding capabilities. However, I don't entirely follow the jump from expert-level programming plus brute-force search to an “explosive feedback loop of AI progress”. You point out that machine learning has a “clear-cut search space”, which is true, and I agree that brute-force search could be expected to yield some progress, likely substantial progress, in a way it wouldn't in other scientific disciplines. I will even concede that explosive progress is possible; I just fail to grasp why it is likely.

My worry is that the “clear-cut search space” is limited to low-hanging fruit, such as “different small tweaks to architectures, loss functions, optimization algorithms”, and that getting from automated AI progress to automated scientific discovery requires something more. If you're suggesting that efficiency improvements from such small tweaks would be of an order of magnitude or greater, enough to move progressively on to medium- and longer-horizon models, is there evidence for this in the post or the original report that I am missing? That could plausibly lead to an “explosive feedback loop of AI progress”, but I would not assume that it will.

Alternatively, it seems plausible that “directly writing learning algorithms much more sample-efficient than SGD” would be sufficient to get to automated scientific discovery. But are you suggesting that “different small tweaks to architectures, loss functions, optimization algorithms” would be enough to generate a novel learning algorithm? The search space for that seems much “less clear-cut”, and much more like what would be required to automate progress in other scientific disciplines.
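To make concrete what I mean by a “clear-cut search space” of small tweaks, here is a toy sketch of my own (not from the post or the report). The space is a finite grid of discrete choices, and brute-force search just enumerates it and keeps the best configuration by some scalar metric. The per-tweak scores are made up for illustration; in reality each evaluation would be an expensive training run.

```python
# Toy illustration (my own construction, not from the post): brute-force
# search over a "clear-cut" space of small tweaks -- discrete choices of
# activation, loss, and optimizer. The scores are fabricated stand-ins
# for measured training efficiency.
from itertools import product

SEARCH_SPACE = {
    "activation": ["relu", "gelu", "swish"],
    "loss": ["cross_entropy", "focal"],
    "optimizer": ["sgd", "adam", "lion"],
}

# Hypothetical multiplicative efficiency gain attributed to each tweak.
FAKE_SCORES = {
    "relu": 1.0, "gelu": 1.1, "swish": 1.05,
    "cross_entropy": 1.0, "focal": 0.95,
    "sgd": 1.0, "adam": 1.2, "lion": 1.25,
}

def evaluate(config):
    """Stand-in for an expensive training run returning an efficiency score."""
    score = 1.0
    for choice in config.values():
        score *= FAKE_SCORES[choice]
    return score

def brute_force_search(space):
    """Enumerate every combination -- feasible only because the space is tiny."""
    keys = list(space)
    best_config, best_score = None, float("-inf")
    for values in product(*(space[k] for k in keys)):
        config = dict(zip(keys, values))
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = brute_force_search(SEARCH_SPACE)
print(best, round(score, 3))
```

The point of the sketch: each tweak contributes a small multiplicative gain, and enumerating the 18 combinations is trivial. What I don't see is how a space that is enumerable in this sense yields the order-of-magnitude jumps, or the novel learning algorithms, that the feedback-loop argument seems to need.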