I suppose this kind of report is less useful to you the more you think the uncertainty lies in the compute-efficiency translation factor. If you think most of the orders of magnitude of uncertainty are in that value, you don’t care so much about the biological anchors.
And maybe you’re in that state if you think building AGI is just a matter of coming up with clever algorithms. But if some as-yet undiscovered general reasoning algorithm is really the only thing that matters for AGI, then why are you at all impressed by increasingly capable AI systems that use more and more compute, like AlphaGo or GPT-3? What they do is (supposedly) not general reasoning, so why does it matter?
It seems to me that the compute-efficiency translation factor is a perfectly reasonable, non-mysterious quantity that we can also get information about and estimate. It’s not going to be the same factor across all tasks, but we could at least get some idea of what values it could plausibly take by looking at how much compute current (and older) systems need to match human performance on various tasks, and at how that number varies across tasks and changes over time.
I wouldn’t expect such an analysis to leave the translation factor so uncertain that our total uncertainty ends up concentrated overwhelmingly in that one parameter, making the biological anchors useless.
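
To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope analysis I have in mind. All of the task names and compute figures below are hypothetical placeholders, not real measurements; the point is only the shape of the calculation: for each task, compare the training compute an AI system needed to reach roughly human-level performance against a biological-anchor estimate of the compute for that task, and look at how the implied translation factor is spread across tasks.

```python
# Illustrative sketch only: every task name and compute figure here is a
# made-up placeholder, not a real measurement. The structure is what matters:
# per task, take the ratio of AI training compute (to reach ~human-level
# performance) to a biological-anchor compute estimate, and look at how that
# ratio is distributed across tasks and years.

import math
import statistics

# (task, year, AI training compute in FLOP, brain-anchor compute in FLOP)
# All values are hypothetical placeholders.
observations = [
    ("board game",        2016, 1e23, 1e21),
    ("language modeling", 2020, 3e23, 1e24),
    ("image recognition", 2015, 1e19, 1e20),
]

# Translation factor = AI compute needed / biological-anchor compute.
# Work in log10 space, since the uncertainty is naturally measured in
# orders of magnitude.
log_factors = [
    math.log10(ai_flop / brain_flop)
    for _, _, ai_flop, brain_flop in observations
]

print("log10 translation factors per task:",
      [round(x, 2) for x in log_factors])
print("spread (orders of magnitude):",
      round(max(log_factors) - min(log_factors), 2))
print("median log10 factor:", round(statistics.median(log_factors), 2))
```

Working in log space is the natural choice here, since the whole disagreement is about how many orders of magnitude of uncertainty sit in this one factor.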