I do think that something like dumb scaling can mostly just work
The exact degree of “mostly” is load-bearing here. You’d mentioned provisions for error-correction before. But are the necessary provisions something simple, such that the most blatantly obvious wrappers/prompt-engineering tricks work, or do we need to derive some additional nontrivial theoretical insights to correctly implement them?
Last I checked, AutoGPT-like stuff has mostly failed, so I’m inclined to think it’s closer to the latter.
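(For concreteness, here is roughly what I mean by the “blatantly obvious wrapper” approach: a toy retry loop that feeds failures back into the prompt. This is just a sketch; call_llm and try_execute are hypothetical placeholders, not any particular framework’s API.)

```python
# Toy sketch of the "blatantly obvious wrapper" approach to error-correction:
# propose an action, check whether it worked, and feed the failure back into
# the prompt for another attempt. All names here are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for a single stateless LLM completion call."""
    raise NotImplementedError

def try_execute(action: str) -> tuple[bool, str]:
    """Placeholder: attempt the proposed action, return (success, feedback)."""
    raise NotImplementedError

def error_correcting_loop(task: str, max_retries: int = 3) -> str | None:
    prompt = f"Task: {task}\nPropose the next action."
    for _ in range(max_retries):
        action = call_llm(prompt)
        ok, feedback = try_execute(action)
        if ok:
            return action
        # The "obvious" fix: append the failure and ask for a different attempt.
        prompt += f"\nYour last attempt failed: {feedback}\nTry a different approach."
    return None  # give up; in practice this is where such wrappers tend to stall
```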
I am unconvinced that “the” reliability issue is a single problem that will be solved by a single insight, rather than a matter of AIs lacking procedural knowledge of how to handle a bunch of finicky special cases, which will be solved by online learning or very long context windows once hardware costs decrease enough to make one of those approaches financially viable.
Yeah, I’m sympathetic to the argument that there won’t be a single insight and that at least one of those approaches will work out once hardware costs decrease enough, and I agree with Thane Ruthenis’s intuitions here less than I did before.
If I were to think about it a little, I’d suspect the big difference between LLMs and humans is state/memory: humans have persistent state/memory, while LLMs today are more or less stateless, and RNN training has not been solved to the extent that transformer training has.
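(To make the state/memory contrast concrete, here is a toy sketch comparing a stateless chat loop, which re-sends the transcript every turn, with an RNN-style step that carries a hidden state forward. llm_complete and rnn_step are hypothetical placeholders, not real APIs.)

```python
# Minimal sketch of the state/memory contrast, under the simplifying
# assumption that an LLM call is a pure function of its prompt.

def llm_complete(prompt: str) -> str:
    """Placeholder for a stateless LLM call: output depends only on the prompt."""
    raise NotImplementedError

def stateless_chat(history: list[str], user_msg: str) -> str:
    # The only "memory" is the transcript we re-send every turn, bounded by the
    # context window; lose the transcript and the model recalls nothing.
    history.append(f"User: {user_msg}")
    reply = llm_complete("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

def rnn_step(hidden_state, user_msg: str):
    """Placeholder for a stateful recurrent step: the model carries a compressed
    summary of everything seen so far in hidden_state and returns
    (new_hidden_state, reply), closer to how humans carry memory forward."""
    raise NotImplementedError
```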
One thing I will also say is that future AI winters will be shorter than past ones, because AI products can now be made at least somewhat profitable, and this gives AI research an independent base of money in a way that wasn’t possible pre-2016.
A factor stemming from the same cause but pushing in the opposite direction is that “mundane” AI profitability can “distract” people who would otherwise be AGI hawks.
Actually, I’ve changed my mind on the reliability question: it probably does need at least non-trivial theoretical insights to make these AIs work.