It seems good for me to list my predictions here. I don’t feel very confident; my overall sense is “I don’t really see why major conceptual breakthroughs are necessary.” (I agree we haven’t yet seen an AI do something like “discover actually significant novel insights.”)
This doesn’t translate into confidence in very short timelines, because the remaining engineering work (and “non-major” conceptual progress) might take a while, or might require a commitment of resources that won’t materialize before a hype bubble pops.
But:
a) I don’t see why novel insights or agency wouldn’t eventually fall out of relatively straightforward pieces of:
“make better training sets” (and training-set generating processes)
“do RL training on a wide variety of tasks”
“find some algorithmic efficiency advances that, sure, require ‘conceptual advances’ from humans, but of a straightforward enough kind that they don’t seem to require deep genius”
b) Even if (a) doesn’t work, I think “make AIs that are hyperspecialized at augmenting humans doing AI research” is pretty likely to work, and that, plus a lot of money/attention generally going into the space, seems to increase the likelihood of the field hitting The Crucial AGI Insights (if they exist) in a brute-force-but-clever kind of way.
Assembling the kind of training sets (or building the process that automatically generates such sets) you’d need for the RL seems annoyingly hard, but not genius-level hard.
I expect a couple of innovations roughly on the level of “inventing attention” that improve efficiency a lot but don’t require a deep understanding of intelligence.