I’ve also been bothered recently by a blurring of lines between “when AGI becomes as intelligent as humans” and “when AGI starts being able to recursively self-improve.” It’s not a priori obvious that these should happen at around the same capabilities level, yet I feel like it’s common to equivocate between them.
In any case, my world model says that an AGI should actually be able to recursively self-improve before reaching human-level intelligence. Just as you mentioned, I think the relevant intuition pump is “could I FOOM if I were an AI?” Considering the ability to tinker with my own source code and make lots of copies of myself to experiment on, I feel like the answer is “yes.”
That said, I think this intuition isn’t worth much for the following reasons:
The first AGIs will probably have their capabilities distributed very differently than humans do: they will probably be worse than humans at some tasks and much better at others. What really matters is how good they are at the task “do ML research” (or whatever paradigm we’re using to make AIs at the time). I think there are reasons to expect them to be especially good at ML research (relative to their general level of intelligence), but also reasons to expect them to be especially bad, and I don’t know which reasons to trust more. Note that modern narrow AIs already have some trivial ability to “do” ML research (e.g. GitHub Copilot).
Part of my above story about FOOMing involves making lots of copies of myself, but will it actually be easy for the first AGI (which might not be as generally intelligent as a human) to get the resources it needs to make lots of copies? This seems to depend on a lot of things I don’t have strong expectations about, e.g. how abundant the relevant resources are, how large the AGI is, etc.
Even if you think “AGI is human-level” and “AGI is able to recursively self-improve” represent very different capabilities levels, they might happen at very similar times, depending on what else you think about takeoff speeds.
Counter-anecdote: compilers have gotten only ~2x better over 20 years[1], while compile times have gotten substantially worse. That is nowhere near FOOM.
[1] Proebsting’s Law gives an 18-year doubling time. The 2001 reproduction suggested more like 20 years under optimistic assumptions, and a 2022 informal test showed a 10-15% improvement on average over the last 10 years (or a 50-year doubling time...)
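To make those footnote numbers concrete, here is a minimal back-of-the-envelope sketch (my own arithmetic, not taken from the cited measurements) converting an observed improvement factor into an implied doubling time, assuming a constant exponential rate of improvement:

```python
import math

def doubling_time_years(improvement_factor: float, years: float) -> float:
    """Years needed to double, given an improvement factor observed over `years`,
    assuming a constant exponential rate."""
    return years * math.log(2) / math.log(improvement_factor)

print(doubling_time_years(2.0, 20))   # ~2x over 20 years  -> 20-year doubling time
print(doubling_time_years(1.15, 10))  # 15% over 10 years  -> ~50-year doubling time
print(doubling_time_years(1.10, 10))  # 10% over 10 years  -> ~73-year doubling time
```

By this arithmetic, a 10-15% gain over a decade implies a doubling time of roughly 50-70+ years, which is the sense in which the compiler trend is “nowhere near FOOM.”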