I’ve begun to doubt (1) recently and would be interested in seeing the arguments in favor of it. My model is something like “well, I’m human-level, and I sure don’t feel like I could FOOM if I were an AI.”
I’ve also been bothered recently by a blurring of lines between “when AGI becomes as intelligent as humans” and “when AGI starts being able to recursively self-improve.” It’s not a priori obvious that these should happen at around the same capabilities level, yet I feel like it’s common to equivocate between them.
In any case, my world model says that an AGI should actually be able to recursively self-improve before reaching human-level intelligence. Just as you mentioned, I think the relevant intuition pump is “could I FOOM if I were an AI?” Considering the ability to tinker with my own source code and make lots of copies of myself to experiment on, I feel like the answer is “yes.”
That said, I think this intuition isn’t worth much for the following reasons:
The first AGIs will probably have their capabilities distributed very differently than humans—i.e. they will probably be worse than humans at some tasks and much better at others. What really matters is how good they are at the task “do ML research” (or whatever paradigm we’re using to make AIs at the time). I think there are reasons to expect them to be especially good at ML research (relative to their general level of intelligence), but also reasons to expect them to be especially bad, and I don’t know which reasons to trust more. Note that modern narrow AIs already have some trivial ability to “do” ML research (e.g. OpenAI’s Copilot).
Part of my above story about FOOMing involves making lots of copies of myself, but will it actually be easy for the first AGI (which might not be as generally intelligent as a human) to get the resources it needs to make lots of copies? This seems like it depends on a lot of stuff I don’t have strong expectations about, e.g. how abundant the relevant resources are, how large the AGI is, etc.
Even if you think “AGI is human-level” and “AGI is able to recursively self-improve” represent very different capabilities levels, they might happen at very similar times, depending on what else you think about takeoff speeds.
Counter-anecdote: compilers have gotten ~2x better in 20 years[1], at the cost of substantially worse compile times. This is nowhere near FOOM.
Proebsting’s Law gives an 18-year doubling time. The 2001 reproduction suggested more like 20 years under optimistic assumptions, and a 2022 informal test showed a 10-15% improvement on average in the last 10 years (or a 50-year doubling time...)
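For concreteness, here is the rule-of-thumb arithmetic behind those doubling-time figures, as a minimal sketch (assuming steady exponential progress; the 10–15%-over-a-decade datapoint is the informal test mentioned above, not a careful measurement):

```python
import math

def doubling_time(improvement_factor: float, span_years: float) -> float:
    """Years to double performance, given a total improvement_factor
    observed over span_years, assuming steady exponential progress."""
    return span_years * math.log(2) / math.log(improvement_factor)

# Proebsting's Law as stated: compilers double roughly every 18 years;
# the 2001 reproduction put it closer to 20 years under optimistic assumptions.
print(doubling_time(1.15, 10))  # ~50 years (15% total over the last decade)
print(doubling_time(1.10, 10))  # ~73 years (10% total over the last decade)
```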
The straightforward argument goes like this:
1. a human-level AGI would be running on hardware that makes the human constraints on memory and speed mostly go away, by ~10 orders of magnitude
2. if you could store 10 orders of magnitude more information and read 10 orders of magnitude faster, and if you were able to copy your own code somewhere else, and the kind of AI research and code generation tools available online were good enough to have created you, wouldn’t you be able to FOOM?
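(For what that “~10 orders of magnitude” is gesturing at, here is one very rough back-of-the-envelope version; the specific constants below, human reading throughput and hardware memory bandwidth, are my own ballpark assumptions rather than anything from the comment above.)

```python
import math

# Ballpark assumptions, not measurements:
human_read_bytes_per_s = 20          # ~4 words/s at ~5 bytes/word while reading
memory_bandwidth_bytes_per_s = 2e11  # ~200 GB/s for a modern CPU/accelerator memory system

ratio = memory_bandwidth_bytes_per_s / human_read_bytes_per_s
print(f"~{math.log10(ratio):.0f} orders of magnitude")  # ~10
```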
No, because of the generalized version of Amdahl’s law, which I explored in “Fast Minds and Slow Computers”.
The more you accelerate something, the slower and more limiting all its other hidden dependencies become.
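The standard Amdahl’s-law arithmetic makes this concrete; a minimal sketch (the 90% accelerated fraction below is an arbitrary illustrative assumption):

```python
def overall_speedup(accelerated_fraction: float, component_speedup: float) -> float:
    """Generalized Amdahl's law: total speedup when only `accelerated_fraction`
    of the workload is sped up by `component_speedup`; the rest runs as before."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / component_speedup)

# Even an effectively infinite speedup on 90% of the work caps out at 10x,
# because the untouched 10% (the hidden dependencies) comes to dominate.
print(overall_speedup(0.90, 1e6))  # ~10.0
print(overall_speedup(0.90, 10))   # ~5.3
```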
So by the time we get to AGI, regular ML research will have rapidly diminishing returns (and low-level CUDA software or hardware optimization will also have diminishing returns), general hardware improvement will be facing the end of Moore’s law, etc.
I don’t see why that last sentence follows from the previous sentences. In fact I don’t think it does. What if we get to AGI next year? Then returns won’t have diminished as much & there’ll be lots of overhang to exploit.
Sure, if we got to AGI next year. But for that to actually occur, you’d have to exploit most of the remaining optimization slack in both high-level ML and low-level algorithms. Then beyond that, Moore’s law has already mostly ended, or nearly so depending on who you ask, and most of the easy, obvious hardware architecture optimizations are now behind us.
Well I would assume a “human-level AI” is an AI which performs as well as a human when it has the extra memory and running speed? I think I could FOOM eventually under those conditions but it would take a lot of thought. Being able to read the AI research that generated me would be nice but I’d ultimately need to somehow make sense of the inscrutable matrices that contain my utility function.