I’ll try to summarize your point, as I understand it:
Intelligence is just one of many components. If you get huge amounts of intelligence, at that point you will be bottlenecked by something else, and even more intelligence will not help you significantly. (Company R&D doesn’t bring a “research explosion”.)
I’ll start with the analogy to company R&D.
Please note that if you substitute “humankind” for “company” and look at a historical timescale, investing in humankind’s R&D actually has brought us exponential growth over the recent centuries. (Who knows, we still might colonize the universe.) So the question is: why doesn’t the same effect work for a company?
I think the answer is that the R&D of even the richest companies is just a tiny fraction of the overall R&D of humankind (including historically). Even companies that do very impressive research are basically just adding the last step to a very long chain of research that happened somewhere else. As an analogy, imagine a sailor on one of Columbus’ ships who jumps into a boat 100 meters before reaching the shore of America, rows fast, and then takes historical credit for technically getting to America first. From a historical perspective, if all of humanity spends millennia researching physics, and then you take a few years and invent a microwave oven, it’s the same thing. If you didn’t do it, someone else probably would, a few years or decades later. We have historical examples of inventions made independently by multiple people in the same year, where only the one who got to the patent office first gets the credit. Even today, we have several companies inventing AI in parallel, because the science and technology are already in place, and they just need to take the last step. (If Moore’s law keeps working, a few decades from now a gifted student could do the same thing on their home computer.)
So I think the problem with a company that spends a lot on R&D is that the things it has researched today can give it an advantage today… but not tomorrow, because the world catches up. Yes, thanks to the “intellectual property” system, the rest of the world may not be allowed to use the same invention, no matter how many people are now capable of making it independently. But the rest of the world will still invent thousands of other things, and to remain competitive, the company has to study those other things too, where it has no advantage over its competitors.
As a thought experiment, what would it actually look like for a company to do 1% of humanity’s R&D? I think it would be like running a small first-world country, with its economy and educational system all in service of the company’s needs. But the rest of the world would still keep going. Also, it would be difficult to keep everything invented in your country a company secret.
As a crazier thought experiment, imagine that the rest of the world is not moving forward. Imagine that a group of scientists and soldiers get teleported a few millennia into the past, establish their own empire (the size of a smaller country), and start doing research. The rest of the world has a religious taboo against research, but within their empire, people are brainwashed to worship science. I could imagine them taking over the world.
So I think the proper conclusion is not “intelligence is not enough to take over the world” but rather “until now, any single actor’s intelligence was just a fraction of humanity’s intelligence, and discoveries leak to competitors”. A company can keep its secrets, but is too small. A country is large enough, but cannot keep secret the things it teaches at public schools.
Also, please note that LLMs are just one possible paradigm of AI. Yes, currently the best one, but who knows what tomorrow may bring. I think most AI doomers would agree that LLMs are not the kind of AI they fear. LLMs succeed by piggybacking on humanity’s written output, but they are also bottlenecked by it.
Then you have things like the chess- and go-playing machines, which can easily surpass humanity, but are too narrow.
The greatest danger is if someone invents an AI that is neither narrow nor bottlenecked by human output. Something that can think as efficiently as the chess machines, but about everything that humans think about.
I’ll try to summarize your point, as I understand it:
Intelligence is just one of many components. If you get huge amounts of intelligence, at that point you will be bottlenecked by something else, and even more intelligence will not help you significantly. (Company R&D doesn’t bring a “research explosion”.)
The core idea I’m trying to propose (but seem to have communicated poorly) is that the AI self-improvement feedback loop might (at some point) converge rather than diverge. In very crude terms, suppose GPT-8 has IQ 180, and we use ten million instances of it to design GPT-9; perhaps we get a system with IQ 190. Then we use ten million instances of GPT-9 to design GPT-10; perhaps that has IQ 195, and eventually GPT-∞ converges at IQ 200.
I do not claim this is inevitable, merely that it seems possible, or at any rate is not ruled out by any mathematical principle. It comes down to an empirical question of how much incremental R&D effort is needed to achieve each incremental increase in AI capability.
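To make the toy numbers explicit (this is just the arithmetic behind the example above, not a model of real scaling): if each generation’s gain is half the previous one, the total improvement is a convergent geometric series,

$$
\mathrm{IQ}_\infty = 180 + 10 + 5 + 2.5 + \dots = 180 + \frac{10}{1 - \tfrac{1}{2}} = 200,
$$

whereas if each gain were equal to or larger than the previous one, the same sum would diverge – that is the “intelligence explosion” case. Which regime we end up in is exactly the empirical question above.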
The point about the possibility of bottlenecks other than intelligence feeds into that question about R&D effort vs. increase in capability; if we double R&D effort but are bottlenecked on, say, training data, then we might get a disappointing increase in capability.
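As a toy illustration of that bottleneck point (a sketch of my own, with a made-up functional form; `rd_effort` and `training_data` are just placeholder names):

```python
import math

# Toy model (illustrative assumption, not a real scaling law):
# capability is limited by whichever input is scarcest.
def capability(rd_effort, training_data):
    # log(1 + x) just adds smooth diminishing returns in the binding input
    return math.log(1 + min(rd_effort, training_data))

print(capability(1.0, 1.0))  # baseline
print(capability(2.0, 1.0))  # double R&D effort, same data: no gain at all
print(capability(2.0, 2.0))  # relieve the data bottleneck too: a real, but sublinear, gain
```

A softer form of complementarity than min() would blunt the cliff, but the qualitative point stands: doubling the non-binding input buys very little.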
IIUC, much of the argument you’re making here is that the existing dynamic of IP laws, employee churn, etc. puts a limit on the amount of R&D investment that any given company is willing to make, and that these incentives might soon shift in a way that could unleash a drastic increase in AI R&D spending? That seems plausible, but I don’t see how it ultimately changes the slope of the feedback loop – it merely allows for a boost up the early part of the curve?
Also, please note that LLMs are just one possible paradigm of AI. Yes, currently the best one, but who knows what tomorrow may bring. I think most AI doomers would agree that LLMs are not the kind of AI they fear. LLMs succeed by piggybacking on humanity’s written output, but they are also bottlenecked by it.
Agreed that there’s a very good chance that AGI may not look all that much like an LLM. And so when we contemplate the outcome of recursive self-improvement, a key question will be what the R&D vs. increase-in-capability curve looks like for whatever architecture emerges.
I agree that the AI cannot improve literally forever. At some point it will hit a limit, even if that limit is only that it has already become nearly perfect, so there is nothing left to improve, or the tiny remaining improvements are not worth their cost in resources. So, an S-curve it is, in the long term.
But for practical purposes, the bottom part of the S-curve looks similar to an exponential function. So if we happen to be near that bottom, it doesn’t matter that the AI will hit some fundamental limit on self-improvement around 2200 AD if it has already wiped out humanity in 2045.
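To spell that out, using a logistic curve as a stand-in for “the” S-curve (an assumption; any sigmoid behaves similarly): far below the ceiling $L$,

$$
f(t) = \frac{L}{1 + e^{-k(t - t_0)}} \;\approx\; L\, e^{k(t - t_0)} \qquad \text{while } e^{-k(t - t_0)} \gg 1,
$$

so early growth is multiplicative at rate $k$, and the ceiling $L$ is essentially invisible from below.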
So the question is which part of the S-curve we are in now, and whether the AI explosion hits diminishing returns soon enough, i.e. before the things AI doomers are afraid of can happen. If it only happens later, that is small consolation.