The notion of an AI-enabled “pivotal act” seems misguided. Aligned AI systems can reduce the period of risk of an unaligned AI by advancing alignment research, convincingly demonstrating the risk posed by unaligned AI, and consuming the “free energy” that an unaligned AI might have used to grow explosively. No particular act needs to be pivotal in order to greatly reduce the risk from unaligned AI, and the search for single pivotal acts leads to unrealistic stories of the future and unrealistic pictures of what AI labs should do.
We could maybe make the world safer a little at a time, but we do have to get to an equilibrium in which the world is protected from explosive growth: the situation where some system (including an ecosystem of multiple AIs) starts pulling away from the growth rate of the rest of the world and gains decisive power.
My model here is something like “even small differences in the rate at which systems are compounding power and/or intelligence lead to gigantic differences in absolute power and/or intelligence, given that the world is moving so fast.”
Or maybe another way to say it: the speed at which a given system can compound its abilities is very fast, relative to the rate at which innovations diffuse through the economy for other groups and other AIs to take advantage of.
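To make the compounding point concrete, here's a toy sketch (all the numbers are made up; it just shows that the same proportional edge in growth rate opens a much bigger lead per calendar year when the underlying doublings are fast):

```python
# Toy model (all numbers made up): the same proportional edge in growth
# rate opens a much bigger lead per calendar year when doublings are fast.

def lead_after(years, doubling_fast, doubling_slow):
    """Size ratio of a system doubling every `doubling_fast` years over
    one doubling every `doubling_slow` years, after `years` years."""
    return 2 ** (years / doubling_fast) / 2 ** (years / doubling_slow)

# Fast world: doublings roughly yearly, one side ~8% faster.
print(lead_after(10, 11 / 12, 1.0))  # ~1.9x lead after a decade
# Slow world: the same ~8% proportional edge, but decade-long doublings.
print(lead_after(10, 11.0, 12.0))    # ~1.05x lead after a decade
```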
It seems like all of the proposals that meet this desideratum (making the world safe from that kind of explosion in the power of one system over all the others) look pretty pivotal-act-like, rather than like a series of marginal improvements.
I’m a bit skeptical of this. While I agree that small differences in growth rates can be very meaningful, I think it’s quite difficult to maintain a growth rate faster than the rest of the world for an extended period of time.
Growth and Trade
The reason is that growth is way easier if you engage in trade. And assuming that gains from trade are shared evenly, the rest of the world profits just as much (in absolute terms) as you do from any trade. So you can only grow significantly faster than the rest of the world while you're small relative to the size of the whole world.
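As a toy calculation of why evenly shared gains stop conferring a relative advantage (the sizes and the surplus below are arbitrary):

```python
# Both parties to a trade gain the same absolute surplus, so the
# percentage boost is large only for whichever party is still small.
# All numbers below are arbitrary.
surplus = 10.0  # absolute gain from a trade, split evenly

for you, rest_of_world in [(100.0, 10_000.0), (5_000.0, 10_000.0)]:
    print(f"your size {you:>6.0f}: you gain {surplus / you:.2%}, "
          f"rest of world gains {surplus / rest_of_world:.2%}")
```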
To give a couple of illustrative examples:
The “Asian Tigers” saw their economies grow faster than GWP during the second half of the 20th century because they were engaged in “catch-up” growth. Once their GDP per capita got into the same ballpark as other developed countries, they slowed down to a similar growth rate to those countries.
Tesla has grown revenue at an average of 50% per year for 10 years. That’s been possible because they started out as a super small fraction of all car sales, and there were many orders of magnitude of growth available. I expect them to continue growing at something close to that rate for another 5-10 years, but then they’ll slow down because the global car market is only so big.
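For concreteness, the compounding arithmetic behind that (the 50% figure is from the paragraph above; the rest is just arithmetic):

```python
# 50%/year compounds to roughly 58x per decade, so a firm starting as a
# tiny fraction of its market hits the market's total size within a
# couple more decades at that rate.
revenue = 1.0
for year in range(10):
    revenue *= 1.5
print(f"~{revenue:.0f}x after 10 years")  # ~58x
```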
Growth without Trade
Now imagine that you’re a developing nation, or a nascent car company, and you want to try to grow your economy, or the number of cars you make, but you’re not allowed to trade with anyone else.
For a nation it sounds possible, but you’re playing on super hard mode. For a car company it sounds impossible.
Hypotheses
This suggests to me the following hypotheses:
1. Any entity that tries to grow without engaging in trade is going to be outcompeted by those that do trade, but
2. Entities that grow via trade will have their absolute growth capped at the absolute growth of the rest of the world, and thus their growth rate will max out at the same rate as the rest of the world once they're an appreciable fraction of the global economy.
I don’t think these hypotheses are necessarily true in every case, but it seems like they would tend to be true. So to me that makes a scenario where explosive growth enables an entity to pull away from the rest of the world seem a bit less likely.
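To see what hypothesis 2 implies quantitatively, here's a minimal simulation. The starting sizes and rates are hypothetical, and the cap is a direct encoding of the hypothesis rather than a real economic model:

```python
# An entity with a fast intrinsic growth rate whose absolute growth each
# year is capped (per hypothesis 2) at the absolute growth of the rest
# of the world. All parameters are hypothetical.
entity, rest = 1.0, 1000.0      # starting sizes
r_entity, r_rest = 0.50, 0.03   # intrinsic annual growth rates

for year in range(1, 31):
    rest_growth = rest * r_rest
    entity_growth = min(entity * r_entity, rest_growth)  # the cap
    prev = entity
    entity += entity_growth
    rest += rest_growth
    if year % 10 == 0:
        print(f"year {year}: share {entity / (entity + rest):.1%}, "
              f"realized growth rate {entity_growth / prev:.1%}")
```

In this sketch the entity grows at its intrinsic 50% rate while it's small, but once the cap binds its realized growth rate falls steadily toward the rest of the world's rate as its share of the world rises.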
I agree that small differences in growth rates between firms or countries, compounded over many doublings of total output, will lead to large differences in final output. But I think there are quite a lot of other moving parts in this story before you get to the need for a pivotal act. It seems like you aren’t pointing to the concentration of power per se (if you were, I think your remedies would look like normal boring stuff like corporate governance!); rather, I think you are making much more opinionated claims about the risk posed by misalignment.
Most proximately, I don’t think that “modestly reduce the cost of alignment” or “modestly slow the development or deployment of unaligned AI” need to look like pivotal acts. It seems like humans can do those things a bit, and plausibly, with no AI assistance, can buy more than a year of delay per year of effort. AI assistance could help humans do those things better, improving our chances of getting over one year of delay per year. Modest governance changes could reduce the per-year risk of catastrophe. You don’t necessarily have to delay that long in calendar time in order to get alignment solutions. And so on.
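As a last bit of toy arithmetic for the “more than a year of delay per year” point (the numbers are invented): if each calendar year of effort pushes unaligned AI back by more than a year, the remaining runway grows rather than shrinks.

```python
# Runway = years until unaligned AI arrives. Each calendar year, one
# year elapses (-1) and d years of delay are bought (+d); with d > 1
# the runway grows instead of shrinking. Numbers are hypothetical.
runway = 10.0  # initial years of runway
d = 1.2        # years of delay bought per calendar year of effort

for year in range(1, 26):
    runway += d - 1.0
    if year % 5 == 0:
        print(f"after {year} years of effort: {runway:.1f} years of runway")
```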