Planned summary:
Economies of scale would normally mean that companies would keep growing larger and larger. With human employees, the coordination costs grow superlinearly, which ends up limiting the size to which a company can grow. However, with the advent of AGI, many of these coordination costs will be removed. If we can align AGIs to particular humans, then a corporation run by AGIs aligned to a single human would at least avoid principal-agent costs. As a result, the economies of scale would dominate, and companies would grow much larger, leading to more centralization.
Planned opinion:
This argument is quite compelling to me under the assumption of human-level AGI systems that can be intent-aligned. Note though that while the development of AGI systems removes principal-agent problems, it doesn’t remove issues that arise due to information asymmetry.
It does seem like this doesn’t hold with something like CAIS, where each AI service is optimized for a particular task, since there likely will be principal-agent problems between services.
It seems like the argument should mainly make us more worried about stable authoritarian regimes: the main effect based on this argument is a centralization of power in the hands of the AGI’s overseers. This won’t happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn’t seem to be a strong reason to expect that to stop. It could happen with government, but if long-term governmental power still rests with the people via democracy, that seems okay. So the risky situation seems to be when the government gains power, and the people no longer have effective control over government. (This would include scenarios with e.g. a government that has sufficiently good AI-fueled propaganda that it always wins elections, regardless of whether its governing is actually good.)
Thanks, I appreciate the chance to clarify/discuss before your newsletter goes out!
Note though that while the development of AGI systems removes principal-agent problems, it doesn’t remove issues that arise due to information asymmetry.
By “asymmetric information” I was referring to some specific ideas in economics, the short version of which is that if two people have different values, they often can’t just ask each other to honestly tell them their private information. For example, part of the reason why monopolies are inefficient is that they can’t do perfect price discrimination. If a monopolist knew exactly how much each potential buyer values their product, they could just charge that buyer one penny less than that value (plus the buyer’s transaction cost) and the buyer would still make the purchase. There would be no deadweight loss to the economy because all positive-value transactions would go through. But because the buyer won’t tell them how much they really value the product (if the monopolist charges one penny less than whatever they say, they’d just all give a low value), the monopolist ends up trying to maximize profit by charging a price that causes some potentially valuable trades to not go through.
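To make the deadweight-loss point concrete, here is a toy numerical sketch (the valuations and cost below are made-up numbers for illustration, not from the discussion):

```python
# Toy illustration: a monopolist charging one profit-maximizing price
# vs. perfect price discrimination. Made-up numbers for the sketch.

valuations = [10, 8, 6, 4, 2]  # how much each potential buyer values the product
cost = 3                       # monopolist's marginal cost per unit

def profit_at_price(price):
    # Buyers purchase iff their valuation is at least the price.
    return sum(price - cost for v in valuations if v >= price)

# Single price: the monopolist picks the profit-maximizing price.
best_price = max(valuations, key=profit_at_price)

# Perfect price discrimination: charge each buyer (a penny less than) their
# valuation, so every positive-value trade goes through and the monopolist
# captures the entire surplus.
total_surplus = sum(v - cost for v in valuations if v > cost)

# Under the single price, some surplus goes to buyers, and some positive-value
# trades (the v=6 and v=4 buyers here) never happen: that's the deadweight loss.
buyer_surplus = sum(v - best_price for v in valuations if v >= best_price)
deadweight_loss = total_surplus - profit_at_price(best_price) - buyer_surplus

print(best_price, profit_at_price(best_price), total_surplus, deadweight_loss)
```

With these numbers the monopolist charges 8, earning 10 in profit, while perfect discrimination would yield 16 in total surplus; the 4 units of lost surplus come from the two buyers who value the product above cost but below the posted price.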
So principal-agent problems are a subset of asymmetric information problems, and I think AGI would solve such problems because copies of a single AGI all have the same values (or are at least much closer to this than different humans are), so they can just ask each other to honestly tell them whatever information they need.
ETA: By “information asymmetry” did you mean something more like the fact that different copies or parts of an AGI can have access to different information and it can be costly (on a technical level) to propagate that information across the whole AGI? If so that seems like a much smaller cost than the kind of cost from “asymmetric information” that I’m talking about. Also it seems like it would be good to use a different phrase to talk about what you mean, so people don’t confuse it with the concept of “asymmetric information” in economics. (I’ll also add a link to that phrase in the post to clarify what I’m referring to.)
It does seem like this doesn’t hold with something like CAIS, where each AI service is optimized for a particular task, since there likely will be principal-agent problems between services.
This seems right to me, so I think contra Drexler, this is another reason to expect a strong competitive pressure to move from CAIS to AGI.
This won’t happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn’t seem to be a strong reason to expect that to stop.
Today, when companies get too big, they often become monopolists, and that (plus their huge internal coordination costs) tends to make the overall efficiency of the economy worse, so competitive pressures between countries force institutions into existence to limit the size of companies. With AGI-operated companies, these problems become smaller because monopolies in different industries can merge without being limited by internal coordination costs, and these merged companies can internally charge each other efficient prices. In the limit of a single AGI controlling the whole economy, all such inefficiencies go away. So while institutions that prevent companies from gaining too much power will perhaps persist for a while due to inertia (but even that’s unclear, as Raemon suggests), that probably won’t last very long when selection pressure for such institutions switches direction.
ETA: On second thought, part of the reason for such institutions to exist must also be domestic political pressure (from people who are afraid of too much concentration of power), so at least that pressure would persist in countries where such pressure exists or has much force in the first place.
ETA: By “information asymmetry” did you mean something more like the fact that different copies or parts of an AGI can have access to different information and it can be costly (on a technical level) to propagate that information across the whole AGI? If so that seems like a much smaller cost than the kind of cost from “asymmetric information” that I’m talking about. Also it seems like it would be good to use a different phrase to talk about what you mean, so people don’t confuse it with the concept of “asymmetric information” in economics.
Yes, that’s right, though I can see how it’s confusing based on the economics literature. Any suggestions for an alternative phrase? I was considering “communication costs”, but there could also be costs from the fact that different parts have different competencies.
It’s not clear to me that principal-agent costs are more important than the ones I’m talking about here. My experience of working in large companies is not that I was misaligned with the company, it was that the company’s “plan” (to the extent that one existed) was extremely large and complex and not something I could easily understand. It could be that this is actually the most efficient way to work even with intent-aligned agents, since communicating the full plan could involve very large communication costs.
(I agree that the Moral Mazes arguments are primarily about principal-agent problems, but I don’t know how much to believe Moral Mazes.)
This seems right to me, so I think contra Drexler, this is another reason to expect a strong competitive pressure to move from CAIS to AGI.
Seems reasonable, though I don’t think it is arguing against the main arguments in favor of CAIS (which to me are that CAIS seems more technically feasible than AGI).
With AGI-operated companies, these problems become smaller because monopolies in different industries can merge without being limited by internal coordination costs and these merged companies can internally charge each other efficient prices.
I don’t see how this suggests that our existing institutions to prevent centralization of power will go away, since even now monopolies could merge, often want to merge, but are prevented by law from doing so. (Though I’m not very confident in this claim, I’m mostly parroting back things I’ve heard.)
In the limit of a single AGI controlling the whole economy, all such inefficiencies go away.
Right, but that requires government buy-in, which is exactly my model of risk in the opinion I wrote.
On second thought, part of the reason for such institutions to exist must also be domestic political pressure (from people who are afraid of too much concentration of power), so at least that pressure would persist in countries where such pressure exists or has much force in the first place.
Yeah, that’s my primary model here. I’d be surprised but not shocked if competition between countries explained most of the effect.
Yeah, that’s my primary model here. I’d be surprised but not shocked if competition between countries explained most of the effect.
It seems worth noting here that when it looked for a while like the planned economy of the Soviet Union might outperform western free market economies (and even before that, when many intellectuals just thought based on theory that central planning would perform better *), there were a lot of people in the west who supported switching to socialism / central planning. Direct military competition (which Carl’s paper focuses on more) would make this pressure even stronger. So if one country switches to the “one AGI controls everything” model (either deliberately or due to weak/absent existing institutions that work against centralization), it seems hard for other countries to hold out in the long run.
Does that seem right to you, or do you see things turn out a different way (in the long run)?
(* I realize this is also a cautionary tale about using theory to predict the future, like I’m trying to do now.)
Does that seem right to you, or do you see things turn out a different way (in the long run)?
I agree that direct military competition would create such a pressure.
I’m not sure that absent that there actually is competition between countries—what are they even competing on? You’re reasoning as though they compete on economic efficiency, but what causes countries with lower economic efficiency to vanish? Perhaps in countries with lower economic efficiency, voters tend to put in a new government—but in that case it seems like really the competition between countries is on “what pleases voters”, which may not be exactly what we want but it probably isn’t too risky if we have an AGI-fueled government that’s intent-aligned with “what pleases voters”.
(It’s possible that you get politicians who look like they’re trying to please voters but once they have enough power they then serve their own interests, but this looks like “the government gains power, and the people no longer have effective control over government”.)
I’m not sure that absent that there actually is competition between countries—what are they even competing on? You’re reasoning as though they compete on economic efficiency, but what causes countries with lower economic efficiency to vanish?
I guess ultimately they’re competing to colonize the universe, or be one of the world powers that have some say in the fate of the universe? Absent military conflict, the less efficient countries won’t disappear, but they’ll fall increasingly behind in control of resources and overall bargaining power, and their opinions just won’t be reflected much in how the universe turns out.
In that case this model would only hold if governments:
Actually think through the long-term implications of AI
Think about this particular argument
Have enough certainty in this argument to actually act upon it
Notably, there aren’t any feedback loops for the thing-being-competed-on, and so natural-selection style optimization doesn’t happen. This makes me much less likely to believe in arguments of the form “The thing-being-competed-on will have a high value, because there is competition”—the mechanism that usually makes that true is natural selection or some equivalent.
I think I oversimplified my model there. Actually competing to colonize/influence the universe will be the last stage, when the long-term implications of AI and of this particular argument will already be clear. Before that, the dynamics would be driven more by things like internal political and economic processes (some countries already have authoritarian governments and would naturally gravitate towards more centralization of power through political means, and others do not have strong laws/institutions to prevent centralization of the economy through market forces), competition for power (such as diplomatic and military power) and prestige (both of which are desired by leaders and voters alike) on the world stage, and direct military conflicts.
All of these forces create pressure towards greater AGI-based centralization, while the only thing pushing against it appears to be political pressure in some countries against centralization of power. If those countries succeed in defending against centralization but fall significantly behind in economic growth as a result, they will end up not influencing the future of the universe much so we might as well ignore them and focus on the others.
Yes, that’s right, though I can see how it’s confusing based on the economics literature. Any suggestions for an alternative phrase? I was considering “communication costs”, but there could also be costs from the fact that different parts have different competencies.
This is longer, but maybe “coordination costs that are unrelated to value differences”?
It’s not clear to me that principal-agent costs are more important than the ones I’m talking about here. My experience of working in large companies is not that I was misaligned with the company, it was that the company’s “plan” (to the extent that one existed) was extremely large and complex and not something I could easily understand. It could be that this is actually the most efficient way to work even with intent-aligned agents, since communicating the full plan could involve very large communication costs.
If companies had fully aligned workers and managers, they could adopt what Robin Hanson calls the “divisions” model where each division works just like a separate company except that there is an overall CEO that “looks for rare chances to gain value by coordinating division activities” (such as, in my view, internally charge each other efficient prices instead of profit-maximizing prices), so you’d still gain efficiency as companies merge or get bigger through organic growth. In other words, coordination costs that are unrelated to value differences won’t stop a single AGI controlling all resources from being the most efficient way to organize an economy.
While searching for that post, I also came across Firm Inefficiency which like Moral Mazes (but much more concisely) lists many inefficiencies that seem all or mostly related to value differences.
Seems reasonable, though I don’t think it is arguing against the main arguments in favor of CAIS (which to me are that CAIS seems more technically feasible than AGI).
I think it’s at least one of the main arguments that Eric Drexler makes, since he wrote this in his abstract:
Perhaps surprisingly, strongly self-modifying agents lose their instrumental value even as their implementation becomes more accessible, while the likely context for the emergence of such agents becomes a world already in possession of general superintelligent-level capabilities.
(My argument says that a strongly self-modifying agent will improve faster than a self-improving ecosystem of CAIS with access to the same resources, because the former won’t suffer from principal-agent costs while researching how to self-improve.)
I don’t see how this suggests that our existing institutions to prevent centralization of power will go away, since even now monopolies could merge, often want to merge, but are prevented by law from doing so. (Though I’m not very confident in this claim, I’m mostly parroting back things I’ve heard.)
Yeah I’m not very familiar with this either, but my understanding is that such mergers are only illegal if the effect “may be substantially to lessen competition” or “tend to create a monopoly”, which technically (it seems to me) isn’t the case when existing monopolies in different industries merge.
If companies had fully aligned workers and managers, they could adopt what Robin Hanson calls the “divisions” model where each division works just like a separate company except that there is an overall CEO that “looks for rare chances to gain value by coordinating division activities”
Once you switch to the “divisions” model your divisions are no longer competing with other firms, and all the divisions live or die as a group. So you’re giving up the optimization that you could get via observing which companies succeed / fail at division-level tasks. I’m not sure how big this effect is, though I’d guess it’s small.
While searching for that post, I also came across Firm Inefficiency which like Moral Mazes (but much more concisely) lists many inefficiencies that seem all or mostly related to value differences.
Yeah, I’m more convinced now that principal-agent issues are significantly larger than other issues.
I think it’s at least one of the main arguments that Eric Drexler makes, since he wrote this in his abstract
Yeah, I agree it’s an argument against that argument from Eric. I forgot that Eric makes that point (mainly because I have never been very convinced by it).
Yeah I’m not very familiar with this either, but my understanding is that such mergers are only illegal if the effect “may be substantially to lessen competition” or “tend to create a monopoly”, which technically (it seems to me) isn’t the case when existing monopolies in different industries merge.
My guess would be that the spirit of the law would apply, and that would be enough, but really I’d want to ask a social scientist or lawyer.
Once you switch to the “divisions” model your divisions are no longer competing with other firms, and all the divisions live or die as a group.
Why? Each division can still have separate profit-loss accounting, so you can decide to shut one down if it starts making losses and the benefits of having that division to the rest of the company don’t outweigh the losses. The latter may be somewhat tricky to judge, though. Perhaps that’s what you meant?
Yeah, I’m more convinced now that principal-agent issues are significantly larger than other issues.
I should perhaps mention that I still have some uncertainty about this, mainly because Robin Hanson said “There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination.” But I haven’t been able to find any place where he wrote down what those other factors are, nor did he answer when I asked him about it.
Why? Each division can still have separate profit-loss accounting, so you can decide to shut one down if it starts making losses, and the benefits of having that division to the rest of the company doesn’t outweigh the losses. The latter may be somewhat tricky to judge though. Perhaps that’s what you meant?
That’s a good point. I was imagining that each division ends up becoming a monopoly in its particular area due to the benefits of within-firm coordination, which means that even if the division is inefficient there isn’t an alternative that the firm can go with. But that was an assumption, and I’m not sure it would actually hold.
I like the practice of posting the planned summary and opinion – not sure if you’ve been doing that awhile or just started but I think it’s quite good as a practice.
I do it sometimes, especially when I expect it to be more controversial, and also when I’m not doing it super last minute an hour before the newsletter goes out. I’m a bit behind on everything but do want to move in this direction in the future.
This won’t happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn’t seem to be a strong reason to expect that to stop
...why do you expect those institutions to hold up in a world dominated by AGI or other powerful AI systems? (Maybe specifically, which institutions do you mean? The main options seem like ‘governments’ and ‘other companies’.)
The US government (I’m not sure about other governments) seems to lag something like 10 years behind technological developments. (This is a rough guess based on how long I recall seeing the government take significant actions around regulating things – epistemic status: based on news articles that made it to my eyeballs… usually via Facebook.)
And that’s before they start trying to take significant actions, which usually still seem pretty confused. (e.g. GDPR doesn’t really incentivize the things it needed to incentivize)
Assuming I’m roughly correct that there’s a lag, there’d be a several year window where the institutions that normally regulate companies are going to be too confused and disoriented to do so. You might hope that those governments are also being empowered by advanced AI stuff, but I’m approximately as worried about that as I am about companies.
(I realize I didn’t get that specific about the details, which are complicated, but I was somewhat surprised by your entire final paragraph and I’m not sure where the disagreement lies)
Maybe specifically, which institutions do you mean?
Governments, and specifically antitrust law.
I think there are big differences between the current situation and previous technologies: a) it is higher-stakes and b) even industry seems to be somewhat pro-regulation.
Assuming I’m roughly correct that there’s a lag, there’d be a several year window where the institutions that normally regulate companies are going to be too confused and disoriented to do so.
I’m trying to cash this out into a more concrete failure story. Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented the government says “you’re too big, you need to be broken up” and the company says “no” and takes over the government?
Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented the government says “you’re too big, you need to be broken up” and the company says “no” and takes over the government?
Sort of, but worse – I’m imagining something more like “the government already has a lot of regulatory capture going on, so the system-as-is is already fairly broken. Even given slow-ish takeoff assumptions, it seems like within 2-3 years there will be one or several companies that have gained unprecedented amounts of power. And by the time the government has even figured out an action to take, it will either have already been taken over, regulatory-captured in ways much deeper than previously, or rendered irrelevant.”
Okay, I see, that makes sense and seems plausible, though I’d bet against it happening. But you’ve convinced me that I should qualify that sentence more.
I suppose another way this could happen is that the company could set up a branch in a much poorer and easily corrupted nation; since it’s not constrained by people, it could build up a very large amount of power in a place that’s beyond the reach of a superpower’s antitrust institutions.
I suppose that’s true. Although assuming that the company has developed intent aligned AGI, I don’t see why the entire branch couldn’t be automated, with the exception of a couple of human figureheads. Even if the AGI isn’t good enough to do AI research, or the company doesn’t trust it to do that, there are other methods for the company to grow. For instance, it could set up fully automated mining operations and factories in the corrupted country.
Oh, right, I forgot we were considering the setting where we already have AGI systems that can be intent aligned. This seems like a plausible story, though it only implies that there is centralization within the corrupted nation.
Planned summary:
Economies of scale would normally mean that companies would keep growing larger and larger. With human employees, the coordination costs grow superlinearly, which ends up limiting the size to which a company can grow. However, with the advent of AGI, many of these coordination costs will be removed. If we can align AGIs to particular humans, then a corporation run by AGIs aligned to a single human would at least avoid principal-agent costs. As a result, the economies of scale would dominate, and companies would grow much larger, leading to more centralization.
Planned opinion:
This argument is quite compelling to me under the assumption of human-level AGI systems that can be intent-aligned. Note though that while the development of AGI systems removes principal-agent problems, it doesn’t remove issues that arise due to information asymmetry.
It does seem like this doesn’t hold with something like CAIS, where each AI service is optimized for a particular task, since there likely will be principal-agent problems between services.
It seems like the argument should mainly make us more worried about stable authoritarian regimes: the main effect based on this argument is a centralization of power in the hands of the AGI’s overseers. This won’t happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn’t seem to be a strong reason to expect that to stop. It could happen with government, but if long-term governmental power still rests with the people via democracy, that seems okay. So the risky situation seems to be when the government gains power, and the people no longer have effective control over government. (This would include scenarios with e.g. a government that has sufficiently good AI-fueled propaganda that they always win elections, regardless of whether their governing is actually good.)
Thanks, I appreciate the chance to clarify/discuss before your newsletter goes out!
By “asymmetric information” I was referring to some specific ideas in economics, the short version of which is that if two people have different values, they often can’t just ask each other to honestly tell them their private information. For example, part of the reason why monopolies are inefficient is that they can’t do perfect price discrimination. If a monopolist knew exactly how much each potential buyer values their product, they could just charge that buyer one penny less than that value (plus the buyer’s transaction cost) and the buyer would still make the purchase. There would be no deadweight loss to the economy because all positive-value transactions would go through. But because the buyer won’t tell them how much they really value the product (if the monopolist charges one penny less than whatever they say, they’d just all give a low value), the monopolist ends up trying to maximize profit by charging a price that causes some potentially valuable trades to not go through.
So principal-agent problems are a subset of asymmetric information problems, and I think AGI would solve such problems because copies of a single AGI all have the same values (or are at least much closer to this than the situation with different humans) so they can just ask each other to honestly tell them whatever information they need.
ETA: By “information asymmetry” did you mean something more like the fact that different copies or parts of an AGI can have access to different information and it can be costly (on a technical level) to propagate that information across the whole AGI? If so that seems like a much smaller cost than the kind of cost from “asymmetric information” that I’m talking about. Also it seems like it would be good to use a different phrase to talk about what you mean, so people don’t confuse it with the concept of “asymmetric information” in economics. (I’ll also add a link to that phrase in the post to clarify what I’m referring to.)
This seems right to me, so I think contra Drexler, this is another reason to expect a strong competitive pressure to move from CAIS to AGI.
Today, when companies get too big, they often become monopolists, and that (plus their huge internal coordination costs) tend to make the overall efficiency of the economy worse, so competitive pressures between countries force institutions into existence to limit the size of companies. With AGI-operated companies, these problems become smaller because monopolies in different industries can merge without being limited by internal coordination costs and these merged companies can internally charge each other efficient prices. In the limit of a single AGI controlling the whole economy, all such inefficiencies go away. So while institutions that prevent companies from gaining too much power will perhaps persist for a while due to inertia (but even that’s unclear, as Raemon suggests), that probably won’t last very long when selection pressure for such institutions switches direction.
ETA: On second thought, part of the reason for such institutions to exist must also be domestic political pressure (from people who are afraid of too much concentration of power), so at least that pressure would persist in countries where such pressure exists or has much force in the first place.
Yes, that’s right, though I can see how it’s confusing based on the economics literature. Any suggestions for an alternative phrase? I was considering “communication costs”, but there could also be costs from the fact that different parts have different competencies.
It’s not clear to me that principal-agent costs are more important than the ones I’m talking about here. My experience of working in large companies is not that I was misaligned with the company, it was that the company’s “plan” (to the extent that one existed) was extremely large and complex and not something I could easily understand. It could be that this is actually the most efficient way to work even with intent-aligned agents, since communicating the full plan could involve very large communication costs.
(I agree that the Moral Mazes arguments are primarily about principal-agent problems, but I don’t know how much to believe Moral Mazes.)
Seems reasonable, though I don’t think it is arguing against the main arguments in favor of CAIS (which to me are that CAIS seems more technically feasible than AGI).
I don’t see how this suggests that our existing institutions to prevent centralization of power will go away, since even now monopolies could merge, often want to merge, but are prevented by law from doing so. (Though I’m not very confident in this claim, I’m mostly parroting back things I’ve heard.)
Right, but that requires government buy-in, which is exactly my model of risk in the opinion I wrote.
Yeah, that’s my primary model here. I’d be surprised but not shocked if competition between countries explained most of the effect.
It seems worth noting here that when it looked for a while like the planned economy of the Soviet Union might outperform western free market economies (and even before that, when many intellectuals just thought based on theory that central planning would perform better *), there were a lot of people in the west who supported switching to socialism / central planning. Direct military competition (which Carl’s paper focuses on more) would make this pressure even stronger. So if one country switches to the “one AGI controls everything” model (either deliberately or due to weak/absent existing institutions that work against centralization), it seems hard for other countries to hold out in the long run.
Does that seem right to you, or do you see things turn out a different way (in the long run)?
(* I realize this is also a cautionary tale about using theory to predict the future, like I’m trying to do now.)
I agree that direct military competition would create such a pressure.
I’m not sure that absent that there actually is competition between countries—what are they even competing on? You’re reasoning as though they compete on economic efficiency, but what causes countries with lower economic efficiency to vanish? Perhaps in countries with lower economic efficiency, voters tend to put in a new government—but in that case it seems like really the competition between countries is on “what pleases voters”, which may not be exactly what we want but it probably isn’t too risky if we have an AGI-fueled government that’s intent-aligned with “what pleases voters”.
(It’s possible that you get politicians who look like they’re trying to please voters but once they have enough power they then serve their own interests, but this looks like “the government gains power, and the people no longer have effective control over government”.)
I guess ultimately they’re competing to colonize the universe, or be one of the world powers that have some say in the fate of the universe? Absent military conflict, the less efficient countries won’t disappear, but they’ll fall increasingly behind in control of resources and overall bargaining power, and their opinions just won’t be reflected much in how the universe turns out.
In that case this model would only hold if governments:
- Actually think through the long-term implications of AI
- Think about this particular argument
- Have enough certainty in this argument to actually act upon it
Notably, there aren’t any feedback loops for the thing-being-competed-on, and so natural-selection style optimization doesn’t happen. This makes me much less likely to believe in arguments of the form “The thing-being-competed-on will have a high value, because there is competition”—the mechanism that usually makes that true is natural selection or some equivalent.
I think I oversimplified my model there. Actually competing to colonize/influence the universe will be the last stage, when the long-term implications of AI and of this particular argument will already be clear. Before that, the dynamics would be driven more by things like internal political and economic processes (some countries already have authoritarian governments and would naturally gravitate towards more centralization of power through political means, and others do not have strong laws/institutions to prevent centralization of the economy through market forces), competition for power (such as diplomatic and military power) and prestige (both of which are desired by leaders and voters alike) on the world stage, and direct military conflicts.
All of these forces create pressure towards greater AGI-based centralization, while the only thing pushing against it appears to be political pressure in some countries against centralization of power. If those countries succeed in defending against centralization but fall significantly behind in economic growth as a result, they will end up not influencing the future of the universe much so we might as well ignore them and focus on the others.
This is longer, but maybe “coordination costs that are unrelated to value differences”?
If companies had fully aligned workers and managers, they could adopt what Robin Hanson calls the “divisions” model where each division works just like a separate company except that there is an overall CEO that “looks for rare chances to gain value by coordinating division activities” (such as, in my view, internally charge each other efficient prices instead of profit-maximizing prices), so you’d still gain efficiency as companies merge or get bigger through organic growth. In other words, coordination costs that are unrelated to value differences won’t stop a single AGI controlling all resources from being the most efficient way to organize an economy.
While searching for that post, I also came across Firm Inefficiency, which, like Moral Mazes (but much more concisely), lists many inefficiencies that seem all or mostly related to value differences.
I think it’s at least one of the main arguments that Eric Drexler makes, since he wrote this in his abstract:
(My argument says that a strongly self-modifying agent will improve faster than a self-improving ecosystem of CAIS with access to the same resources, because the former won’t suffer from principal-agent costs while researching how to self-improve.)
Yeah I’m not very familiar with this either, but my understanding is that such mergers are only illegal if the effect “may be substantially to lessen competition” or “tend to create a monopoly”, which technically (it seems to me) isn’t the case when existing monopolies in different industries merge.
Once you switch to the “divisions” model your divisions are no longer competing with other firms, and all the divisions live or die as a group. So you’re giving up the optimization that you could get via observing which companies succeed / fail at division-level tasks. I’m not sure how big this effect is, though I’d guess it’s small.
Yeah, I’m more convinced now that principal-agent issues are significantly larger than other issues.
Yeah, I agree it’s an argument against that argument from Eric. I forgot that Eric makes that point (mainly because I have never been very convinced by it).
My guess would be that the spirit of the law would apply, and that would be enough, but really I’d want to ask a social scientist or lawyer.
Why? Each division can still have separate profit-and-loss accounting, so you can decide to shut one down if it starts making losses and the benefits of having that division to the rest of the company don’t outweigh the losses. The latter may be somewhat tricky to judge, though. Perhaps that’s what you meant?
I should perhaps mention that I still have some uncertainty about this, mainly because Robin Hanson said “There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination.” But I haven’t been able to find any place where he wrote down what those other factors are, nor did he answer when I asked him about it.
That’s a good point. I was imagining that each division ends up becoming a monopoly in its particular area due to the benefits of within-firm coordination, which means that even if the division is inefficient there isn’t an alternative that the firm can go with. But that was an assumption, and I’m not sure it would actually hold.
I like the practice of posting the planned summary and opinion – not sure if you’ve been doing that for a while or just started, but I think it’s quite good as a practice.
I do it sometimes, especially when I expect it to be more controversial, and also when I’m not doing it super last minute an hour before the newsletter goes out. I’m a bit behind on everything but do want to move in this direction in the future.
...why do you expect those institutions to hold up in a world dominated by AGI or other powerful AI systems? (Maybe specifically, which institutions do you mean? The main options seem like ‘governments’ and ‘other companies’.)
The US government (I’m not sure about other governments) seems to lag something like 10 years behind technological developments. (This is a rough guess based on how long I recall it taking the government to take significant action around regulating new technologies – epistemic status: based on news articles that made it to my eyeballs, usually via Facebook.)
And that’s before they start trying to take significant actions, which usually still seem pretty confused (e.g. GDPR doesn’t really incentivize the things it needed to incentivize).
Assuming I’m roughly correct that there’s a lag, there’d be a several-year window where the institutions that normally regulate companies are going to be too confused and disoriented to do so. You might hope that those governments are also being empowered by advanced AI stuff, but I’m approximately as worried about that as I am about companies.
(I realize I didn’t get that specific about the details, which are complicated, but I was somewhat surprised by your entire final paragraph and I’m not sure where the disagreement lies)
Governments, and specifically antitrust law.
I think there are big differences between the current situation and previous technologies: a) it is higher-stakes and b) even industry seems to be somewhat pro-regulation.
I’m trying to cash this out into a more concrete failure story. Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented the government says “you’re too big, you need to be broken up” and the company says “no” and takes over the government?
Sort of, but worse – I’m imagining something more like “the government already has a lot of regulatory capture going on, so the system-as-is is already fairly broken. Even given slow-ish takeoff assumptions, it seems like within 2-3 years one or several companies will have gained unprecedented amounts of power. And by the time the government has even figured out an action to take, it will either have already been taken over, regulatory-captured in ways much deeper than previously, or rendered irrelevant.”
Okay, I see, that makes sense and seems plausible, though I’d bet against it happening. But you’ve convinced me that I should qualify that sentence more.
I suppose another way this could happen is that the company could set up a branch in a much poorer and more easily corrupted nation; since it’s not constrained by people, it could build up a very large amount of power in a place that’s beyond the reach of a superpower’s antitrust institutions.
You’d have to get the employees to move there, which seems like a dealbreaker currently given how hot of a commodity AI researchers are.
I suppose that’s true. Although assuming that the company has developed intent aligned AGI, I don’t see why the entire branch couldn’t be automated, with the exception of a couple of human figureheads. Even if the AGI isn’t good enough to do AI research, or the company doesn’t trust it to do that, there are other methods for the company to grow. For instance, it could set up fully automated mining operations and factories in the corrupted country.
Oh, right, I forgot we were considering the setting where we already have AGI systems that can be intent aligned. This seems like a plausible story, though it only implies that there is centralization within the corrupted nation.