AGI will drastically increase economies of scale
In Strategic implications of AIs’ ability to coordinate at low cost, I talked about the possibility that different AGIs can coordinate with each other much more easily than humans can, by doing something like merging their utility functions together. It now occurs to me that there is another way for AGIs to greatly reduce coordination costs in an economy: have each AGI, or copies of each AGI, profitably take over much larger chunks of the economy than any company currently owns. This can be done even with AGIs that don’t have explicit utility functions, such as copies of an AGI that are all corrigible/intent-aligned to a single person.
Today, many industries have large economies of scale, due to things like fixed costs, network effects, and reduced deadweight loss when monopolies in different industries merge (because, once merged, they can internally charge each other prices equal to marginal cost, avoiding double marginalization). But coordination costs among humans increase super-linearly with the number of people involved (see Moral Mazes and Short Termism for a related recent discussion), which creates diseconomies of scale that counterbalance the economies of scale, so companies tend to grow to a certain size and then stop.

An AGI-operated company, where for example all of the workers are AGIs intent-aligned to the CEO, would eliminate almost all internal coordination costs, i.e., all of the coordination costs caused by value differences: everything described in Moral Mazes, “market for lemons” problems and other lost opportunities for trade due to asymmetric information, principal-agent problems, monitoring/auditing costs, costly signaling, and suboptimal Nash equilibria in general. Such companies could therefore grow much bigger. In fact, purely from the perspective of maximizing the efficiency/output of an economy, I don’t see why it wouldn’t be best to have (copies of) one AGI control everything.
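As a rough sketch of this dynamic (my own illustration, not part of the original argument): suppose a firm’s gross output scales linearly with headcount n, while human coordination costs scale super-linearly, say k·n^α with α > 1. The functional forms, the exponent, and all the coefficients below are illustrative assumptions. The profit-maximizing firm size is then finite, but it explodes as the coordination-cost coefficient k falls toward zero:

```python
# Toy model (illustrative assumptions, not from the post): profit(n) = p*n - k*n**alpha.
# Gross output grows linearly in headcount n; coordination cost grows super-linearly
# (alpha > 1). Setting d(profit)/dn = p - alpha*k*n**(alpha-1) = 0 gives the
# profit-maximizing size: n* = (p / (alpha * k)) ** (1 / (alpha - 1)).

def optimal_size(p: float, k: float, alpha: float) -> float:
    """Headcount n* that maximizes p*n - k*n**alpha, assuming alpha > 1."""
    return (p / (alpha * k)) ** (1 / (alpha - 1))

# Human-run firm: a substantial coordination-cost coefficient k caps firm size.
print(f"{optimal_size(p=1.0, k=0.01, alpha=1.5):,.0f}")   # ~4,444 workers

# AGI-run firm: value differences (and the costs they cause) are mostly gone,
# so k is tiny and the optimal size dwarfs any human organization.
print(f"{optimal_size(p=1.0, k=1e-5, alpha=1.5):,.0f}")   # ~4.4 billion
```

The particular numbers don’t matter; under any super-linear cost curve, driving the coordination-cost coefficient toward zero removes the diseconomy of scale that currently caps firm size.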
If I’m right about this, it seems quite plausible that some countries will foresee it too and, as soon as it becomes feasible, nationalize all of their productive resources and place them under the control of one AGI (perhaps intent-aligned to a supreme leader or to a small, highly coordinated group of humans). Doing so would allow them to out-compete any countries that are unwilling to follow suit (and lack some other competitive advantage to compensate). This seems to be an important consideration that is missing from many people’s pictures of what will happen after (e.g., intent-aligned) AGI is developed in a slow-takeoff scenario.