AGI will drastically increase economies of scale
In Strategic implications of AIs’ ability to coordinate at low cost, I talked about the possibility that different AGIs can coordinate with each other much more easily than humans can, by doing something like merging their utility functions together. It now occurs to me that another way for AGIs to greatly reduce coordination costs in an economy is for a single AGI (or copies of one AGI) to profitably take over much larger chunks of the economy than any company currently owns, and this can be done even with AGIs that don’t have explicit utility functions, such as copies of an AGI that are all corrigible/intent-aligned to a single person.
Today, many industries have large economies of scale, due to things like fixed costs, network effects, and reduced deadweight loss when monopolies in different industries merge (because they can internally charge each other prices equal to marginal cost). But coordination costs among humans increase super-linearly with the number of people involved (see Moral Mazes and Short Termism for a related recent discussion), creating diseconomies of scale that counterbalance the economies of scale, so companies tend to grow to a certain size and then stop. An AGI-operated company, however, where for example all the workers are AGIs intent-aligned to the CEO, would eliminate almost all of the internal coordination costs caused by value differences, such as the dynamics described in Moral Mazes, “market for lemons” problems and lost opportunities for trade due to asymmetric information, principal-agent problems, monitoring/auditing costs, costly signaling, and suboptimal Nash equilibria in general, allowing such companies to grow much bigger. In fact, purely from the perspective of maximizing the efficiency/output of an economy, I don’t see why it wouldn’t be best to have (copies of) one AGI control everything.
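To make the intuition concrete, here is a minimal toy model. The functional forms and parameter values are my own illustrative assumptions, not something implied by the argument above: output scales as A·n^alpha with alpha > 1 (economies of scale), while human coordination costs scale as c·n^beta with beta > alpha (super-linear diseconomies), so the profit-maximizing firm size is finite but blows up as c falls toward zero.

```python
# Toy model: scale benefits (alpha > 1) vs. super-linear coordination costs (beta > alpha).
# All parameter values are arbitrary illustrations.

def profit(n, A=1.0, alpha=1.1, c=0.01, beta=1.5):
    """Net output of a firm with n workers: scale benefits minus coordination costs."""
    return A * n**alpha - c * n**beta

def optimal_size(c, A=1.0, alpha=1.1, beta=1.5):
    """Profit-maximizing size, from d(profit)/dn = 0:
    n* = (A*alpha / (c*beta)) ** (1 / (beta - alpha))."""
    return (A * alpha / (c * beta)) ** (1.0 / (beta - alpha))

# As the coordination-cost coefficient falls (e.g., AGI workers intent-aligned
# to one principal), the profit-maximizing firm size explodes.
for c in (1e-2, 1e-4, 1e-6):
    print(f"c = {c:g}  ->  optimal firm size ~ {optimal_size(c):,.0f}")
```

Under these (arbitrary) parameters, cutting c from 10^-2 to 10^-6 raises the optimal firm size by roughly ten orders of magnitude, which is the qualitative point: remove the frictions caused by value differences and the diseconomy of scale that normally caps firm size largely disappears.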
If I’m right about this, it seems quite plausible that some countries will foresee it too, and as soon as it can feasibly be done, nationalize all of their productive resources and place them under the control of one AGI (perhaps intent-aligned to a supreme leader or to a small, highly coordinated group of humans), which would allow them to out-compete any other countries that are not willing to do this (and don’t have some other competitive advantage to compensate for this disadvantage). This seems to be an important consideration that is missing from many people’s pictures of what will happen after (e.g., intent-aligned) AGI is developed in a slow-takeoff scenario.