1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.
2) My important point is rather that your ‘30 year’ number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to smaller things.
3) Agree that income doesn’t equal taking over, though in the modern world, where so much is done via purchasing, it is closer. It’s not clear to me that AI companies do better as a fraction of the world in terms of military power than they do in terms of spending.
The concern with AI is that an initially tiny entity might take over the world.
This is a concern with AI, but why is it *the* concern? If, e.g., the United States could take over the world because they had some AI-enabled growth, why would that not be a big deal? I’m imagining you saying, “It’s not unique to AI,” but why does it need to be unique? If AI is the root cause of something on the order of Britain colonizing the world in the 19th century, this still seems like it could be concerning if there weren’t any good governing principles established beforehand.
I like your point #2; I should think more about how the 30 year number changes with size. Obviously it’s smaller for bigger entities and bigger for smaller entities, but how much? E.g. if we teleported 2020 Estonia back into 1920, would it be able to take over the world? Probably. What about 1970 though? Less clear.
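One rough way to see how the number scales with starting size is a toy model, which is my own assumption here rather than anything from the original argument: suppose the entity starts with some fraction of the world’s total of whatever the relevant quantity is, and both it and the rest of the world grow exponentially, with the entity growing faster by a fixed gap. Then the catch-up time depends on the starting fraction only through its logarithm. The function name and the numbers below are made up purely for illustration.

```python
import math

def years_to_overtake(start_fraction, entity_growth, world_growth):
    """Years until the entity's total passes the rest of the world's,
    assuming both grow exponentially at constant annual rates."""
    if entity_growth <= world_growth:
        return float("inf")  # never catches up under this model
    # Solve start * e^(g*t) = (1 - start) * e^(g_w*t) for t.
    return math.log((1 - start_fraction) / start_fraction) / (entity_growth - world_growth)

# Made-up illustrative numbers, not estimates:
print(years_to_overtake(0.10, 0.05, 0.02))    # entity starts at 10% of the world: ~73 years
print(years_to_overtake(0.0001, 0.05, 0.02))  # entity starts at 0.01% of the world: ~307 years
```

Under this model the catch-up time only grows logarithmically as the starting size shrinks, which is one way the ‘30 year’ figure could stretch or compress with the size of the entity.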
Military power isn’t what I’m getting at either, at least not if measured in the way that would result in AI companies having little of it. Cortez had, maybe, 1⁄10,000th of the military power of Mexico when he got started. At least if you measure in ways like “What would happen if X fought Y.” Probably 1⁄10,000th of Mexico’s military could have defeated Cortez’ initial band.
If we try to model Cortez’ takeover as him having more of some metric than all of Mexico had, then presumably Spain had several orders of magnitude more of that metric than Cortez did, and Western Europe as a whole had at least an order of magnitude more than that. So Western Europe had *many* orders of magnitude more of this stuff, whatever it is, than Mexico, even though Mexico had a similar population and GDP. So they must have been growing much faster than Mexico for quite some time to build up such a lead—and this was before the industrial revolution! More generally, this metric that is used for predicting takeovers seems to be the sort of thing that can grow and/or shrink orders of magnitude very quickly, as illustrated by the various cases throughout history of small groups from backwater regions taking over rich empires.
(Warning: I’m pulling these claims out of my ass, I’m not a historian, I might be totally wrong. I should look up these numbers.)
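For a rough sense of what “quite some time” would mean here: under the same toy exponential model as above (again my assumption; the growth-rate gaps below are made up, not historical estimates), building up a lead of k orders of magnitude at a fixed annual growth-rate gap g takes about k·ln(10)/g years.

```python
import math

def years_to_build_lead(orders_of_magnitude, growth_rate_gap):
    """Years of compounding needed to end up 10**orders_of_magnitude
    times ahead, given a fixed annual growth-rate gap."""
    return orders_of_magnitude * math.log(10) / growth_rate_gap

# Made-up illustrative gaps, not historical estimates:
print(years_to_build_lead(4, 0.01))   # 1 percentage point faster per year:   ~921 years
print(years_to_build_lead(4, 0.002))  # 0.2 percentage points faster per year: ~4605 years
```

If the lead was built by ordinary compounding at pre-industrial growth-rate differences, it took many centuries; the alternative reading is the one above, that whatever this metric is, it can jump by orders of magnitude rather than just compounding slowly.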