“Human-level AI” is a confusing term for at least two reasons. First, there is a gigantic performance range among humans, even if you consider only the top 1% of humanity. Second, it’s not clear that human-level general learning systems won’t be intrinsically superhuman, because of things like scalable substrate and extraordinarily high-bandwidth access to lossless information (compared to eyes, ears, and mouths). That these apparent issues are not enumerated more often in discussions of early AGI is itself confusing.
As far as I’m aware, all serious attempts to take over the world have been by brute force. Historically, latencies in messaging, travel, logistics, etc. made this very difficult within one lifetime, even when potentially world-owning force was available or could be mustered. So the window for a single (human-level) entity to take over the world within its lifetime has probably only opened recently, and the number of external circumstances and internal abilities that must line up to give a predictably large chance of success is probably high. Accordingly, even a situation like Hitler in control of a very powerful Reich, which nominally might appear to offer a shot at world ownership, was still too fraught with an unoptimized distribution of enabling factors to have any realistic chance of success.

There is also a grey area over whether an individual or some collective is responsible for the attempt. One might argue that trends ongoing for at least a few decades suggest the USA is in a great position to take over the world if China (or someone else) doesn’t “break out” first. But given the way the USA is structured, it may be difficult for any “human-level” individual entity to take credit for, or keep a firm grasp on, the fruits of such a conquest.