How different is a population of human-level AIs with different goals from a population of humans with different goals?
Haven’t we seen a preview of this when civilizations collide and/or nation-states compete?
(1) Like Holden and Charlie said, they won’t be human-level for long.
(2) Yes, we’ve seen this many times throughout history. The conquistadors, for example. But in past human-on-human conflicts, the losing side has often survived and been integrated somewhat into the new regime, albeit in positions of servitude (e.g. slavery, or living on handouts from sympathetic invading priests), because the winners judged it was in their economic interest to keep the losers around instead of genociding them all. In an AI-on-human conflict, if humans lose, there would shortly be zero economic benefit to having humans around, and the difference in values/goals/etc. between AIs and humans will probably be greater than the differences between human groups, so there’s less reason to expect sympathy/handouts.
One point that Holden elided (so maybe he wouldn’t want me to argue this way) is that a population of human-level AIs is not going to stay human-level for long.
Humans aren’t at some special gateway in the space of minds—we’re just the first ape species that got smart enough to discover writing. I’m not optimal, and I have a missing appendix to prove it. The point is, whatever process smartened these AIs up to the nebulous human level isn’t going to suddenly hit a brick wall, because there is no wall here.
But as I said, if Holden were writing this reply he’d probably try to argue without appeal to this fact. He’d probably say that even if we just treat AIs as “like humans, but with different reproductive cycles and conditions for life,” having a couple million of them trying to kill all humans is still dire news. Maybe he’d add that even North Korea’s dictators show some restraint born of self-preservation, whereas AIs might be happy to make the Earth uninhabitable for human life because they have different conditions for survival.
(Chiming in late, sorry!) My main answer is that it’s quite analogous to such a collision, and such collisions are often disastrous for the losing side. The difference here would simply be that AIs could end up with enough numbers/resources to overpower all of humanity combined (analogously to how one population sometimes overpowers another, but with higher stakes).