In short, surveillance costs (e.g., “make sure they aren’t plotting against you, trying to detonate a nuke, or just starting a forest fire out of spite”) might be higher than the costs of simply killing the vast majority of people. Of course, there is some question as to whether it might consider it worthwhile to study some 0.00001% of humans kept in cages, but even that might carry significantly higher costs than simply learning how to recreate humans from scratch, as it did with much of its other learning about the world.
But I’ll grant that I don’t know how an AGI would think or act, and I can’t definitively rule out the possibility, at least within the first 100 years or so.