And then it’s trivial to find a means to dispose of the threat: humans are fragile and stupid, and they have created plenty of ready means of mass destruction.
If by “a lot of ready means of mass destruction” you’re thinking of nukes, it doesn’t seem trivial to design a way to use nukes to destroy / neutralize all humans without jeopardizing the AGI’s own survival.
We don’t have a way of reliably modeling the results of very many simultaneous nuclear blasts, and it seems like the AGI probably wouldn’t have a way to reliably model this either unless it ran more empirical tests (which would be easy to notice).
It seems like an AGI wouldn’t execute a “kill all humans” plan unless it was confident that executing the plan would in expectation result in a higher chance of its own survival than not executing the plan. I don’t see how an AGI could become confident about high-variance “kill all humans” plans like using nukes without having much better predictive models than we do. (And it seems like more empirical data about what multiple simultaneous nuclear explosions do would be required to have better models for this case.)
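To make the decision rule explicit (a minimal formalization; the notation is mine, not anything from the original comment): the AGI would act only if

\[ \mathbb{E}[\text{survival} \mid \text{execute plan}] > \mathbb{E}[\text{survival} \mid \text{abstain}], \]

and the point about high variance is that large model uncertainty about many simultaneous nuclear blasts makes it hard for the AGI to establish this inequality with any confidence.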
Humans are trivial to kill: physically, chemically, biologically, or psychologically. And a combination of those would be even more effective in collapsing the human population. I will not go into the details here, to avoid arguments and negative attention. And if your argument is that humans are tough to kill, look at the historical data on population collapses, which happened without any adversarial pressure. Or with it, if you consider the indigenous population of the American continent.