If the AI has a naive “save humans” utility function, I don’t see how this gives it an advantage.
I’ve met people who can lucidly argue that nuking a particular city or small region would produce many benefits for humanity as a whole, including reduced risk of politically-motivated extinction events down the line.
Also… you’re going to an awful lot of trouble here to calculate a firing solution for a beam of light to hit a non-accelerating object in space. Realistically, if we know where the comet is well enough to realize it’s headed for Earth, aiming a laser at it with non-sapient hardware is almost trivial. Why not use an NP-complete problem as the example instead?
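To make the “almost trivial” point concrete, here’s a minimal sketch of the pointing math under the comment’s own premise (a non-accelerating target with known position and velocity relative to the laser). The function name and the sample numbers are mine, purely for illustration; the whole calculation is one quadratic for the light’s flight time.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def laser_pointing(p, v):
    """Direction to aim a laser at a non-accelerating target.

    p : target position relative to the laser (m)
    v : target velocity (m/s), assumed constant per the comment's premise

    Solves |p + v*t| = c*t for the light's flight time t, i.e. the quadratic
    (v.v - c^2) t^2 + 2 (p.v) t + p.p = 0, then returns the unit vector
    toward where the target will be when the beam arrives.
    """
    p = np.asarray(p, dtype=float)
    v = np.asarray(v, dtype=float)
    a = v @ v - C**2
    b = 2.0 * (p @ v)
    c = p @ p
    # a < 0 for any sub-light target, so this branch is the positive root.
    t = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    aim_point = p + v * t
    return aim_point / np.linalg.norm(aim_point), t

# Illustrative numbers only: a comet roughly 0.5 AU away, closing at ~40 km/s.
direction, flight_time = laser_pointing(
    p=[7.5e10, 1.0e9, 0.0],   # m
    v=[-4.0e4, 2.0e3, 0.0],   # m/s
)
print(direction, flight_time)  # unit aim vector, light flight time (~250 s)
```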
Why would an intelligent agent do better at an NP-complete problem than an unintelligent algorithm?
The laser problem is an illustration, a proof of concept for a developing idea. If it’s judged to work, I’ll see how general we can make it.