Ok, whatever, let it be rogue asteroids—why isn't deflecting them fanatical? How would the kind of uncertainty that still permits using that much power help with AI? It could just as well deflect Earth away from its cozy paperclip factory while observing its development. And from an anti-natalist viewpoint it would be a disaster not to exterminate humanity. The whole problem is that this kind of uncertainty in humans behaves like any other human preference, and just calling it "uncertainty" or "non-fanatical maximization" doesn't make it any more universal.