I have the following idea for how to solve this conundrum: a global control system capable of finding all dangerous agents could be built from narrow AI, not superintelligent agential AI. In practice it might look like ubiquitous surveillance with face and action recognition.
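To make the "narrow, not agential" point concrete: the perception layer of such a system is ordinary computer vision. Here is a minimal sketch of face detection using OpenCV's stock Haar-cascade model; everything in it is standard OpenCV usage chosen for illustration, not anything specified in this post, and a real deployment would obviously run much stronger recognition models over many feeds.

```python
import cv2

# OpenCV ships a pretrained Haar-cascade face detector; we use it here
# only as a stand-in for whatever recognition model a real system would run.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam; a surveillance net would ingest many streams
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces and draw a box around each one.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The point of the sketch is that none of this requires agency or general intelligence: it is a fixed pipeline that flags patterns, which is exactly the kind of component a Narrow AI Nanny would be assembled from.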
The other part of this Narrow AI Nanny is its ability to give its owner a decisive strategic advantage and help them quickly take over the world (for example, by leveraging nuclear strategy and military intelligence), which is needed to prevent dangerous agents from appearing in other countries.
Yes, this looks like totalitarianism, especially its Chinese version. But extinction is worse than totalitarianism. I lived most of my life under totalitarian regimes, and I hate to say it, but 90 percent of the time life under them is normal. So totalitarianism is survivable, and calling it an x-risk is an overestimation.
I wrote more about the idea here: https://www.lesswrong.com/posts/7ysKDyQDPK3dDAbkT/narrow-ai-nanny-reaching-strategic-advantage-via-narrow-ai