We could increase the chances of this by making a commitment to run, in testing simulations, many copies of different possible Rogue AIs after we create a Friendly AI. This idea is due to Rolf Nelson.
Moreover, since some Rogue AIs will try to emulate a Friendly AI, they will take this commitment for granted and simulate other possible Rogue AIs in nested simulations. The commitment thus becomes a self-fulfilling prophecy.