Ok, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps against them is relevant. To take the easiest example: would postponing the “take over the universe” step for 300 years make a big difference in the expected amount of cosmic commons burned before takeover?
Depends. Would this allow someone else to move outside its defined sphere of influence and build an AI that doesn’t wait?
If the AI isn’t taking over the universe, that might leave the option open that something else will. If it doesn’t control humanity, chances are that will be another human-originated AI. If it does control humanity, why are we waiting?