Is it generally accepted that an aligned super-intelligence will prevent other super-intelligences from existing?
My assumption is that it would do this to prevent other people from making superintelligences that are unaligned. At least Eliezer thinks you need to do this (see bullet point 6 in this post), and it generally comes up in conversations people have about pivotal acts. Some people argue that if an alignment solution is found that is both effective and easy to implement, everyone building AGI will adopt it, so you won't need to prevent anyone from building unaligned AGI, but that seems unrealistic and risky to me.