The only solution that I can see is morally abhorrent, and I’m trying to open a discussion looking for a better one.
It’s already been linked to a couple times under this post, but: have you read http://lesswrong.com/lw/v1/ethical_injunctions/ and the posts it links to?
In any case, non-abhorrent solutions include "work on FAI" and "talk to AGI researchers, some of whom will listen" (especially if you don't open with how we're all going to die unless they repent, even though that's the natural first thought).