Roko’s argument implies the AI will torture; how likely you think the argument is to be correct is a separate matter. Roko’s framing was “if you think there is a 1% chance that my argument is correct,” not “if my argument is correct, there is a 1% chance the AI will torture.” The 1% is your credence in the argument itself, not a conditional probability of torture given the argument.
That really isn’t the crux, though. The point is that if an AI has some probability of torturing you, you shouldn’t want it to be built. You can call that self-interest, but then you’re admitting you didn’t really want utilitarianism in the first place. Which is the point.
Anyway, this is just steel-manning Roko’s argument. I think the real issue is with acausal trade, not utilitarianism, and that seems to be the objection most people have to it.