Yes, but the point is not to speculate about AI, it’s to speculate about the particular ethical system in question, that being negative utilitarianism. You can assume that we’re modelling an agent who faithfully implements negative utilitarianism, not some random paper-clipper.
Yes, and my claim is that, given the amount of suffering in the world, negative utilitarianism says that building a paperclipper is a good thing to do (provided it’s sufficiently easy).
Ok, again, let’s assume we’re already “playing as the AI”. We are already possessed of superintelligence. Whatever we decide is negutilitarian-good, we can feasibly do.
Given that, we can either wirehead everyone and eliminate their suffering forever, or rewrite ourselves as a paper-clipper and kill them.
Which one of these options do you think is negutilitarian-better?
If the first is easier (i.e. costs less utility to implement), or if they’re equally easy to implement, the first.
If the second is easier, it would depend on how much easier it was, and the answer could well be the second.
A superintelligence is still subject to tradeoffs.
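To make that comparison concrete, here is a toy sketch in Python of how a negutilitarian agent would weigh the two options; every number and name in it is a made-up placeholder, purely to illustrate how "how much easier" can swing the answer:

```python
# Toy model of the negutilitarian comparison above.
# Every number here is a made-up placeholder, purely illustrative.

def total_suffering(implementation_cost, suffering_caused, suffering_remaining):
    """Negative utilitarianism scores an option only by the suffering it involves:
    the suffering spent implementing it, the suffering it directly causes, and
    whatever suffering is left over afterwards."""
    return implementation_cost + suffering_caused + suffering_remaining

# Option 1: wirehead everyone -- costly to implement, but no suffering afterwards.
wirehead = total_suffering(implementation_cost=100, suffering_caused=0, suffering_remaining=0)

# Option 2: become a paperclipper and kill everyone -- the deaths cause suffering,
# but nothing is left to suffer afterwards either.
paperclip = total_suffering(implementation_cost=10, suffering_caused=50, suffering_remaining=0)

# The agent picks whichever option involves less total suffering; with these
# made-up numbers the paperclipper wins -- the "depends on how much easier" case.
options = {"wirehead": wirehead, "paperclip": paperclip}
print(min(options, key=options.get))  # -> paperclip
```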
But even if it turns out that wireheading is better on net than paperclipping, (a) that’s not an outcome I’m happy with, and (b) paperclipping is still better (according to negative utilitarianism) than the status quo. This is more than enough to reject negative utilitarianism.
Neither of us is happy with wireheading. Still, it’s better to be accurate about why we’re rejecting negutilitarianism.
The fact that it prefers paperclipping to the status quo is enough for me (and consistent with what I originally wrote).