This is a form of negative utilitarianism, and inherits the major problems with that theory (such as its endorsement of destroying the universe to stop all the frustrated preferences going on in it right now).
Well hold on. Is destroying the universe easier than just eliminating the frustration but leaving the universe intact? I mean, surely forcibly wireheading everyone is easier than destroying the entire damned universe ;-).
It might create some disutility of its own, but that disutility would be outweighed by the larger number of preference-satisfactions to be gained from doing so, just as the disutility of torturing someone for 50 years is outweighed by the utility of avoiding 3^^^3 dust-speck incidents (for utilitarian utility functions).
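(A rough way to put that aggregation claim in symbols; the notation is mine, not the commenter's. Write ε for the disutility of a single dust speck and D for the disutility of 50 years of torture. The straightforwardly additive comparison is then

\[ 3\uparrow\uparrow\uparrow 3 \cdot \epsilon \gg D \quad \text{for any fixed } \epsilon > 0, \]

since 3^^^3 dwarfs any plausible ratio D/ε.)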
True, but utility monsters and tile-the-universe-in-your-favorite-sapients also work for utilitarianism. Naive utilitarianism breaks down from the sheer fact that real populations are not Gaian hiveminds who experience each other’s joy and suffering as one.
Even if you believe in such a thing as emotional utility that matters somehow at all, you can still point out that the dust-speckers are suffering at the absolute minimum level they can even notice, and that surely they can freaking cope with it to keep some poor bastard from being tortured horrifically for 50 years straight.
(Sorry, I’m a total bullet-dodger on ethical matters.)
Destroying the universe could be easier, if (for example) building a UFAI turns out to be easier than eliminating the frustration.
Yes, but we’re talking about abstract ethical theories, so we’re already playing as the AI. An AI designed to minimize frustrated preferences will find it easier (that is, a better ratio of value to effort) to wirehead than to kill, unless the frustration-reduction of killing an individual is greater than the frustration-creation happening to all the individuals who are now mourning, scared, screaming in pain from shrapnel, etc.
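(A toy Python sketch of the comparison this comment describes; every number and function name here is invented purely for illustration, not taken from the thread. The rule: prefer wireheading unless killing an individual removes more frustration than it creates in everyone else, per unit of effort.)

```python
# Toy numbers only -- nothing here is a claim about real magnitudes.
def net_frustration_reduction_per_effort(removed, created, effort):
    """The 'ratio of value to effort' from the comment above:
    frustration removed, minus frustration created as a side effect,
    divided by the effort the action takes."""
    return (removed - created) / effort

# Wireheading one person: their frustration goes away, nobody else is harmed.
wirehead = net_frustration_reduction_per_effort(removed=10, created=0, effort=1)

# Killing one person: their frustration also goes away, but mourners, the
# frightened, and the injured pick up new frustrated preferences.
kill = net_frustration_reduction_per_effort(removed=10, created=50, effort=1)

print(wirehead, kill)  # 10.0 -40.0 -> wireheading wins unless 'created' is small
```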
Step 1: Wirehead all the people.
Step 2A: Continue caring about them.
Step 2B: Kill them.
How exactly could option 2A be easier than 2B? No one is mourning, because everyone alive is wireheaded. And surely killing someone is less work than keeping them alive.
Doesn’t matter. If humans can build an AI, an AI can build an AI as well.
Yes, but the point is not to speculate about AI, it’s to speculate about the particular ethical system in question, that being negative utilitarianism. You can assume that we’re modelling an agent who faithfully implements negative utilitarianism, not some random paper-clipper.
Yes, and my claim is that, given the amount of suffering in the world, negative utilitarianism says that building a paperclipper is a good thing to do (provided it’s sufficiently easy).
Ok, again, let’s assume we’re already “playing as the AI”. We are already possessed of superintelligence. Whatever we decide is negutilitarian good, we can feasibly do.
Given that, we can either wirehead everyone and eliminate their suffering forever, or rewrite ourselves as a paper-clipper and kill them.
Which one of these options do you think is negutilitarian!better?
If the first is easier (i.e. costs less utility to implement), or if they’re equally easy to implement, the first.
If the second is easier, it would depend on how much easier it was, and the answer could well be the second.
A superintelligence is still subject to tradeoffs.
But even if it turns out that wireheading is better on net than paperclipping, (a) that’s not an outcome I’m happy with, and (b) paperclipping is still better (according to negative utilitarianism) than the status quo. This is more than enough to reject negative utilitarianism.
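(To make the decision rule above concrete, here is a toy Python sketch; the option names, the scoring function, and all the numbers are mine and exist only to illustrate the structure of the tradeoff, not to settle it. Each option is scored by the frustration left in the world plus the frustration-cost of implementing it, and the lowest total "wins" under negative utilitarianism.)

```python
# Toy numbers only: the whole argument is about what these values actually are.
options = {
    "wirehead everyone":  {"residual_frustration": 0,   "implementation_cost": 30},
    "paperclip everyone": {"residual_frustration": 0,   "implementation_cost": 20},
    "status quo":         {"residual_frustration": 100, "implementation_cost": 0},
}

def negutil_badness(opt):
    # Negative utilitarianism only counts frustrated preferences, so the score
    # is residual frustration plus the frustration incurred along the way.
    return opt["residual_frustration"] + opt["implementation_cost"]

best = min(options, key=lambda name: negutil_badness(options[name]))
print(best)  # with these made-up numbers, paperclipping beats both alternatives
```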
Neither of us is happy with wireheading. Still, it’s better to be accurate about why we’re rejecting negutilitarianism.
The fact that it prefers paperclipping to the status quo is enough for me (and consistent with what I originally wrote).