Having a utility function that allows an incomprehensibly greater total of suffering is a failure of epistemic, not instrumental, rationality. By choosing the dust specks, you implicitly assert that more suffering than has ever been known and ever will be known on Earth, times a hundred million billion trillion, is preferable to the suffering of a single torture victim.
This is probably the most patronizing thing I’ll ever say on this website, but:
Think about that for a second.
I’m only pointing this out because no one else mentioned that instrumental rationality is independent of what your goals are, so it says nothing about whether your goals are simply incorrect. (You don’t really want the Dust Holocaust, do you? To allow that much suffering in exchange for, comparatively, nothing?)
I haven’t read ALL the discussion about this issue, but that’s because it’s so damned simple if you seek to feel fully the implications of that number, that number that transcends any attempt to even describe it.
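For a rough sense of scale, in Knuth’s up-arrow notation (where a double arrow is an iterated exponent and a triple arrow iterates the double arrow):

$$3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987$$

that is, a power tower of 3s standing 7,625,597,484,987 levels tall, which already leaves any physically countable quantity far behind.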
Edited to add: here Pfft said to just take goals and make them axiomatic. That is literally impossible. You have to have a reason for every one of your goals, and all the better to have motivations based in fact, not fancy.
That leads to an infinite regress. What is a reason for a goal, but another goal?
I think that the Torture versus Dust Specks “paradox” was invented to show how utilitarianism (or whatever we’re calling it) can lead to conclusions that are preposterous on their face whenever the utility numbers get big enough. And I think that the intent was for everybody to accept this, and shut up and calculate.
However, for me, and I suspect some others, Torture versus Dust Specks and also Pascal’s Mugging have implied something rather different: that utilitarianism (or whatever we’re calling it) doesn’t work correctly when the numbers get too big.
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions that should help guide our decisions.
Let’s consider a Modified Torture versus Specks scenario: You are given the same choice as in the canonical problem, except you are also given the opportunity to collect polling data from every single one of the 3^^^3 individuals before you make your decision. You formulate the following queries:
“Would you rather experience the mild distraction of a dust speck in your eye, or allow someone else to be tortured for fifty years?”
“Would you rather be tortured for fifty years, or have someone else experience the mild discomfort of a dust speck in their eye?”
You do not mention, in either query, that you are being faced with the Torture versus Specks dilemma. You are only allowing the 3^^^3 to consider themselves and one hypothetical other.
You get the polling results back instantly. (Let’s make things simple and assume we live in a universe without clinical psychopathy.) The vast majority of respondents have chosen the “obviously correct” option.
Now you have to make your decision knowing that the entire universe totally wouldn’t mind having dust specks in exchange for preventing suffering for one other person. If that doesn’t change your decision … something is wrong. I’m not saying something is wrong with the decision so much as something is wrong with your decision theory.
I don’t think this works. Change it to:
“Would you rather be tortured for a week, or have someone else be tortured for 100 years?”
“Would you rather be tortured for 100 years, or have someone else be tortured for a week?”
The popular opinion would most likely be one week in both cases, which by this logic would lead to 3^^^3 people being tortured for a week. Utilitarianism definitely does not lead to this conclusion (the total disutility of 3^^^3 week-long tortures dwarfs that of a single 100-year torture), so the query is not equivalent to the original question.
But they’re not taking a dust-speck to prevent torture—they’re taking a dust-speck to prevent torture and cause the dust-speck holocaust. If you drop relevant information, of course you get different answers; I see no reason your representation here is more essentially accurate, and some reason it might be less.
Sure, and that is intentional. You wouldn’t bother polling the universe to determine their answer to the same paradox you’re solving.
You can look at it this way. Each person who responds to your poll is basically telling you: “Do not factor me, personally, into your utility calculation.” It is equivalent to opting out of the equation. “Don’t you dare torture someone on my behalf!” The “dust-speck holocaust” then disappears!
Imagine this: You send everyone a little message that says, “Warning! You are about to get an annoying dust speck in your eye. But, partially due to this sacrifice of your comfort, someone else will be spared horrible torture.” Would/should they care that the degree to which they contributed to saving someone from torture is infinitesimal?
Let’s go on to pretend that we asked Omega to calculate exactly how many dust-specked humans are equivalent to one person being tortured for fifty years. Let’s pretend this number comes out to 1x10^14 people. It turns out to be much smaller than 3^^^3. Omega gives us all this information and then tells us he’s only going to give dust specks to 1x10^14 minus one people. We breathe a huge sigh of relief—you don’t have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he’s changing the deal—he’s going to be putting a dust speck in YOUR eye as well.
Deciding, at this point, that you now have to torture somebody is equivalent to denying that you have the option of saying, “I can shrug off this dust speck if the alternative is torturing somebody.” Bear in mind that you would be choosing to put dust specks in exactly as many eyes as before, plus only your own.
My example above is merely extrapolating this case to the case where each individual can decide to opt out.
But that’s not what a vote that way means; consider polling 100 individuals who are so noble as to pick 9 hours of torture for themselves over someone else getting 10 hours. How many of them would pick torturing 99 other conditionally willing people over torturing one unwilling person? It is simply not the same question.
The correct response when Omega changes the deal is “Oh, come on! You’re making me decide between two situations that are literally within a dust speck’s worth of each other. Why bother me with such trivial questions?” Because that’s what it is. You’re not choosing between “dust speck in my eye” and “terrible thing happens”. You’re choosing between “terrible thing happens” and “infinitesimally less terrible thing happens, plus I have a dust speck in my eye.”
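To put made-up numbers on that, taking Omega’s stipulated equivalence at face value and calling one speck one “disutil”, here is a minimal sketch of the arithmetic (hypothetical values throughout):

```python
# Toy arithmetic for Omega's revised deal. Every number is hypothetical: we simply
# take Omega's stipulation that 10^14 specks are equivalent to one fifty-year torture,
# and treat one speck as one unit of disutility.
SPECK = 1
TORTURE = 10**14

original_specks = SPECK * (10**14 - 1)     # first offer: specks for 10^14 - 1 people
revised_specks = original_specks + SPECK   # changed deal: one more speck, in your eye

print(original_specks < TORTURE)   # True: specks win, by exactly one speck's worth
print(revised_specks < TORTURE)    # False: the two options are now dead even
print(TORTURE - revised_specks)    # 0: the choices differ by less than a single speck
```

Whichever way you lean, the revised deal moves the totals by at most one speck’s worth, which is exactly why the question is trivial.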
The first paragraph of this comment is a nitpick, but I felt compelled to make it: there is no way that 10^14 dust specks is anywhere near enough to equal one torture victim. Maybe if you multiplied it by a googolplex, then by the number of atoms in the universe, you’d be within a few orders of magnitude.
And now for the meaty response.
You’re making the whole case extremely arbitrary and ignoring utility metrics, which I will now attempt to demonstrate.
Eliezer chose the number 3^^^3 so that no calculation of the disutility of the torture could ever match it, even if you have deontological qualms about torture (which most humans do). It simply doesn’t compare. Utilitarianism in the real world doesn’t work on fringe cases because utility can’t actually be measured. But if you could measure it, then you’d always pick the slightly higher value, every single time. In your example,
We breathe a huge sigh of relief—you don’t have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he’s changing the deal—he’s going to be putting a dust speck in YOUR eye as well.
you ignore the part of my utility function that includes selflessness. Sacrificing something that means little to me to spare someone else intense suffering yields positive utility for me, and, I’m assuming, for other people too. (This, interestingly, also invalidates the example you gave earlier where you polled the 3^^^3 people asking what they wanted—you ignored altruism in the calculation.)
Your problems with the Torture vs. Dust Specks dilemma all boil down to “Here’s how the decision changes if I change the parameters of the problem!” (and that doesn’t even work in most of your examples).
Here’s the real problem underlying the equation, one that is invulnerable to nitpicks:
Omega comes to you and says, “I will create either 3^^^3 units of disutility, or disutility equal to or less than the destruction of a single galaxy full of sentient life. Which do you choose?”
As has been said before, I think the answer is obvious.
I’m not entirely convinced by the rest of your argument, but
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions that should help guide our decisions.
Is, far and away, the most intelligent thing I have ever seen anyone write on this damn paradox.
Come on, people. The fact that naive preference utilitarianism gives us torture rather than dust specks is not some result we have to live with; it’s an indication that the decision theory is horribly, horribly wrong.
It is beyond me how people can look at dust specks and torture and draw the conclusion they do. In my mind, the most obvious, immediate objection is that utility does not aggregate additively across people in any reasonable ethical system. This is true no matter how big the numbers are. Instead it aggregates by minimum, or maybe multiplicatively (especially if we normalize everyone’s utility function to [0,1]).
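Here is a toy sketch, in Python, of how those aggregation rules come apart. The per-person utilities are completely made up (1.0 for an untouched life, 0.999999 for a life with one dust speck, 0.01 for a life with fifty years of torture), and N merely stands in for 3^^^3:

```python
# Hypothetical, normalized per-person utilities in [0, 1]; N stands in for 3^^^3,
# which no computer can represent, but the direction of the comparison is the same.
N = 10**9
SPECKED, UNTOUCHED, TORTURED = 0.999999, 1.0, 0.01

# World A: N people each get a dust speck, one person is left alone.
# World B: N people are left alone, one person is tortured for fifty years.

# Additive aggregation: compare total utility across everyone.
total_a = N * SPECKED + UNTOUCHED
total_b = N * UNTOUCHED + TORTURED
print(total_a > total_b)   # False: once N is large enough, the sum favors torture

# Aggregation by minimum: compare how the worst-off person fares in each world.
print(min(SPECKED, UNTOUCHED) > min(UNTOUCHED, TORTURED))   # True, for any N at all
```

(As an aside, the multiplicative rule is just addition of log-utilities, so for any fixed per-speck loss it, too, eventually flips to torture as N grows; of the three, only the minimum rule is insensitive to N.)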
Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It’s the epitome of Spock “rationality”.
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions that should help guide our decisions.
I would say, instead, that it gives a valid total-suffering value but that said value is not necessarily what is important. It is not how I extrapolate my intuitive aversion to suffering, for example.
Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It’s the epitome of Spock “rationality”.
I would say the same but substitute ‘torture’ for ‘reprehensible’. Using math in that way is essentially begging the question—the important decision is in which math to choose as a guess at our utility function, after all. But at the same time I don’t consider choosing torture to be reprehensible, because the fact that there are 3^^^3 dust specks really does matter.
“Would you rather experience the mild distraction of a dust speck in your eye, or allow someone else to be tortured for fifty years?”
“Would you rather be tortured for fifty years, or have someone else experience the mild discomfort of a dust speck in their eye?”
Asking these questions of (let’s say) humans will cause them to believe that only one person is getting a dust speck in the eye. Of course they’re going to come up with the wrong answer if they have incomplete information.
Now you have to make your decision knowing that the entire universe totally wouldn’t mind having dust specks in exchange for preventing suffering for one other person. If that doesn’t change your decision … something is wrong.
There are two problems with this. The first is that if you take a number of people as big as 3^^^3 and ask them all this question, an incomprehensibly huge number will prefer to torture the other guy. These people will be insane, demented, cruel, or dreaming or whatever, but according to your ethics they must be taken into account. (And according to mine, as well, actually). The number of people saying to torture the guy will be greater than the number of Planck lengths in the observable universe. That alone is enough disutility to say “Torture away!”
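Even on an absurdly conservative, made-up estimate this holds up: suppose only one respondent in 10^1000 answers cruelly. That still leaves

$$\frac{3\uparrow\uparrow\uparrow 3}{10^{1000}} \;\gg\; 5\times 10^{61} \approx \text{the number of Planck lengths spanning the observable universe}$$

torture votes, so the disutility of the cruel answers alone swamps everything else in sight.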
The other problem is your assertion that “something is wrong” if a wrong question, asked with incomplete information 3^^^3 times, does not change my decision. What is wrong? I can tell you that your intuition that “something must be wrong” is just incorrect. Nothing is wrong with the decision. (And this paragraph is for the LCPW, the least convenient possible world, where everyone answered selflessly to the question, which is of course not even remotely plausible.)
Excellent points. I now want to test your beliefs for consistency. Prepare yourself to hear a sick and twisted problem.
Omega has given you the choice to allow or disallow 10 rapists to rape someone. Why 10 rapists? Omega knows the absolute utility across all humans, and unfortunately, as terrible as it sounds, the suffering of 10 rapists not being able to rape is greater than what the victim would feel. What do you do? 10 is far less than 3^^^3 suffering rapists, so lucky you: Omega need not burden you with the suffering of 3^^^3 if you choose to make the rapists suffer. It is important that you not finagle your way out of the question. Please do not say that not being able to rape is not torture; Omega has already stated that these rapists do indeed suffer. It matters not whether you yourself would suffer from such a thing.
Disclaimer: I am searching for the truth through rationality. I do not care whether the answer is torture, specks, or rape, only that it is the truth. If the rational answer is rape, I can do nothing but accept that, for I am only in search of truth, not a truth that fits me.
There are implications to choosing rape as the right answer. It means that in a rational society we must allow bad things to happen if those bad things allow for less total suffering. We have to be consistent. Omega has given you a number of rapists far, far less than 3^^^3; surely you must allow the rape to occur.
Literally: Grognor walks into a room with 10 rapists and a victim. The rapists tell him to “go away, and don’t call the cops.” Omega appears and says: you may stop this if you want to, but I am all-knowing, and I know that the suffering the rapists would experience from being deprived of raping is indeed greater than the suffering of the victim. What does Grognor do?
Excellent points. I now want to test your beliefs for consistency. Prepare yourself to hear a sick and twisted problem.
The problem with your problem is that it is wrong. You have Omega asserting something we have good reason to disbelieve. You might as well have Omega come in and announce that there is an entity somewhere who will suffer dreadfully if we don’t start eating babies.
All you’re saying is “suppose rape were actually good”? Well, suppose away. So what?
Do you see the difference between your Omega and the one who poses Newcomb’s problem?
I sincerely appreciate your reply, Richard. Why do we accept Omega in Eliezer’s thought experiment and not mine? In the original, some people claim to obviously pick torture, yet they are unwilling to pick rape. Why? Well, like you said, you refuse to believe that rapists suffer. That is fair. But if that is fair, then Bob might refuse to believe that people with specks in their eyes suffer as well...
You cannot assign rules for one and not the other.
All you’re saying is “suppose rape were actually good”? Well, suppose away. So what?
Not true. I am saying that some people get utility from evil. Not me, not you, but why am I not allowed to use that as an example?
Bottom line: I personally am unresolved, and I will remain rationally unresolved across all examples. I know what I would do. I would pick dust in the 3^^^3 case, and follow up with 3^^^3 deprived rapists. But for strong “torturers” such as Grognor, depriving the rapists will be inconsistent with his beliefs.
Well, like you said, you refuse to believe that rapists suffer.
I also “refuse” to believe that the Earth is flat—or to put it more accurately, I assert that it is false.
That is fair. But if that is fair, then Bob might refuse to believe that people with specks in their eyes suffer as well...
The difference is that Bob would be wrong.
Not me, not you, but why am I not allowed to use that as an example?
Making random shit up and saying “what if this?”, “what if that?” doesn’t make for a useful discussion.
Then again, I am not a utilitarian, so I have no problem with saying that the more someone wants to do an evil thing, the more they should be prevented from doing it.
There are two major problems with your proposition.
One is that Omega appears to be lying in this problem, very simply. In the universe where he isn’t lying, though...
I’m partly what you’d call a “negative utilitarian”. That is: minimize suffering first, then maximize joy. It does not appear to me that the suffering, for a small number of hedonists (like, say, the number of rapists on the planet), of not being able to rape people is greater than the suffering that would be inflicted if they had their way.
If you accept those premises I just put forward, then you understand that my choice is to stop the rapists for utilitarian reasons, and also because I don’t want them to do this again.
So okay, least-convenient possible world time. Given that they won’t cause any additional suffering after this incident, given that their suffering from not being able to commit rape is greater than the victim’s (why this would be true I have no idea), then sure, whatever, let them have their fun shortly before their logically ridiculous universe is destroyed because the consequences of this incident as interpreted by our universe would not occur.
I hope this justifies my position from a utilitarian standpoint, though I do have deontological concerns about rape. It’s one of those things that seems to Actually Be Unacceptable, but I hope I’ve put this intuition sufficiently aside to address your concerns.
One more thing… It kind of pisses me off that people still bring up the torture vs. dust specks thing. From where I stand, the debate is indisputably settled. But, ah, I guess you might call that “arrogance”. But whatever.
Then you are not consistent. In one example you are willing to allow suffering, because 50 years of torture is less than the 3^^^3 dust holocaust. You claim that suffering is suffering. Yet a mere 10 deprived rapists already have you changing your thoughts.
I do not have an answer. If anything I would consider myself a weak dust-specker. The only thing I claim is that I am not arrogant; I am consistent in my stance. I do not know the answer, but I am willing to explore the dilemma of torture vs. specks, and rape vs. deprived rapists. Torture is rape, is it not? Yet I will allow torture for 50 years because you do not believe that deprived rapists are suffering. I am afraid that is not up to you to decide.
All I ask is to present tough questions. The downvotes, I believe, are hurting discussion, as I have never declared anything controversial except to ask people to reconcile their beliefs to be consistent. I am actually quite disappointed in how easily people are frustrated. I apologize if I have pissed you off.
You must have missed the part of my response where I say that given your premises, yes, I choose to let the fucking rapists commit the crime. The rest of my post just details how your premises are wrong. I am internally consistent.
Your comment was saying that “if you change your answer here, it shows that you are not consistent.” I replied with reasons that this is not true, and you replied by continuing on the premise that it is true.
No! You do not get to decide whether I’m consistent!
See also this comment, which deserves a medal. Your problem is wrong, which is why you’re coming to this incorrect conclusion that I am inconsistent.
Grognor, thanks for your reply. You are right that you are consistent, as you did admit in your second scenario that you would let the sickos have their fun.
I would like to continue the discussion on why my problem is wrong in a friendly and respectable way, but the negative score points really are threatening my ability to post, which is quite unfortunate.