As others have said, the scenario doesn’t require linearity.
You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original. Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point. If you want to say that the assumptions of the dust speck dilemma are unrealistic, you are free to do so (although such a statement is rather trivial; nobody believes that there are 3^^^3 humans in the world). If you, on the other hand, object to the utilitarian principles involved in the answer, then do it. But please don’t mix these two types of objections together.
There were already many people who espoused choosing “specks”, rationalising it by all sorts of elaborate arguments (not a surprising thing to see, since “specks” is the intuitive answer). This is the easy part. But I haven’t seen anybody propose a coherent general decision algorithm which returns “specks” for this dilemma and doesn’t return repugnant or even paradoxical answers to different questions. This is the hard part; engaging with it would be much more interesting.
You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original. Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point.
This seems to be endemic in the discussion section, as of late.
You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original.
By what means do you justify this assertion? Actually, there are two. Please explain your reasoning for both:
The notion that I am rejecting the thought experiment at all.
That I do so by means of “issues that are stipulated to be missing in the original”.
Insofar as I can determine, both of these are simply false.
Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point.
What about my argument makes you believe that my rejections are based on finding things repugnant as opposed to rejections on purely utilitarian grounds?
If you, on the other hand, object to the utilitarian principles involved in the answer, then do it.
I am confused as to why you would believe that I was objecting to utilitarian principles when my argument depends upon consequential utilitarianism.
This is the hard part; engaging with it would be much more interesting.
The original thought experiment presents you with a choice between X: one person will suffer horribly for 50 years, and Y: 3^^^3 people will experience minimal inconvenience for a second. The point clearly was to compare the utilities of X and Y, so it is assumed that all other things are equal.
You have said that you choose Y, because you “cannot accept the culture/society that would permit such a torture to exist”. But the society would not be changed in the original experiment (assume, for example, that nobody except you would know about the tortured person). You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone.
So, to explicitly reply to your questions, (1) you reject the original problem of whether u(Y) > u(X), because you answer a different question, namely whether u(Y) > u(X and Z), and (2) the issue missing in the original is Z.
What about my argument makes you believe that my rejections are based on finding things repugnant as opposed to rejections on purely utilitarian grounds?
Nothing. I have only said that you are doing the same thing that others do in similar situations.
(In order not to be evasive, I admit believing that you reject the “torture” conclusion intuitively and then rationalise it. But this belief is based purely on the fact that this is what most people do; there is nothing in your arguments (apart from them being unconvincing) that further supports this belief. Now, do you admit that the “torture” variant is repugnant to you?)
I am confused as to why you would believe that I was objecting to utilitarian principles when my argument depends upon consequential utilitarianism.
This is partly due to my bad formulation (I should have probably said “calculations” instead of “principles”), and partly due to the fact that it is not so clear from your post what your argument depends upon.
The point clearly was to compare the utilities of X and Y, so it is assumed that all other things are equal.
You have said that you choose Y, [...] But the society would not be changed in the original experiment (assume, for example, that nobody except you would know about the tortured person).
This privileges the hypothesis. You’re claiming that there will be no secondary consequences and therefore secondary consequences need not be considered. This is directly antithetical to the notion of treating these questions in an “all other things being equal” state: of course if you arbitrarily eliminate the potential results of decision X as compared to decision Y, that’s going to affect the outcome of which decision is preferable. But that, then, isn’t answering the question asked of us. THAT question is asked agnostic of the conditions in which it would be implemented. So we don’t get to impose special conditions on how it would occur. Indeed, rather than me adding things the original hypothesis excludes, it seems to me that you are doing the exact opposite of this: you are excluding things the original hypothesis does not.
In other words, to my current understanding of that hypothetical, I am the one closest to answering it without imposed additional conditions.
You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone.
I see. There is an error in your reasoning here, but I can understand why it would be non-obvious. You are assuming that u(n) != n + Z(n) in my formulation. The reason why this would be non-obvious is because I listed no value for Z(Y). The reason why I did not list such a value is because I am not at this time aware that said value is non-zero. So the equation remains a question of whether u(Y) is greater or lesser than u(X). The point we disagree on is not the hypothesis itself—the comparison of u(Y) to u(X), but rather the terms of the utility function.
In other words, exactly what I explicitly stated: I argue that the discussion on this topic thus far uses an insufficient definition of “utility”, especially for consequentialistic utilitarianism, and therefore “misses the point”.
(In order not to be evasive, I admit believing that you reject the “torture” conclusion intuitively and then rationalise it. But this belief is based purely on the fact that this is what most people do.
Fair enough. Thank you.
there is nothing in your arguments (apart from them being unconvincing) that further supports this belief.
I find no reason to accept the notion that my arguments are unconvincing. This, then, is the crux of the matter: What is your argument for supporting the notion that ONLY primary consequences are a valid form of consequences for a utilitarian to consider in making a decision?
Now, do you admit that the “torture” variant is repugnant to you?
Not at all. I have addressed this purely in terms of quantity. My argument is phrased in terms of utilon quantity. I reject the condonement of torture because of the utilitarian consequences of accepting it. (If it’s any help, please be aware that I am a diagnosed autist, so my empathy to others is primarily intellectual in nature. I am fully able to compartmentalize that trait when useful to dialogue.)
Examples?
Of what?
“But I haven’t seen anybody propose a coherent general decision algorithm which returns “specks” for this dilemma and doesn’t return repugnant or even paradoxical answers to different questions.”
This privileges the hypothesis. You’re claiming that there will be no secondary consequences and therefore secondary consequences need not be considered. This is directly antithetical to the notion of treating these questions in an “all other things being equal” state.
What? Which hypothesis do I privilege? How does assuming no secondary consequences of either variant contradict treating the other things as being equal?
There is an error in your reasoning here, but I can understand why it would be non-obvious. You are assuming that u(n) != n + Z(n) in my formulation. …
If n refers to either X or Y, I certainly don’t assume that u(n) != n + Z(n), because such a thing has no sensible interpretation (“u(X) = X” would read “utility of torture is equal to torture”). If n refers to number of people dust-specked or some other quantity, I still have no idea what you mean by Z(n). In my notation, Z was not a function, but a change of state of the world (namely, that society begins tolerating torture). So, maybe there is an error in my reasoning, but certainly you are not understanding my reasoning correctly.
As for your demanded examples, I am still not sure what you want me to write.
Edit: seems to me that I made the same reply as paper-machine, even accidentally using the same symbols X, Y and Z, but in his use these are already utilities, while in my use they are situations. So, paper-machine.X = prase.u(X).
How does assuming no secondary consequences of either variant contradict treating the other things as being equal?
Because, in order to achieve that state, you must impose special conditions on the implementation of the hypothetical. Ones the hypothetical itself is agnostic to. The only way to eliminate secondary consequences from consideration, in other words, is to treat the hypotheticals unequally.
I also began by stating, if you’ll recall, that if you do so isolate the query to first-consequences only, all that you practically achieve is a comparison of the net total quantity of suffering directly imposed by the two scenarios. And all that achieves is to suss out whether your view of suffering is linear or logarithmic in nature. To the logarithmic-adherent, the torture scenario is an effectively infinite suffering. I don’t know if you’ve ever tortured or been tortured, but I can assure you that fifty years is far more than is necessary for a single person’s psyche to be irrevocably demolished, reconstructed, and demolished repeatedly. Eliezer’s original discussion of said torture evinced, quite clearly, that he adheres to the linear-additive perspective. This is perfectly clear when he says that it “isn’t the worst thing that could happen to a person”.
If n refers to either X or Y, I certainly don’t assume that u(n) != n + Z(n), because such a thing has no sensible interpretation (“u(X) = X” would read “utility of torture is equal to torture”).
Alright, fine. u(n) = s(n) + Z(n), where u(n) is the total anti-utility of scenario n, s(n) is the suffering directly induced by scenario n, and Z(n) is the anti-utility of all secondary consequences of scenario n.
If n refers to number of people dust-specked or some other quantity, I still have no idea what you mean by Z(n). In my notation, Z was not a function, but a change of state of the world (namely, that society begins tolerating torture).
Z is the function for determining the secondary consequences of scenario n. It has a specific value depending on the scenario chosen.
but certainly you are not understanding my reasoning correctly.
Where am I mistaken? What am I mistaking you on?
As for your demanded examples, I am still not sure what you want me to write.
… Why would you declare a topic interesting when you are unable to even describe it? You are the one who brought it up… provide examples of scenarios that fulfill your description.
If you want to discuss the topic, if you find it interesting—discuss it! I opened the floor to it.
I will not reply to the first paragraph, because we clearly disagree about what “ceteris paribus” means, while this disagreement has little to no relevance to the original problem.
effectively infinite
If it is finite, the logic behind choosing torture works. If it is infinite, you have other problems. But you can’t have it both ways.
Where am I mistaken? What am I mistaking you on?
You have said “[y]ou are assuming that u(n) != s(n) + Z(n) in my formulation”; I had been assuming no such thing.
Why would you declare a topic interesting when you are unable to even describe it? You are the one who brought it up… provide examples of scenarios that fulfill your description.
Recall that you are probably reacting to this:
I haven’t seen anybody propose a coherent general decision algorithm which returns “specks” for this dilemma and doesn’t return repugnant or even paradoxical answers to different questions. This is the hard part; engaging with it would be much more interesting.
No mention of any scenarios. If you want me to describe a consistent decision theory which returns “specks” and has no other obvious downsides, well, I can’t, because I have none. Nor do I believe that such a theory exists. You believe that “specks” is the correct solution.
I will not reply to the first paragraph, because we clearly disagree about what “ceteris paribus” means, while this disagreement has little to no relevance to the original problem.
If you are not stipulating the relevance of secondary consequences to the original hypothesis then this conversation is at an end, with this statement. Either they are relevant, as is my entire argument, or they are not. Claiming via fiat that they are not will earn you no esteem on my part, and will cause me to consider your position entirely without merit of any kind; it is the ultimate in dishonest argumentation tactics: “You are wrong because I say you are wrong.”
If it is finite, the logic behind choosing torture works.
Rephrase this. As I currently read it, you are stating that “if torture is infinite suffering, then torture is the better thing to be chosen.” That is contradictory.
If it is infinite, you have other problems. But you can’t have it both ways.
Not at all. As I have stated iteratively, suffering is not the sole relevant form of utility. Determining how to properly weight the various forms of utility against one another is necessary to untangling this. It is not at all obvious that they even can be so weighted.
You have said “[y]ou are assuming that u(n) != s(n) + Z(n) in my formulation”; I had been assuming no such thing.
If that were the case then you really shouldn’t have said this: “You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone.”
Because now we are left with two contradictory statements uttered by you. Either Z(n) is a part of the function of u(n), or it is not. These are mutually exclusive. You cannot have both.
So, which statement of yours, then, is the false one?
No mention of any scenarios.
“repugnant or even paradoxical answers to different questions.” <-- A rose, sir, by any other name.
I do not know why you seem to find it necessary to insist that things you have said aren’t in fact things you have said; I do not know why you seem to find it necessary to adhere to such rigid verbiage that synonymous phrasings of things you have said are rejected as statements you never made.
It is, however, a frustrating pattern, and is causing me to lose interest in this dialogue.
It is, however, a frustrating pattern, and is causing me to lose interest in this dialogue.
Ending the dialogue is probably the best option. I am only going to provide you with one example of the paradoxes you have demanded, since it was probably my fault that I did not understand your request. (Next time I exhibit a similar lack of understanding, please tell me plainly and directly what you are asking for. Beware the illusion of transparency. I really have no dark motives to pretend misunderstanding when there is none.)
So, the most basic problem with choosing “specks” over “torture” is the one already described in the original post: torturing 1 person for 50 years (let’s call that scenario X(0)) is clearly better than torturing 10 people for 50 years minus 1 second (X(1)); to deny that means that one is willing to subject 9 people to 50 years of agony just to spare 1 person one second of agony. X(1) is then better than torturing 100 people for 50 years minus 2 seconds (X(2)), and so on. There are about 1.5 billion seconds in 50 years, so let’s define X(n) recursively as torturing ten times more people than in scenario X(n-1) for a time equal to 1,499,999,999/1,500,000,000 of the time used in scenario X(n-1). Let’s also decrease the pain slightly in each step: since pain is difficult to measure, let’s precisely define the way torture is done: by simulating the pain one feels when the skin is burned by hot iron on p percent of the body surface; at X(0) we start with burning the whole surface, and p is decreased in each step by the same factor as the duration of torture. At approximately n = 3.8 * 10^10, X(n) means taking 10^(3.8*10^10) people and touching their skin with a hot needle for 1/100 of a second (the tip of the needle which comes into contact with the skin will have an area of 0.0001 square millimeters). Now this is such negligible pain that a dust speck in the eye is clearly worse.
So, we have X(3.8*10^10) which is better than dust specks with just 10^(3.8*10^10) people (a number much lower than 3^^^3), and you say that dust specks are better than X(0). Therefore there must be at least one n such that X(n) is strictly worse than X(n+1). Now this seems paradoxical, since going from X(n) to X(n+1) means reducing the amount of suffering of those who already suffer by a tiny amount, roughly one billionth, for the price of adding nine new sufferers for each existing one.
(Please note that this reasoning doesn’t assume anything about utility functions—it uses only preference ordering—nor it assumes anything about direct or indirect consequences of torture.)
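A minimal sketch of the arithmetic behind the n = 3.8 * 10^10 figure, using only the constants given in the comment above; nothing else is assumed:

```python
# Sketch: how many reduction steps until a 50-year torture shrinks to a ~1/100 s
# needle touch, when each step multiplies the duration (and pain intensity) by
# 1,499,999,999/1,500,000,000 and multiplies the number of people by 10.
import math

seconds_in_50_years = 1.5e9
factor = 1_499_999_999 / 1_500_000_000   # per-step scaling of duration/intensity
target = 0.01                            # seconds: the hot-needle touch

# Solve seconds_in_50_years * factor**n = target for n.
n = math.log(target / seconds_in_50_years) / math.log(factor)
print(f"n is about {n:.2e}")                         # ~3.8e10, as stated above
print(f"people involved at that step: 10^{n:.0f}")   # enormous, yet far below 3^^^3
```

The exact constants do not matter; the point is only that the chain from X(0) down to the needle scenario has finitely many steps.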
That is counter-intuitive, but isn’t the anti-torture answer something analogous to sets? That is:
R(0) is the set of all real numbers. We know that it is an uncountable infinity, and therefore larger than any countable infinity. Set R(n) is R(0) with n elements removed. As I understand it, so long as n is a countable infinity or smaller, R(n) is equal in size to R(0). [EDITED TO REMOVE INCORRECT MATH.]
To cash out the analogy, it might be that certain torture scenarios are preferable to other torture scenarios, but all non-torture scenarios are less bad than all torture scenarios. As you increment down the amount of suffering in your example, you eventually remove so much that the scenario is no longer torture. In notation somewhat like yours, Y(50 yr) is the badness of imposing pain as you describe on one person for 50 years. We all seem to agree that Y(50 yr) is torture. I assert something like Y(50 yr - A) is torture if Y(A) would not be torture.
I agree that you can’t say that suffering is non-linear (that is, think that dust-specks is preferable to torture) without believing something like what I laid out.
Logos, those “secondary” effects you point to are the properties that make Y(A) torture (or not).
This is consistent. But it induces further difficulties in the standard utilitarian decision process.
To express the idea that all non-torture scenarios are less bad than all torture scenarios by a utility function, there must be some (negative) boundary B between the two sets of scenarios, such that u(any torture scenario) < B and u(any non-torture scenario) > B. Now either B is finite or it is infinite; this matters when probabilities come into play.
First consider the case of B finite. This is the logistic curve approach: it means that any number of slightly super-boundary inconveniences happening to different people are preferable to a single case of slightly sub-boundary torture. I know of no natural physiological boundary of this sort; if severity of pain can change continuously, which seems to be the case, the sub-boundary and super-boundary experiences may be effectively indistinguishable. Are you willing to accept this?
Perhaps you are. Now this takes an interesting turn. Consider a couple of scenarios: X, which is slightly sub-boundary (thus “torture”) with utility B - ε (ε positive), and Y, which is non-torture with u(Y) = B + ε. Now utilities may behave non-linearly with respect to the scenario-describing parameters, but expected utilities have to be pretty linear with respect to probabilities; anything else means throwing utilitarianism out of the window. A utility maximiser should therefore be indifferent between scenarios X’ and Y’, where X’ = X with probability p and Y’ = Y with probability p (B - ε) / (B + ε).
Let’s say one of the boundary cases is, for the sake of concreteness, giving a person a 7.5-second electric shock of a given strength. So, you may prefer to give a billion people a 7.4999 s shock in order to avoid one person getting a 7.5001 s shock, but at the same time you would prefer, say, a 99.98% chance of one person getting the 7.5001 s shock to a 99.99% chance of one person getting the 7.4999 s shock. Thus, although the torture/non-torture boundary seems strict, it can be easily crossed when uncertainty is taken into account.
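A minimal numeric sketch of this boundary-crossing claim; B and ε are placeholder values (the comment does not fix them), only their signs and relative sizes matter:

```python
# Sketch: with a finite (negative) boundary B, a lottery over the "torture" shock
# can have higher expected utility than a slightly-more-likely lottery over the
# "non-torture" shock, even though the certain outcomes are ranked the other way.
B, eps = -1000.0, 1e-3          # placeholders: B well below zero, eps tiny

u_torture = B - eps             # the 7.5001 s shock (just past the boundary)
u_non_torture = B + eps         # the 7.4999 s shock (just short of it)

eu_torture_lottery = 0.9998 * u_torture          # 99.98% chance of the "torture" shock
eu_non_torture_lottery = 0.9999 * u_non_torture  # 99.99% chance of the "non-torture" shock

print(eu_torture_lottery > eu_non_torture_lottery)  # True: the boundary gets crossed
```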
(This problem can be alleviated by postulating a gap in utilities between the worst non-torture scenario and the best torture scenario.)
If it still doesn’t sound crazy enough, note that if there already are people experiencing an almost-boundary (but still non-torturous) scenario, decisions over completely unrelated options get distorted, since your utility can’t fall lower than B, where it already sits. Assume that one presently has utility near B (which must be achievable by adjusting the number of almost-tortured people and the severity of their inconvenience—which is nevertheless still not torture, nobody is tortured as far as you know—let’s call this adjustment A). Consider now decisions about money. If W is one’s total wealth, then u(W,A) must be convex with respect to W if its value is not much different from B, since no everywhere concave function can be bounded from below. Now, this may invert the usual risk aversion due to diminishing marginal utilities! (Even assuming that you can do literally nothing to change A).
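The step “no everywhere concave function can be bounded from below” deserves one qualifier (constant functions are the exception); a short sketch of why:

```latex
\text{Take } a<b \text{ with } f(a)\neq f(b), \quad s=\tfrac{f(b)-f(a)}{b-a}. \\
s<0:\ f(x)\le f(b)+s\,(x-b)\to-\infty \text{ as } x\to+\infty; \qquad
s>0:\ f(x)\le f(a)+s\,(x-a)\to-\infty \text{ as } x\to-\infty.
```

Either way a non-constant, everywhere-concave f is unbounded below, which is why u(W,A) cannot stay concave in W while bounded below by B.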
(This isn’t alleviated by a utility gap between torture and non-torture.)
Now, consider the second case, B = -∞. Then there is another problem: torture becomes the sole concern of one’s decisions. Even if p(torture) = 1/3^^^3, the expected utility is negative infinity and all non-torturous concerns become strictly irrelevant. One can formulate it mathematically as having a 2-dimensional vector (u1,u2) representing the utility. The first component u1 measures the utility from torture and u2 measures the other utility. Now, since you have decided never to trade torture for non-torture, you should choose the variant whose expected u1 is greater; only when u1(X) and u1(Y) are exactly equal does it become important whether u2(X) > u2(Y). Therefore you would find yourself asking questions like “if I buy this banana, would it increase the chance of people getting tortured?”. I don’t think you are striving to consistently apply this decision theory.
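A minimal sketch of the lexicographic rule this describes; the values are placeholders, only the ordering logic matters:

```python
# Sketch of the "B = -infinity" decision rule: compare expected torture-utility
# (u1) first; ordinary utility (u2) only breaks exact ties.
def better(a, b):
    """a and b are (expected_u1, expected_u2) pairs; returns the preferred one."""
    if a[0] != b[0]:
        return a if a[0] > b[0] else b
    return a if a[1] >= b[1] else b

# Any nonzero increase in the chance of torture outweighs any mundane gain:
buy_banana = (-1e-30, 5.0)      # infinitesimally raises p(torture), but tasty
skip_banana = (0.0, 0.0)
print(better(buy_banana, skip_banana))  # (0.0, 0.0): the banana never gets bought
```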
(This is related to the distinction between sacred and unsacred values, which is a fairly standard source of inconsistencies in intuitive decisions.)
Your reference to sacred values reminded me of Spheres of Justice. In brief, Walzer argues that the best way of describing our morality is by noting which values may not be exchanged for which other values. For example, it is illicit to trade material wealth for political power over others (i.e. bribery is bad). Or trade lives for relief from suffering. But it is permissible to trade within a sphere (money for ice cream) or between some spheres (dowries might be a historical example, but I can’t think of a modern one just this moment).
It seems like your post is a mathematical demonstration that I cannot believe the Spheres of Justice argument and also be a utilitarian. Hadn’t thought about it that way before.
I hear your general point, and I don’t dispute it.
But I think your set theory analogy isn’t quite right. Consider the set R - [0,1]. That’s all real numbers less than 0 or greater than 1. This is still uncountably infinite, and has equal cardinality to R, even though I removed the set [0,1], which is itself uncountably infinite.
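For completeness, a standard way (not from the thread) to see that removing [0,1] leaves the cardinality unchanged:

```latex
(1,\infty)\ \subset\ \mathbb{R}\setminus[0,1]\ \subset\ \mathbb{R},
\qquad x\mapsto \ln(x-1)\ \text{maps } (1,\infty) \text{ bijectively onto } \mathbb{R},
```

so |R| ≤ |R - [0,1]| ≤ |R|, and by Cantor–Schröder–Bernstein all three sets have the same cardinality, even though the removed interval is itself uncountable.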
X(0) is a smaller value of anti-utility than X(1), absolutely. I do not, however, know that the decrease of one second is non-negligible for that measurement of anti-utility, under the definitions I have provided.
There are about 1.5 billion seconds in 50 years, so let’s define X(n) recursively as torturing ten times more people than in scenario X(n-1) for a time equal to 1,499,999,999/1,500,000,000 of the time used in scenario X(n-1).
That math gets ugly to try to conceptualize (fractional values of fractional values), but I can appreciate the intention.
since pain is difficult to measure, let’s precisely define the way torture is done
This is a non-trivial alteration to the argument, but I will stipulate it for the time being.
At approximately n = 3.8 * 10^10, X(n) means taking 10^(3.8*10^10) people and touching their skin with a hot needle for 1/100 of a second (the tip of the needle which comes into contact with the skin will have an area of 0.0001 square millimeters). Now this is such negligible pain that a dust speck in the eye is clearly worse.
“Clearly”? I suffer from opacity you apparently lack; I cannot distinguish between the two.
Now this seems paradoxical, since going from X(n) to X(n+1) means reducing the amount of suffering of those who already suffer by a tiny amount, roughly one billionth, for the price of adding nine new sufferers for each existing one.
The paradox exists only if suffering is quantified linearly. If it is quantified logarithmically, a one-billionth shift on some position of the logarithmic scale is going to overwhelm the signal of the linearly-multiplicative increasing population of individuals. (Please note that this quantification is on a per-individual basis, which can, once quantified, be simply added.)
This is far from being a paradox: it is a natural and expected consequence.
“Clearly”? I suffer from opacity you apparently lack; I cannot distinguish between the two.
Then substitute “worse or equal” for “worse”; the argument remains.
I do not, however, know that the decrease of one second is non-negligible for that measurement of anti-utility, under the definitions I have provided.
Same thing; it doesn’t matter whether it is or it isn’t. The only things which matter are that X(n) is preferable or equal to X(n+1), and that “specks” is worse than or equal to X(3.8 * 10^10). If “specks” is also preferable to X(0), we have circular preferences.
If it is quantified logarithmically, a one-billionth shift on some position of the logarithmic scale is going to overwhelm the signal of the linearly-multiplicative increasing population of individuals.
So, you are saying that there indeed is n such that X(n) is worse than X(n+1); it means that there are t and p such that burning p percent of one person’s skin for t seconds is worse than 0.999999999 t seconds of burning 0.999999999 p percent of skins of ten people. Do I interpret it correctly?
Edited: “worse” substituted for “preferable” in the 2nd answer.
So, you are saying that there indeed is n such that X(n) is worse than X(n+1); it means that there are t and p such that burning p percent of one person’s skin for t seconds is worse than 0.999999999 t seconds of burning 0.999999999 p percent of skins of ten people. Do I interpret it correctly?
Yes.