I think Torture vs Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don’t actually think like that: to them, not even an infinite number of dust specks is worse than a single person being tortured or dying. People arbitrarily place some bad things into a category that’s infinitely worse than another category.
So, I’d say that you aren’t preferring morality; you are simply treating 50 years of torture as infinitely worse than a dust speck: no number of people getting dust specks can possibly be worse than 50 years of torture.
The thing is, if you think that A and B aren’t comparable, with A>B, and if you don’t make some simplifying assumption like “any event with P < 0.01 is unworthy of consideration, no matter how great or awful” or something, then you don’t get to ever care about B for a moment. There’s always some tiny chance of A that has to completely dominate your decision-making.
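A minimal sketch of this dominance effect, under an assumed encoding of my own (not anything from the thread): score each option as a lexicographic pair, with the “infinitely worse” category in the first slot and everything else in the second.

```python
# Sketch only: an assumed lexicographic encoding of "A is infinitely worse
# than B". Expected disutility is a pair (A-term, B-term); Python's tuple
# comparison is lexicographic, so any nonzero A-term dominates any B-term.

def expected_disutility(p_a, n_a, p_b, n_b):
    """Pair of expected harm counts: A-events first, B-events second."""
    return (p_a * n_a, p_b * n_b)

# Option 1: a one-in-a-billion chance of one A-event (say, a death), no B.
opt1 = expected_disutility(1e-9, 1, 0.0, 0)
# Option 2: no chance of A, but a googol of certain B-events (dust specks).
opt2 = expected_disutility(0.0, 0, 1.0, 10**100)

print(opt1 > opt2)  # True: the tiny chance of A outweighs any amount of B
```

Under that encoding no probability discount ever lets the B-term matter, which is exactly the “never get to care about B” problem.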
After reading this several times, I have to conclude that I don’t understand what “comparable” means in this comment. Otherwise, I have no idea how one could think both that A and B aren’t comparable and that A > B.
I mean “comparable” as the negation of this line of thought: the idea that not even an infinite number of dust specks is worse than a single person being tortured or dying.
Ah.
So, you meant something like: if I think A is worse than B, but not infinitely worse than B, and I don’t have some kind of threshold (e.g., a threshold of probability) below which I no longer evaluate expected utility of events at all, then my beliefs about B are irrelevant to my decisions because my decisions are entirely driven by my beliefs about A?
I mean, that’s trivially true, in the sense that a false premise justifies any conclusion, and any finite system will have some threshold below which it simply doesn’t evaluate events.
But in a less trivial sense… hm.
OK, thanks for clarifying.
This is a good point, and I’ve pondered on this for a while.
Following your logic: we can observe that I’m not spending all my waking time caring about A (people dying somewhere for some reason). Therefore we can conclude that the death of those people is comparable to mundane things I choose to do instead—i.e. the mundane things are not infinitely less important than someone’s death.
But this only holds if my decision to do the mundane things in preference to saving someone’s life is rational.
I’m still wondering whether I do the mundane things by rationally deciding that they are more important than my contribution to saving someone’s life could be, or by simply being irrational.
I am leaning towards the latter—which means that someone’s death could still be infinitely worse to me than something mundane, except that this fact is not accounted for in my decision making because I am not fully rational no matter how hard I try.
We defined a dust speck to have nonzero negative utility. If you don’t think this describes reality, then you can substitute something else, like a stubbed toe.
As long as we can make a series of things, none of which is infinitely worse than the next, we can prove that nothing in the list is infinitely worse than any other. http://lesswrong.com/lw/n3/circular_altruism/u7x presents this well.
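For concreteness, here is that argument written out, under my own illustrative assumption that each item on such a list carries a positive real disutility h_i and each step is worse than the previous by at most a finite factor c_i:

```latex
% Assumption (illustrative): a finite list of disutilities h_1, ..., h_n
% with h_{i+1} \le c_i h_i for finite factors c_i.
h_n \;\le\; c_{n-1}\,h_{n-1} \;\le\; c_{n-1}c_{n-2}\,h_{n-2} \;\le\; \cdots \;\le\; \Big(\prod_{i=1}^{n-1} c_i\Big)\,h_1
```

With finitely many finite factors the product is finite, so the last item is only finitely worse than the first; whether such a chain from dust speck to torture actually exists is the separate question the replies below turn on.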
This is true, but not relevant to the question of whether 50 years of torture is infinitely worse than a speck of dust in the eye.
So is a single millisecond of torture also infinitely worse than a dust speck, as well as infinitely worse than everything else that isn’t itself infinitely worse than a dust speck, or is some time span of torture infinitely worse than a slightly shorter time span? If you postulate a discontinuity, that discontinuity has to be somewhere.
I guess this is what I get for replying to a torture post!
The point I was trying to make is mathematical: for sensible definitions of “finitely greater”, the statement “if we have a sequence of objects, each of which is only finitely greater than its predecessor, then every object on the list is only finitely greater than any earlier object” is true, but not relevant to the question of whether or not there exist infinitely large objects.
My goal was to flag up mathematical reasoning that doesn’t hold water, which apparently I failed to do.
For completeness, I should also mention that the linked post does not make the same error.
But it is extremely relevant to the question of whether or not there exist infinitely large objects on the list.
Whether or not everybody has a list is precisely the question asked in the top post of this thread.
Not at all. People who treat some things as infinitely worse than others don’t do so because they believe that a list that includes both somehow stops being a list, and the thread starter never implied anything in that direction. They just have inconsistent preferences (at least in the sense of being money-pumpable). Either that, or they bite the bullet and admit that for any such list there is at least one particular item infinitely worse than the preceding one. Denying that a list is a list is just nonsense.
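To unpack “money-pumpable”, here is the standard illustration, with cyclic preferences assumed purely for demonstration; the comment above asserts, but this sketch does not establish, that speck-choosers are in this position:

```python
# Standard money-pump illustration: an agent with cyclic strict preferences
# A > B > C > A pays a small fee for every swap up to something it prefers,
# so a full cycle of offers returns its original good at a net loss.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # assumed cyclic preferences

def accepts_swap(held, offered):
    # Accept iff the offered good is strictly preferred to the held one.
    return (offered, held) in PREFERS

holding, money, fee = "A", 100.0, 1.0
for offered in ("C", "B", "A"):  # one full cycle of offers
    if accepts_swap(holding, offered):
        holding, money = offered, money - fee

print(holding, money)  # 'A' 97.0: same good as at the start, three fees poorer
```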
We are in violent agreement (but I’m coming off worse!).
rstarkov suggested that people may have “utility functions” that don’t take real values.
Endoself’s comment “showed” that this cannot be, starting from the assumption that everybody has a preference system that can be encoded as a real-valued utility function. This is nonsense.
My non-disagreement with you seems to have stemmed from me not wanting to be the first person to say “order-type”, and us making different assumptions about how various posters’ positions projected onto our own internal models of “lists” (whatever they were).
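A sketch of what a non-real-valued “utility function” could look like, in the lexicographic spirit of the positions above; the class and its fields are my illustration, not anyone’s stated model:

```python
# Illustrative non-real-valued "utility": harm measured as a lexicographically
# ordered (tortures, specks) pair, so any torture outweighs any speck count.

from functools import total_ordering

@total_ordering
class Harm:
    def __init__(self, tortures, specks):
        self.key = (tortures, specks)

    def __eq__(self, other):
        return self.key == other.key

    def __lt__(self, other):
        return self.key < other.key  # lexicographic: tortures compared first

print(Harm(1, 0) > Harm(0, 10**100))  # True: one torture beats a googol of specks
```

With continuous quantities this is the textbook case: the lexicographic order on pairs of reals is a total order that provably has no real-valued utility representation, which is why assuming real-valued utility at the outset begs the question.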
You shouldn’t have used the words “not relevant”; that implied the statement had no important implications for the problem at all, rather than proving the (very relevant, since the topic is utilitarianism) hidden assumption wrong for that set of people (unless they bit the bullet).
It absolutely assumes that the two are comparable, and most of the smarter objections to it that I’ve seen invoke some kind of filtering function to zero out the impact of any particular dust speck on some level of comparison.
There are a number of objections to this that you could raise in practice: given a random distribution of starting values, for example, an additional dust speck would be sufficient to push a small percentage, but an unimaginably huge quantity, of victims’ subjective suffering over any threshold of significance we feel like choosing. I’m not too impressed with any of these responses—they generally seem to leverage special pleading on some level—but I’ve got to admit that they don’t have anything wrong with them that the filtering argument doesn’t.
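A sketch of that threshold objection; the population size, uniform distribution of starting values, cutoff, and per-speck epsilon are all my assumptions for illustration:

```python
# Sketch of the threshold objection: each speck adds a negligible epsilon,
# but across a vast population some victims sit just below any chosen
# significance threshold, and the speck pushes them over it.

import random

random.seed(0)
N = 10**6            # stand-in for an unimaginably huge population
threshold = 0.9      # arbitrary "significance" cutoff
epsilon = 1e-3       # disutility of one dust speck

before = [random.random() for _ in range(N)]
pushed_over = sum(threshold - epsilon <= s < threshold for s in before)

print(pushed_over / N)  # ~epsilon: a tiny fraction, but a huge absolute count
```

The fraction pushed over is tiny, but multiplied by a large enough population it is still an enormous number of people, whatever threshold we pick.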
Welcome to Less Wrong, by the way.
Argh, I have accidentally reported your comment instead of replying. I did wonder why it asks me if I’m sure… Sorry.
It does indeed appear that the only rational approach is for them to be treated as comparable. I was merely trying to suggest a possible underlying basis for people consistently picking dust specks, regardless of the hugeness of the numbers involved.
You did report it; I’ve ignored the report and now it is gone.