Can you describe the violation of continuity you observe in humans?
Robyn Dawes describes one class of such violations in his Rational Choice in an Uncertain World. (Edit: And he makes the case—quite convincingly, IMO—that such violations are not irrational.) You can search my old LessWrong comments and find some threads where I explain this. If you also search my comments for keywords “grandmother” and “chicken”, you’ll find some more examples.
If you can’t find this stuff, I’ll take some time to find it myself at some point, but not right now, sorry.
Found it here
Let us say I prefer the nonextinction of chickens to their extinction (that is, I would choose not to murder all chickens, or any chickens, all else being equal). I also prefer my grandmother remaining alive to my grandmother dying. Finally, I prefer the deaths of arbitrary numbers of chickens, taking place with any probability, to any probability of my grandmother dying.
Would you also prefer losing an arbitrary amount of money to any probability of your grandmother dying? I think chickens can be converted into money, so you should prefer this as well. I’m hoping that you’ll find this preference equivalent, but then find that your actions don’t actually follow it.
a) Chickens certainly can’t be converted into money (in the sense you mean)
b) Even if they could be, the comparison is nonsensical, because in the money case, we’re talking about my money, whereas in the chickens case we’re talking about chickens existing in the world (none of which I own)
c) That aside, I do not, in fact, prefer losing an arbitrary amount of money to any probability of my grandmother dying (but I do prefer losing quite substantial amounts of money to relatively small probabilities of my grandmother coming to any harm, and my actions certainly do follow this)
Chickens are real wealth owned by real people. Pressing a magical button that destroys all chickens would do massive damage to the well-being of many people. So, you’re not willing to sacrifice your own wealth for tiny reductions in the probability of dead grandma, but you’d gladly sacrifice the wealth of other people? That would make you a bad person. And the economic damage would end up affecting you eventually anyway.
I rather think you’ve missed most, if not all, of the point of that hypothetical (and you also don’t seem to have fully read the grandparent comment to this one, judging by your question).
Perhaps we should set the grandmother/chickens example aside for now, as we’re approaching the limit of how much explaining I’m willing to do (given that the threads where I originally discussed this are quite long and answer all these questions).
and you also don’t seem to have fully read the grandparent comment to this one, judging by your question
Do you mean the a), b), c) comment? Which section did I miss?
Either way, I ask: do you prefer destroying an arbitrary amount of wealth (not yours) to any probability of your grandma dying? At least give a Yes/No.
Take a look at the other example I cited.
From some book? You know, it would be great if your arguments were contained in your comments.
As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!)
Edited to add:
Either way, I ask: do you prefer destroying an arbitrary amount of wealth (not yours) to any probability of your grandma dying? At least give a Yes/No.
What difference does it make?
If I say “yes”, then we can have the same conversation as the one about the chickens. It’s just another example of the same thing.
If I say “no”, then it’s not a relevant example at all and there’s no reason to discuss it further.
This is a totally pointless line of inquiry; this is the last I’ll say about it.
As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!)
Where? I didn’t see any such things in the LW comments I found. Are there more threads? Are you going to link to them? You’ve made a big claim, and I haven’t seen nearly enough defense for it.
What difference does it make?
Of course, the question is such that I get to feel right either way. If you say “no”, then I can deduce that you don’t understand what “wealth” is. If you say “yes”, then I can deduce that you’re a sociopath with a poor understanding of cause and effect. Charitably, I could imagine that you were talking about destroying chickens in some parallel universe, where their destruction could, with 100% certainty, have no consequences for you, but that’s a silly scenario too.
http://www.greaterwrong.com/posts/g9msGr7DDoPAwHF6D/to-what-extent-does-improved-rationality-lead-to-effective#kpMS4usW5rvyGkFgM
Regarding the grandma-chicken argument, having given it some thought, I think I understand it better now. I’d explain it like this. There is a utility function u such that all of my actions maximize Eu. Suppose that u(A) = u(B) for some two choices A and B. Then I can claim that A > B, and exhibit this preference in my choices, i.e., given a choice between A and B I would always choose A. However, for every B+ such that u(B+) > u(B), I would also claim B < A < B+. This does violate continuity; however, because I’m still maximizing Eu, my actions can’t be called irrational, and the function u is hardly any less useful than it would be without the violation.
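For concreteness, here is a minimal sketch of the tie-breaking rule described above (in Python, with invented option names and utility values; it illustrates the claim as stated, not anything from Dawes):

```python
# An agent that always maximizes expected utility but breaks exact ties
# with a secondary ranking. Utilities and names are illustrative only.

def choose(options):
    """options: dict of name -> (expected_utility, tiebreak_rank); returns the pick."""
    # Tuples compare lexicographically: expected utility first, tiebreak second.
    return max(options, key=lambda name: options[name])

u = {
    "A":  (1.0, 1),   # u(A) = u(B), but A wins on the tiebreak
    "B":  (1.0, 0),
    "B+": (1.1, 0),   # any strictly higher-utility option beats both
}

print(choose({"A": u["A"], "B": u["B"]}))    # -> A
print(choose({"A": u["A"], "B+": u["B+"]}))  # -> B+
# So B < A < B+ even though u(A) = u(B). No mix of B and B+ is ever
# indifferent to A (for p < 1 the mix has higher Eu; at p = 1 it is just B),
# which is the continuity violation, yet every choice still maximizes Eu.
```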
Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/wpT7LwqLnzJYFMveS.
I finally read your link. So the main argument is that there is a preference between different probability distributions over utility, even if expected utility is the same. This is intuitively understandable, but I find it lacking specificity.
I propose the following three-step experiment. First, a human chooses a distribution X from two choices (X=A or X=B). Then we randomly draw a number P from the selected distribution X, and then we try to win $1 with probability P (and $0 otherwise, which I’ll ignore by setting u($0)=0, because I can). Here you can plot X as a distribution over expected utility, which equals P times u($1). The claim is that some distributions X are preferable to others, despite what pure utility calculations say. I.e., Eu(A) > Eu(B), but a human would choose B over A and would not be irrational. Do you agree that this experiment accurately represents Dawes’s claim?
Naturally, I find the argument bad. The double lottery can easily be collapsed into a single lottery, and the final probabilities can easily be computed (which is what Eu does). If P(win $1 | A) = P(win $1 | B), then you’re free to make either choice, but if P(win $1 | A) > P(win $1 | B) even by a hair and you choose B, you’re being irrational. Note that the choices of $0 and $1 as the prizes are completely arbitrary.
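A quick numerical check of the collapse claim above; the two distributions used here are invented for the example.

```python
import random

# Stage 1: draw P from the chosen distribution X; stage 2: win $1 with
# probability P. Distribution A (a 50/50 mix of 0.2 and 0.8) and B (always
# 0.5) are assumptions for illustration; both have E[P] = 0.5.

def two_stage_win_rate(draw_p, trials=200_000):
    wins = 0
    for _ in range(trials):
        p = draw_p()                    # stage 1
        wins += random.random() < p     # stage 2
    return wins / trials

def draw_a():
    return random.choice([0.2, 0.8])

def draw_b():
    return 0.5

print(two_stage_win_rate(draw_a))  # ~0.5
print(two_stage_win_rate(draw_b))  # ~0.5
# P(win $1 | X) equals E[P under X], so by expected-utility lights only the
# mean of X matters; Dawes's claim is that the spread of X can still matter
# to a human chooser without that being irrational.
```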
I would love to respond to your comment, and will certainly do so, but not here. Let me know what other venue you prefer.
I’m afraid that the moderation policy on this site does not permit me to do so effectively
Are you referring to that one moderation note? I think you’re overreacting.
I’m afraid not.
I think that he set the thought experiment in the Least Convenient Possible World. So your last hypothesis is right.
This seems like a weird preference to have. This de facto means that you would never pay any attention whatsoever to the lives of chickens, since any infinitesimally small change to the probability of your grandmother dying will outweigh any potential moral relevance. For all practical purposes in our world (which is interconnected to a degree that almost all actions will have some potential consequences for your grandmother), an agent following this preference would be indistinguishable from someone who does not care at all about chickens.
This de facto means that you would never pay any attention whatsoever to the lives of chickens
Only if that agent has a grandmother.
Suppose my grandmother (may she live to be 120) were to die. My preferences about the survival of chickens would now come into play. This is hardly an exotic scenario! There are many parallel constructions we can imagine. (Or do you propose that we decline to have preferences that bear only on possible future situations, not currently possible ones?)
Edited to add:
This is called “lexicographic preferences” (see the sketch below), and it too is hardly exotic or unprecedented.
(end edit)
+++
Of course, even that is moot if we reject the proposition that “our world … is interconnected to a degree that almost all actions will have some potential consequences to your grandmother”.
And there are good reasons to reject it. If nothing else, it’s a fact that given sufficiently small probabilities, we humans are not capable of considering numbers of such precision, and so it seems strange to speak of basing our choices on them! There is also noise in measurement, errors in calculation, inaccuracies in the model, uncertainty, and a host of other factors that add up to the fact that in practice, “almost all actions” will, in fact, have no (foreseeable) consequences for my grandmother.
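A minimal sketch of the lexicographic comparison mentioned above, with invented outcome scores; it also shows that the chicken term does decide once the grandmother term is settled.

```python
# Outcomes scored as (grandmother_alive, chickens_alive); tuples compare
# lexicographically, so the first coordinate dominates and the second only
# matters when the first is equal. Numbers are illustrative assumptions.

alive_no_chickens   = (1, 0)       # grandmother alive, no chickens
alive_many_chickens = (1, 10**9)   # grandmother alive, a billion chickens
dead_many_chickens  = (0, 10**9)   # grandmother dead, a billion chickens

print(alive_no_chickens > dead_many_chickens)    # True: grandmother outranks any number of chickens
print(alive_many_chickens > alive_no_chickens)   # True: with grandmother settled, chickens count
# The preference over chickens is real; it simply never gets to decide
# while the grandmother coordinate is still at stake.
```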
The value of information of finding out the consequences that any action has on the life of your grandmother is infinitely larger than the value you would assign to any number of chickens. De facto this means that even if your grandmother is dead, as long as you are not literally 100% certain that she is dead and forever gone and could not possibly be brought back, you completely ignore the plight of chickens.
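To make the “de facto ignores chickens” point concrete, a tiny illustration with invented risk numbers:

```python
# Two hypothetical actions scored as (risk to grandmother, chickens saved).
# Lower risk wins outright; chickens break ties only if the risks are
# exactly equal. All numbers here are invented for the illustration.

save_chickens = (2e-12, 10**9)   # marginally higher risk, a billion chickens saved
do_nothing    = (1e-12, 0)       # marginally lower risk, no chickens saved

def lex_prefer(x, y):
    # Compare (risk, -chickens) lexicographically: minimize risk, then maximize chickens.
    return x if (x[0], -x[1]) < (y[0], -y[1]) else y

print(lex_prefer(save_chickens, do_nothing))  # -> (1e-12, 0)
# Unless the risk estimates come out exactly equal, the chicken term never
# gets a say, which is the claim being made above.
```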
The fact that he is not willing to kill his grandmother to save the chickens doesn’t imply that chickens have 0 value or that his grandmother has infinite value.
Consider the problem from an egocentric point of view: to be responsible for one’s grandmother’s death feels awful, but dedicating your life to a very unlikely possibility of saving someone who has been declared dead also seems awful.
Stuart wrote a post about this a while ago, though it’s not the most understandable.