> and you also don’t seem to have fully read the grandparent comment to this one, judging by your question
Do you mean the a), b), c) comment? Which section did I miss?
Either way, I ask: do you prefer destroying an arbitrary amount of wealth (not yours) to any probability of your grandma dying? At least give a Yes/No.
> Take a look at the other example I cited.
From some book? You know, it would be great if your arguments were contained in your comments.
As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!)
Edited to add:
> Either way, I ask: do you prefer destroying an arbitrary amount of wealth (not yours) to any probability of your grandma dying? At least give a Yes/No.
What difference does it make?
If I say “yes”, then we can have the same conversation as the one about the chickens. It’s just another example of the same thing.
If I say “no”, then it’s not a relevant example at all and there’s no reason to discuss it further.
This is a totally pointless line of inquiry; this is the last I’ll say about it.
> As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!)
Where? I didn’t see any such things in the LW comments I found. Are there more threads? Are you going to link to them? You’ve made a big claim, and I haven’t seen nearly enough defense for it.
> What difference does it make?
Of course, the question is such that I get to feel right either way. If you say “no”, then I can deduce that you don’t understand what “wealth” is. If you say “yes”, then I can deduce that you’re a sociopath with a poor understanding of cause and effect. Charitably, I could imagine that you were talking about destroying chickens in some parallel universe, where their destruction would, with 100% certainty, have no consequences for you, but that’s a silly scenario too.
Regarding the grandma-chicken argument, having given it some thought, I think I understand it better now. I’d explain it like this. There is a utility function u such that all of my actions maximize Eu. Suppose that u(A) = u(B) for some two choices A, B. Then I can still claim that A > B, and exhibit this preference in my choices: given a choice between A and B, I would always choose A. However, for every B+ such that u(B+) > u(B), I would also claim B < A < B+. This does violate continuity; but because I’m still maximizing Eu, my actions can’t be called irrational, and the function u is hardly any less useful than it would be without the violation.
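To make that concrete, here is a minimal sketch of the tie-breaking rule I have in mind (the outcomes and utility numbers are illustrative assumptions, nothing more):

```python
# Sketch of the preference pattern above: the agent always maximizes Eu,
# but breaks exact utility ties in favor of A.
# The outcomes and utility values are assumed for illustration.

def u(outcome):
    return {"B": 1.0, "A": 1.0, "B+": 1.1}[outcome]  # u(A) == u(B) < u(B+)

def choose(options):
    """Return a utility-maximizing option, preferring A on exact ties."""
    best = max(u(o) for o in options)
    ties = [o for o in options if u(o) == best]
    return "A" if "A" in ties else ties[0]

assert choose(["B", "A"]) == "A"    # A > B, despite u(A) == u(B)
assert choose(["A", "B+"]) == "B+"  # yet B+ > A whenever u(B+) > u(B),
                                    # so the revealed order is B < A < B+
```

Every choice this rule makes still maximizes Eu; the tie-breaking preference is simply invisible to the utility function.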
Finally, I read your link. So the main argument is that there can be a preference between different probability distributions over utility, even when the expected utility is the same. This is intuitively understandable, but I find it lacking in specificity.
I propose the following three-step experiment. First, a human chooses a distribution X from two choices (X = A or X = B). Then we randomly draw a number P from the selected distribution X, and then we try to win $1 with probability P (and $0 otherwise, which I’ll ignore by setting u($0) = 0, because I can). Here you can plot X as a distribution over expected utility, which equals P times u($1). The claim is that some distributions X are preferable to others, despite what pure utility calculations say. I.e., Eu(A) > Eu(B), but a human would choose B over A and would not be irrational. Do you agree that this experiment accurately represents Dawes’s claim?
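For concreteness, here is how I would simulate the experiment (a sketch; the two example distributions A and B below are my own assumptions, both with E[P] = 0.5):

```python
import random

# Three-step experiment: choose a distribution X over the win
# probability P, draw P from X, then win $1 with probability P.

def draw_from_A():
    return 0.5                        # A: P is always exactly 0.5

def draw_from_B():
    return random.choice([0.1, 0.9])  # B: P is spread out, same mean

def play(draw_p, trials=100_000):
    wins = 0
    for _ in range(trials):
        p = draw_p()                  # step 2: draw P from X
        if random.random() < p:       # step 3: try to win $1
            wins += 1
    return wins / trials              # empirical P(win $1 | X)

print(play(draw_from_A))  # ≈ 0.5
print(play(draw_from_B))  # ≈ 0.5 as well
```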
Naturally, I find the argument bad. The double lottery can easily be collapsed into a single lottery; the final probabilities can easily be computed (which is what Eu does). If P(win $1 | A) = P(win $1 | B), then you’re free to make either choice, but if P(win $1 | A) > P(win $1 | B) even by a hair, and you choose B, you’re being irrational. Note that the choices of $0 and $1 as the prizes are completely arbitrary.
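Spelled out (my notation), the collapse is just the law of total probability. With u($0) = 0:

```latex
\Pr(\text{win } \$1 \mid X)
  \;=\; \mathbb{E}_{P \sim X}\!\left[\, \Pr(\text{win } \$1 \mid P) \,\right]
  \;=\; \mathbb{E}_{P \sim X}[P],
\qquad
Eu(X) \;=\; \mathbb{E}_{P \sim X}[P] \cdot u(\$1).
```

So comparing A and B reduces to comparing E[P] under A with E[P] under B, exactly as in the collapsed single lottery.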
> I’m afraid that the moderation policy on this site does not permit me to do so effectively
Are you referring to that one moderation note? I think you’re overreacting.
> Where? I didn’t see any such things in the LW comments I found. Are there more threads? Are you going to link to them? You’ve made a big claim, and I haven’t seen nearly enough defense for it.
http://www.greaterwrong.com/posts/g9msGr7DDoPAwHF6D/to-what-extent-does-improved-rationality-lead-to-effective#kpMS4usW5rvyGkFgM
> Regarding the grandma-chicken argument, having given it some thought, I think I understand it better now. […]
Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/wpT7LwqLnzJYFMveS.
> Finally I read your link. […] Naturally, I find the argument bad. […]
> Are you referring to that one moderation note? I think you’re overreacting.
I would love to respond to your comment, and will certainly do so, but not here. Let me know what other venue you prefer.
I’m afraid not.
I think he set the thought experiment in the Least Convenient Possible World. So your last hypothesis is right.