From a statistical standpoint, lottery winners don’t exist—you would never encounter one in your lifetime, if it weren’t for the selective reporting.
When you said that, it seemed to me that you were saying that you shouldn’t play the lottery even if the expected payoff—or even the expected utility—were positive, because the payoff would happen so rarely.
Does that mean you have a formulation for rational behavior that maximizes something other than expected utility? Some nonlinear way of summing the utility from all possible worlds?
If someone suggested that everyone in the world should pool their money together, and give it to one person selected at random (pretend for the sake of argument that utility = money), people would think that was crazy. Yet the idea of maximizing expected utility over all possible worlds assumes that an uneven distribution of utility to all your possible future selves is as good as an equitable distribution among them. So there’s something wrong with maximizing expected utility.
Broken intuition pump. The fact that money isn’t utility (has diminishing returns) is actually very important here. I, for one, don’t think I can envision pooling and redistributing actual utility, at least not well enough to draw any conclusions whatsoever.
Also, a utility function might not be defined over selves at particular times, but over 4D universal histories, or even over the entire multiverse. (This is also relevant to your happiness vs. utility distinction, I think.)
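(To make the diminishing-returns point concrete, here is a minimal numeric sketch. The log utility function, the pool size, and the dollar amounts are all illustrative assumptions, not anyone's actual values.)

```python
import math

# Illustrative assumption: log utility over money, a standard stand-in
# for diminishing marginal returns (not anyone's actual utility function).
def u(money):
    return math.log(money)

n = 1_000_000     # hypothetical pool size
stake = 100.0     # hypothetical contribution per person
floor = 1.0       # residual wealth for the losers, so log is defined

# Scheme 1: everyone keeps their stake.
keep = u(stake)

# Scheme 2: pool everything and hand it all to one person at random.
# Expected *money* is essentially unchanged, but expected utility collapses.
pool = (1 / n) * u(n * stake) + (1 - 1 / n) * u(floor)

print(f"keep your $100:    E[u] = {keep:.4f}")   # ~4.6052
print(f"winner-takes-all:  E[u] = {pool:.4f}")   # ~0.0000
```

By Jensen's inequality this holds for any strictly concave utility: the winner-takes-all scheme is strictly worse despite near-identical expected money, which is why the pooling intuition doesn't transfer once utility itself, rather than money, is what gets distributed.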
What I’m getting at is that the decision society makes for how to distribute utility across different people, is very similar to the decision you make for how to distribute utility across your possible future selves.
Why do we think it’s reasonable to say that we should maximize average utility across all our possible future selves, when no one I know would say that we should maximize average utility across all living people?
Nothing so exotic. In game theory agents can be risk-averse, risk-neutral or risk-loving. This translates to convexity/concavity of the utility function.
The winning payoff would have to be truly enormous for the expected utility to be positive.
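(A back-of-the-envelope sketch of just how enormous, under assumed numbers: a $1 ticket, roughly Powerball-sized odds, $50,000 of current wealth, and log utility. Every figure is an illustrative assumption.)

```python
import math

wealth = 50_000.0          # assumed current wealth
ticket = 1.0               # assumed ticket price
p_win = 1 / 175_000_000    # roughly Powerball-sized odds (assumed)

# With log utility, buying the ticket breaks even when
#   p * ln((W - t + J) / (W - t)) = ln(W / (W - t)).
# J is far too large for a float, so solve in log space; the result is
# log10 of post-win wealth, which is ~log10(J) since J dwarfs W.
needed_nats = math.log(wealth / (wealth - ticket)) / p_win
log10_jackpot = (needed_nats + math.log(wealth - ticket)) / math.log(10)

print(f"break-even jackpot ~ 10^{log10_jackpot:.0f} dollars")  # ~ 10^1525
```

No physically realizable prize comes anywhere near that, so under these assumptions a log-utility agent never buys the ticket; a risk-neutral agent, by contrast, would only need the jackpot to exceed about $175 million for a $1 ticket at these odds.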
So I guess I’ll just go on posting disclaimers: Phil Goetz has an unusually terrible ability to figure out what I’m saying.
While you appear to be right about Phil's incorrect interpretation, I don't think he meant any malice by it... however, you appear to me to have meant malice in return. So, I think your comment borders on unnecessary disrespect, and if it were me who had made the comment, I would edit it to make the same point while sounding less hateful. If people disagree with me, please downvote this comment. (Though admittedly, if you edit your comment now, we won't get good data, so you probably should leave it as is.)
I admit that I’m not factoring in your entire history with phil much so you may have further justification of which I’m unaware, but my view I would expect to be shared even more by casual readers who don’t know either of you well. Maybe in that case, a comment like yours is fine, but only if delivered privately.
Agreed. Also, saying somebody is wrong and then not bothering to explain how does come across as somewhat rude, as it forces the other person to try to guess what they did wrong instead of providing more constructive feedback.
Phil does this a lot, usually in ways which present me with the dilemma of spending a lot of time correcting him, or letting others pick up a poor idea of what my positions are (because people have a poor ability to discount this kind of evidence). I’ve said as much to Phil, and he apparently thinks it’s fine to go on doing this—that it’s good for him to force me to correct him, even though others don’t make similar misinterpretations. Whether or not this is done from conscious malice doesn’t change the fact that it’s a behavior that forces me to expend resources or suffer a penalty, which is game-theoretically a hostile act.
So, to discourage this unpleasant behavior, it seems to me that rather than scratching his itch for his benefit (encouraging repetition), I should make some reply which encourages him not to do it again.
I would like to just reply: "Phil Goetz repeatedly misinterprets what I'm saying in an attempt to force me to correct him, which I consider very annoying behavior and have asked him to stop." If that's not what Phil intends… well, see how it feels to be misinterpreted, Phil? Unfortunately this comes too close to lying for my tastes, so I'll have to figure out some similar standard reply. Maybe even a standard comment to link to each time he does this.
Ok, I soften my critique given your reply, which made a point I hadn't fully considered.
It sounds like the public disrespect is intentional, and it does have a purpose. To be a good thing to do, you need to believe, among other things:
Publicly doing that is more likely to make him stop than privately doing it. (Seems plausible.)
You're not losing something greater than the wasted time by having other people observe you doing it. (Unclear to me.)
It would be better, I think, if you could just privately charge someone for the time wasted, but it does seem unlikely Phil would agree to that. I think your suggestion of linking to a fairly respectful but forceful reply works pretty well for the time being.
Sure. And my standard reply will be, “Eliezer repeatedly claims that I’m misinterpreting him in order to avoid addressing inconsistencies or ambiguities in what he has said.”
You’re doing it again.
Er, did you misparse? I think you read
Eliezer repeatedly claims that I’m (misinterpreting him in order to avoid addressing inconsistencies or ambiguities in what he has said)
I think he meant
Eliezer repeatedly claims (that I’m misinterpreting him) in order to avoid addressing inconsistencies or ambiguities in what he has said
I have to say, I disagree with much of what he says, but PhilGoetz has never struck me as one of the site's ne'er-do-wells.
You may not have noticed that I was accusing you of being insightful.
I’m trying to be sensitive to your issues about this. So how would you have suggested that I phrase my comment? I said, “This is what Eliezer seems to be saying”, and asked if that was what you were saying. I don’t know what you want. You seem to be saying (and I have to say things like this, because in order to have a conversation with someone you have to try to figure out what they mean) “Shut up, Phil.”
In this case, when I said you seemed to be saying that rational decision-making about playing the lottery does not mean maximizing expected utility, I was just being polite. You said it. I quote:
Those who buy tickets will not win the lottery. If you think the chance is worth talking about, you've fallen prey to the fallacy yourself.
This says that the chance of winning the lottery is so low that you don't need to do an expected utility calculation. I will not back down and pretend that I might be misinterpreting you in this instance. Maybe you meant to say something different, but this is what you said.
You’re tired of me trying to interpret what you say? Well, I’m tired of you trying to disclaim or ignore the logical consequences of what you say.
Eli tends to say stylistically: "You will not __" for what others, when they're thinking formally, express as "You very probably will not __". This is only a language confusion between speakers. There are other related ones here; I'll link to them later. Telling someone to "win" versus "try to win" is a very similar issue.
To be exact, I say this when human brains undergo the failure mode of being unable to discount small probabilities. Vide: “But there’s still a chance, right?”
That’s not what’s at issue. The statement still says that the chance of winning is so low as not to be worth talking about. That implies that one does not calculate expected utility. My interpretation is correct. Eliezer has written 3 comments in reply, and is still trying to present it as if what is at issue here is that I consistently misrepresent him.
I am not misrepresenting him. My interpretation is correct. As has probably often been the case.
“That implies that one does not calculate expected utility.”
My impression has been that Eliezer means X and writes "Y", where "Y" could be interpreted to mean either X or Z; you say "Eliezer means Z, which implies this other obviously wrong thing"; then Eliezer becomes upset because you have misinterpreted him, and you become upset because he is ignoring your noting of the ambiguity of "Y". Then hilarity is spawned.
A data point for ya.
Ambiguities can simply be asked. I might or might not answer depending on whether I had time. Speaking for a person is a different matter.
The comment that started this now-tedious thread said:
When you said that, it seemed to me that you were saying that you shouldn't play the lottery even if the expected payoff—or even the expected utility—were positive, because the payoff would happen so rarely.
Does that mean you have a formulation for rational behavior that maximizes something other than expected utility? Some nonlinear way of summing the utility from all possible worlds?
Sounds like asking to me. I clearly was not claiming to know what you were thinking.
Phil, I think you're interpreting his claim too literally (relative to his intent). He is only trying to help people who have a psychological inability to discount small probabilities appropriately. Certainly if the lottery award grows high enough, standard decision theory implies you play. This is one of the Pascal's mugging variants (similarly, whether to perform hypothetical exotic physics experiments with a small probability of yielding infinite (or just extremely large) utility and a large probability of destroying everything), which is not fully resolved for any of us, I think.
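(A toy illustration of that unresolved tension, with invented numbers: a risk-neutral expected-utility maximizer is eventually dominated by the tiny-probability tail, while a bounded utility function caps how much any small probability can matter.)

```python
# All numbers here are invented for illustration.
p = 1e-12          # tiny probability that the mugger's claim is true
cap = 1e9          # assumed bound on utility for the bounded agent

for payoff in [1e6, 1e12, 1e18, 1e24]:
    linear_eu = p * payoff                 # risk-neutral: grows without bound
    bounded_eu = p * min(payoff, cap)      # bounded: plateaus at p * cap
    print(f"payoff {payoff:.0e}:  linear EU = {linear_eu:.3g},"
          f"  bounded EU = {bounded_eu:.3g}")
```

Once the claimed payoff is large enough, the linear expected utility exceeds any finite cost of complying, so the unbounded maximizer pays the mugger; the bounded agent never values the offer above p * cap (here 0.001). Whether bounding utility is the right fix is exactly the part that isn't resolved.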
You’re probably right. But I’m still irritated that instead of EY saying, “I didn’t say exactly what I meant”, he is sticking to “Phil is stupid.”
If a gun were put to my head and I had to decide right now, I would agree with your irritation. However, he did make an interesting point about public disrespect as a means of deterrence which deserves more thought. If that method looks promising after further inspection, we'd probably want to reconsider its application, though it's still unclear to me to what extent it fits this case.
There’s also the consideration of total time expenditures on my part. Since the main reason I don’t respond at length to Goetz is his repeated behaviors that force me to expend large amounts of time or suffer penalties, elaborate time-consuming courtesies aren’t a solution either.
Agreed.
Yup.
Phil may be more likely to misinterpret you than the most prolific contributors, but he is probably less likely to misinterpret you than most readers of LW. I understand that this may be beside the point to you. I empathize, and wish I could think of a solution.