If the opportunity to fantasize about sexual intercourse were a rational justification for masturbation, we could design a “Hands-Free Masturbation Machine”: buy such a machine once and get unlimited stimulation. You could then use your hands to do the washing-up or play pick-up sticks.
The point is that human values are complex. If you tell someone that donating money to the Society for Treating Rare Diseases in Cute Kittens is irrational, because they’ll save more beings by helping to spawn a positive Singularity, you reduce the whole activity to helping beings. It’s not just about helping beings, it is also about feeling good and helping certain beings in a certain way at a certain time.
The same can be said about lotteries. Playing the lottery is not something that can be optimized, because optimizing is not what you care about. You just want to play the lottery and feel good about it.
What Eliezer Yudkowsky is promoting is in essence wireheading: replacing a complex activity with some sort of optimized substitute that gives you the same thing in a different way. But this implicitly assumes that the activity is just a means to an end. Many human activities are neither purely instrumental nor cleanly terminal. We do what we do because we want to do it, not as an instrumental step that can be optimized away.
The whole expected utility maximization idea is completely inhuman. Humans want to experience utility by giving in to their desires, not to optimize away their complex values in favor of increasing some abstract notion of expected reward.
Eliezer’s point is that if you think you should play the lottery then you are wrong about your own values, you don’t just have weird values. (And he’s helpfully correcting you.)
If you want fuzzies more than to save beings, your current values are “fuzzies (important)”, “saving beings (not important)”, and (likely unknown to you) “change the weight of ‘saving beings’ to ‘super duper important’ (super duper important)”. (Either that or you’re a serious jerk.)
I’m just not getting it. Do you really think people are never just stupid?
I’m just not getting it. Do you really think people are never just stupid?
No, people very often are just stupid. But I don’t approve of the way the posts on lotteries have been written. Playing the lottery is not intrinsically irrational; it is irrational only if one is confused about what it means to play the lottery. What Eliezer is basically saying is “you ought not to play lotteries because it is stupid.” That’s a pretty weak argument.
...“change the weight of ‘saving beings’ to ‘super duper important’ (super duper important)”. (Either that or you’re a serious jerk.)
I haven’t read the metaethics sequence, so maybe he has figured out some objective right that makes caring about beings more important than caring about oneself. I doubt it though, and I don’t think that it has been proven that humans are not selfish.
For me winning means to do what I want, the way I want it, when I want it. And I never regret anything, because at one time it was exactly what I wanted.
Eliezer’s point is that if you think you should play the lottery then you are wrong about your own values...
I don’t buy the general point of being wrong about one’s own values. I am not the same person as one who was smarter, knew more, and had unlimited resources to think about decisions.
If I adopt game- and decision-theoretic models, I discard my current values and replace them with some sort of equilibrium between me and the other agents who adopted the same strategies. But I don’t want to play that game; I don’t care. I care about my current values, not about what I would do if I were able to run models of all other agents and extrapolate their values and strategies.
If you asked a Cro-Magnon man about its goals and desires it would likely mention mating and hunting. Sure, if the Cro-Magnon was smarter and knew more, it would maybe care how to turn a sphere inside out. And if it knew even more? Where does this line of reasoning lead us?
Rationality is instrumental, not a goal in and of itself. Rationality can’t tell me what to value or how to allocate utility to my goals. If I want to cooperate when faced with the Prisoner’s dilemma, then that is what I want.
What. This goes completely against what I thought was a common human experience of following the stern moral obligation of kicking out your bisexual son even though it tears your heart out, discovering to your own surprise and horror that Leviticus shouldn’t dictate your values, and following the stern moral obligation of helping your son’s boyfriend in a lurch even though it disgusts you and costs you your church’s love. (And, yes, keeping this up even if you never derive the slightest satisfaction, if it kills you, and if you forget all about it the instant you’ve chosen.) Either you’re mistaken, or you’re very weird, or I and some disproportionately famous people are weird.
If you asked a Cro-Magnon man about its goals and desires it would likely mention mating and hunting. Sure, if the Cro-Magnon was smarter and knew more, it would maybe care how to turn a sphere inside out. And if it knew even more? Where does this line of reasoning lead us?
Why “it” rather than “he”? That’s confusing, for me at least.
Eliezer’s point is that if you think you should play the lottery then you are wrong about your own values, you don’t just have weird values.
I thought I already explained what was wrong with that here:
Re: Seriously, why can’t we just say that buying lottery tickets is stupid?
Buying a lottery ticket is not stupid—under some conditions.
Say you have two cents, and can’t afford your train fare home (which is one stop away). If you can gamble those two cents in a game of chance, you may be able to convert them into a whole train fare.
The conditions of being stuffed—unless you have a lot of money—may not be that uncommon, so many people may be inclined to gamble this way.
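The two-cents case can be made precise. If utility is effectively a step function (you either make it home or you don't), then a bet with terrible expected monetary value can still maximize expected utility. A minimal sketch in Python, with a hypothetical 100-cent fare and made-up odds:

```python
# Step utility: below the train fare, every amount of money is equally
# useless; at or above it, you get home. All numbers are illustrative.

def utility(cents, fare=100):
    return 1.0 if cents >= fare else 0.0

# Option A: keep the two cents. You definitely can't get home.
eu_keep = utility(2)  # 0.0

# Option B: a long shot that turns 2 cents into the full fare with
# (hypothetical) probability 0.01 and loses everything otherwise.
p_win = 0.01
eu_bet = p_win * utility(100) + (1 - p_win) * utility(0)  # 0.01

print(eu_keep, eu_bet)  # 0.0 0.01 -- the bet wins despite awful odds
```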
I just… I don’t… Why do people feel the need to do this on Eliezer’s posts so much? Why can he not make a single statement without somebody finding some obscure, improbable, irrelevant exception and loudly trumpeting it? Going through the sequence reruns, it’s appalling how many people in the comments from the OB days just seem to be wilfully missing the point for the sake of generating a defensible disagreement.
Is it a desire to demonstrate how clever they are by being contrary, even if the disagreement is over some wholly irrelevant nitpick? Do they really think that the existence of imaginable but implausible exceptions is important? Is it just extreme, unrestrained pedantry?
All it does is pollute the discussion. If you don’t believe the exceptions to the “buying lottery tickets is stupid” rule are common enough to be significant, and you don’t believe that Eliezer thinks so either, and you don’t believe anybody reading the post is going to be adversely affected by Eliezer’s failure to explicitly mention these contrived exceptions to the rule, then why even bring it up?
ETA: And if you do believe any of those things, why?
The original post asks:
Seriously, why can’t we just say that buying lottery tickets is stupid?
The answer is: because that would be a silly over-generalisation. Gambling is sometimes a rational course of action. It is good for people to be aware of that—in case they find themselves needing to gamble—a common circumstance—especially for males.
Roulette is often better than buying lottery tickets for large sums—due to taxation issues. However, in some countries, there’s a government-run lottery and many other forms of gambling are illegal. Note that the economist Robin Hanson made exactly the same point as me here.
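A rough expected-return comparison, using commonly cited figures (a 5.26% house edge for a single-number bet in American roulette, and lotteries paying out roughly half of ticket sales); both are approximations, not claims about any particular game:

```python
# Expected pre-tax return per $100 staked, under the rough figures above.

stake = 100.0

# American roulette, single-number bet: pays 35:1 and wins with
# probability 1/38, so the expected return is stake * 36/38.
roulette_return = stake * 36 / 38

# A typical lottery that pays out ~50% of ticket sales.
lottery_return = stake * 0.50

print(f"Roulette: ~${roulette_return:.2f} back per $100 staked")  # ~$94.74
print(f"Lottery:  ~${lottery_return:.2f} back per $100 staked")   # ~$50.00
```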
Can you confidently assert (p > 0.8) that, since the advent of modern lotteries, at least a thousand people have arrived independently in circumstances under which buying lottery tickets was a non-stupid action? What kind of circumstances were they?
Can you confidently assert (p > 0.8) that, since the advent of modern lotteries, at least a thousand people have arrived independently in circumstances under which buying lottery tickets was a non-stupid action?
That sounds reasonable to me—though it is not what I claimed. For instance, if someone in authority tells you to buy a lottery ticket for them.
This is not in fact the situation of most lottery players. When you’re lower-middle class (and not in need of expensive treatment or fighting x-risk), you want low variance, because you’re slightly above some bad thresholds like homelessness. It makes sense to gamble if you’re dirt poor, though.
Source is just talking to people who buy lottery tickets, so N is small. Do you have more data?
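The low-variance preference is just concavity: with a concave utility of wealth (log, in this sketch), even an actuarially fair gamble lowers expected utility, so someone sitting a little above the bad thresholds should decline it. Toy numbers:

```python
import math

def u(wealth):
    # Concave (log) utility of wealth.
    return math.log(wealth)

wealth = 10_000.0  # comfortably above the bad thresholds
stake = 1_000.0

eu_no_bet = u(wealth)
# A fair coin flip for the stake: zero expected monetary change.
eu_bet = 0.5 * u(wealth + stake) + 0.5 * u(wealth - stake)

print(eu_no_bet > eu_bet)  # True: concavity penalizes variance
```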
This is not in fact the situation of most lottery players.
I never said it was.
Do you have more data?
More data—about what? Surely my comment simply stated the totally obvious—that sometimes it pays to gamble. You apparently agree with this point in your reply. So—why do you think this thesis requires “more data”? What aspect of it do you think requires additional support?
Oh, okay. I extrapolated from
The conditions of being stuffed—unless you have a lot of money—may not be that uncommon
that you thought a sizeable proportion of lottery players were in this situation. Apologies for Gricean failure.
So, do we agree that while there are odd situations where humans should play the lottery (and odd minds that value tiling the universe with lottery tickets), people who play the lottery are in fact being stupid? (Or maybe you’re agnostic with respect to their stupidity? If so, that requires more data, given the perceived incidence of the short-on-train-fare situation.)
If the opportunity to fantasize about sexual intercourse were a rational justification for masturbation, we could design a “Hands-Free Masturbation Machine”: buy such a machine once and get unlimited stimulation. You could then use your hands to do the washing-up or play pick-up sticks.
It exists. I won’t link to it. (Also, few people are that good at multitasking.)
All it does is pollute the discussion.
I generally agree with your post, but this phrasing is too strong.
It’s a net bad, but there are good consequences.
The whole expected utility maximization idea is completely inhuman.
Not really, IMO. You can model any agent in a utility maximization framework.
That’s one of the results in this paper.
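For what it's worth, the generic version of that construction is simple (a sketch, not necessarily the paper's formulation): take any policy at all and define a utility function that assigns 1 to exactly the actions the policy takes; the policy then maximizes expected utility by definition.

```python
# Trivial construction: any agent's behavior can be rationalized as
# utility maximization by scoring 1 for exactly the actions it takes.

def make_utility(policy):
    def u(observation, action):
        return 1.0 if action == policy(observation) else 0.0
    return u

# An arbitrary rule -- here, "pick the alphabetically first option".
policy = lambda obs: min(obs["options"])
u = make_utility(policy)

obs = {"options": ["lottery", "index fund"]}
chosen = max(obs["options"], key=lambda a: u(obs, a))
print(chosen)  # 'index fund' -- whatever the policy picks gets utility 1
```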