From a recent newspaper story:
http://www.msnbc.msn.com/id/38229644/ns/us_news-life
The odds that Joan Ginther would hit four Texas Lottery jackpots for a combined $21 million are astronomical. Mathematicians say the chances are as slim as 1 in 18 septillion — that’s 18 and 24 zeros.
I haven’t checked this calculation at all, but I’m confident that it’s wrong, for the simple reason that it is far more likely that some “mathematician” gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?
It seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24.
Of course, the chances of “Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4” are also one in 10^24, and this happens every four weeks.
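To make that concrete, here is a minimal sketch in Python, assuming the 1-in-10^6 per-ticket chance used above and a made-up figure of one million tickets sold per drawing (illustrative numbers only, not the real Texas Lottery odds):

```python
# Illustrative only: p_ticket and tickets_sold are assumptions, not the
# actual Texas Lottery parameters.
p_ticket = 1e-6           # assumed chance one particular ticket wins one drawing
tickets_sold = 10**6      # assumed tickets sold per drawing

# Four tickets specified in advance each winning their drawing:
p_four_specified = p_ticket ** 4
print(p_four_specified)   # 1e-24, i.e. one in 10^24

# By contrast, *some* ticket wins a given drawing quite often, so the
# conjunction "X won week 1 AND Y won week 2 AND Z won week 3 AND W won
# week 4", with X, Y, Z, W named only after the fact, happens every four
# weeks despite its 10^-24 prior probability for any fixed foursome.
p_someone_wins = 1 - (1 - p_ticket) ** tickets_sold
print(p_someone_wins)     # ~0.63 with these toy numbers
```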
From the article (there is a near-invisible “more text” button):
Calculating the actual odds of Ginther hitting four multimillion-dollar lottery jackpots is tricky. If Ginther’s winning tickets were the only four she ever bought, the odds would be one in 18 septillion, according to Sandy Norman and Eduardo Duenez, math professors at the University of Texas at San Antonio.
And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)…
I did see an analysis of this somewhere; I’ll try and dig it up. Here it is. There is Hacker News commentary here.
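One way to see why the headline number is misleading is a calculation of the following shape. Every number in it is a made-up assumption for illustration, and the answer swings by orders of magnitude as they change, which is exactly the problem with a figure that assumes she only ever bought four tickets:

```python
import math

# All of these are made-up assumptions for illustration only.
p_win = 1e-6                 # assumed per-ticket chance of a jackpot-level prize
tickets_per_player = 3000    # assumed lifetime tickets for one heavy player
n_heavy_players = 10**6      # assumed number of comparably heavy players, all lotteries, all years

# Expected jackpots for one heavy player, and (via a Poisson approximation)
# the chance that such a player wins four or more.
mean_wins = p_win * tickets_per_player
p_four_plus = 1 - sum(math.exp(-mean_wins) * mean_wins**k / math.factorial(k)
                      for k in range(4))

# Chance that at least one of the many heavy players manages it.
p_somebody = 1 - (1 - p_four_plus) ** n_heavy_players
print(p_four_plus, p_somebody)

# The point is not the particular output (it moves by many orders of
# magnitude as the assumptions change) but that the relevant question is
# about many players buying many tickets, not about four tickets bought
# in isolation, which is the scenario behind "1 in 18 septillion".
```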
I find this, from the original msnbc article, depressing:
After all, the only way to win is to keep playing. Ginther is smart enough to know that’s how you beat the odds: she earned her doctorate from Stanford University in 1976, then spent a decade on faculty at several colleges in California.
Is it depressing because someone with a Ph.D. in math is playing the lottery, or depressing because she must have figured out something we don’t know, given that she’s won four times?
The former. It is also depressing because it can be used in articles on the lottery in the following way: “See, look at this person who is good at maths playing the lottery; that must mean it is a smart thing to play the lottery.”
Depressing because someone with a Ph.D. in math is playing the lottery. I don’t see any reason to think she figured out some way of beating the lottery.
It’s also far more likely that she cheated. Or that there is a conspiracy in the Lottery to make her win four times.
The most eyebrow-raising part of that article:
After all, the only way to win is to keep playing. Ginther is smart enough to know that’s how you beat the odds: she earned her doctorate from Stanford University in 1976, then spent a decade on faculty at several colleges in California.
I haven’t checked this calculation at all, but I’m confident that it’s wrong, for the simple reason that it is far more likely that some “mathematician” gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth.
Hm. Have you looked at the multiverse lately? It’s pretty apparent that something has gone horribly weird somewhere along the way. Your confidence should be limited by that dissonance.
It’s the same with MWI, and cryonics, and moral cognitivism, and any other belief where your structural uncertainty hasn’t been explicitly conditioned on your anthropic surprise. I’m not sure to what extent your implied confidence in these matters is pedagogical rather than indicative of your true beliefs. I expect mostly pedagogical? That’s probably fine and good, but I doubt such subtle epistemic manipulation for the public good is much better than the Dark Arts.
(Added: In this particular case, something less metaphysical is probably amiss, like a math error.)
So let me try to rewrite that (and don’t be afraid to call this word salad):
(Note: the following comment is based on premises which are very probably completely unsound and unusually prone to bias. Read at your own caution and remember the distinction between impressions and beliefs. These are my impressions.)
You’re Eliezer Yudkowsky. You live in a not-too-far-from-a-Singularity world, and a Singularity is a BIG event, decision-theoretically and fun-theoretically speaking. Isn’t it odd that you find yourself at this time and place, given all the people in your reference class you could have found yourself as? Isn’t that unsettling? Now, if you look out at the stars and galaxies and seemingly infinite space (though you can’t see that far), it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive. And yet you find yourself as Eliezer Yudkowsky (staring at a personal computer, no less) in a close-to-Singularity world: surely some extra parameters must have been thrown into the description of this universe; surely your experience is not best described by a universal prior alone, but rather by a universal prior plus some mixture of agents computing things according to their preferences. In other words, this universe looks conspicuously like it has been optimized around Eliezer-does-something-multiversally-important. (I suppose this should also up your probability that you’re a delusional narcissist, but there’s not much to do about that.)
Now, if such optimization pressures exist, then one has to question some reductionist assumptions: if this universe gets at least some of its measure from the preferences of simulator-agents, then what features of the universe would be affected by those preferences? Computational cost is one. MWI implies a really big universe, and what are the chances that you would find yourself where you are in a really big universe as well as finding yourself in a conspicuously-optimized-seeming universe? Seemingly the two hypotheses are at odds. And what about cryonics? Do you really expect to die in a universe that seems to be optimized for having you around doing interesting things? (The answer to that could very well be yes, especially if your name is Light.) And when you have simulators in the picture, with explicit values, perhaps they have encoded rightness and wrongness into the fabric of reality via selectively pruning multiverse branches or something. Heaven knows what the gods do for fun.
These are of course ridiculous ideas, but ridiculous ideas that I am nonetheless hesitant to assign negligible probability to.
Maybe you’re a lot less surprised to find yourself in this universe than I am, in which case none of my arguments apply. But I get the feeling that something awfully odd is going on, and this makes me hesitant to be confident about some seemingly basic reductionist conclusions. Thus I advise you to buy a lottery ticket. It’s the rational thing to do.
(Note: Although I personalized this for Eliezer, it applies to pretty much everyone to a greater or lesser degree. I remember (perhaps a secondhand and false memory, so don’t take it too seriously) that at some point Michael Vassar was really confused about why he didn’t find himself as Eliezer Yudkowsky. I think the answer I would have thought up if I were him is that Michael Vassar is more decision-theoretically multiversally important than Eliezer. Any other answer makes the question appear silly. Which it might be.)
(Alert to potential bias: I kinda like to be the contrarian-contrarian. Cryonics is dumb, MWI is wrong, buying a lottery ticket is a good idea, moral realism is a decent hypothesis, anthropic reasoning is more important than reductionist reasoning, CEV-like things won’t ever work and are ridiculously easy to hack, TDT is unlikely to lead to any sort of game theoretic advantage and precommitments not to negotiate with blackmailers are fundamentally doomed, winning timeless war is more important than facilitating timeless trade, the Singularity is really near, religion is currently instrumentally rational for almost everyone, most altruists are actually egoists with relatively loose boundaries around identity, et cetera, et cetera.)
It all adds up to normality, damn it!
What whats to what?
More seriously, that aphorism begs the question. Yes, your hypothesis and your evidence have to be in perfectly balanced alignment. That is, from a Bayesian perspective, tautological. However, it doesn’t help you figure out how it is exactly that the adding gets done. It doesn’t help distinguish between hypotheses. For that we need Solomonoff’s lightsaber. I don’t see how saying “it (whatever ‘it’ is) adds up to (whatever ‘adding up to’ means) normality (which I think should be ‘reality’)” is at all helpful. Reality is reality? Evidence shouldn’t contradict itself? Cool story bro, but how does that help me?
http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/22cp?c=1
it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive.
This is rather tangential to your point, but the universe looks very computationally cheap to me. In terms of the whole ensemble, quantum mechanics is quite cheap. It only looks expensive to us because we measure by a classical slice, which is much smaller. But even if we call it exponential, that is very quick by the standards of the Solomonoff prior.
Hm, I’m not sure I follow: both a classical and a quantum universe are cheap, yes, but if you’re using a speed prior, or any prior that takes computational expense into account, then it’s the cost of the universes relative to each other that helps us determine which universe we expect to find ourselves in, not their cost relative to all possible universes.
I could very, very well just be confused.
Added: Ah, sorry, I think I missed your point. You’re saying that even infinitely large universes seem computationally cheap in the scheme of things? I mean, compared to all possible programs in which you would expect life to evolve, the universe looks hugeeeeeee to me. It looks infinite, and there are tons of finite computations… when you compare anything to the multiverse of all things, that computation looks cheap. I guess we’re just using different scales of comparison: I’m comparing to finite computations, you’re comparing to a multiverse.
No, that’s not what I meant; I probably meant something silly in the details, but I think the main point still applies. I think you’re saying that the size of the universe is large compared to the laws of physics. To which I still reply: not large by the standards of computable functions.
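For concreteness, here is a toy version of the universal-prior-versus-speed-prior contrast in this exchange. The program lengths and runtimes are invented, and the runtime penalty is just a crude Levin-style charge of log2(runtime) extra bits, not Schmidhuber’s actual speed prior:

```python
import math

# Two invented candidate "universes", each summarized by a description
# length in bits and a runtime in elementary steps.
programs = {
    "short_but_expensive": {"length_bits": 400, "runtime_steps": 2**120},
    "longer_but_cheap":    {"length_bits": 450, "runtime_steps": 2**40},
}

def universal_weight(p):
    # Solomonoff-style weighting: 2^-length; runtime is ignored entirely,
    # which is the sense in which exponential runtime is "very quick by
    # the standards of the Solomonoff prior".
    return 2.0 ** -p["length_bits"]

def runtime_penalized_weight(p):
    # Crude speed-prior-flavoured weighting: charge log2(runtime) extra bits.
    return 2.0 ** -(p["length_bits"] + math.log2(p["runtime_steps"]))

for name, p in programs.items():
    print(name, universal_weight(p), runtime_penalized_weight(p))

# Under 2^-length alone the short-but-expensive program wins by 2^50
# (400 vs. 450 bits); once runtime is charged, the totals are 520 vs. 490
# bits and the ordering flips.  Which weighting you use decides which kind
# of universe you should expect to find yourself in.
```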
Whowha?
Er, sorry, I’m guessing my comment came across as word salad?
Added: Rephrased and expanded and polemicized my original comment in a reply to my original comment.
Yeah I didn’t get it either.
Hm. It’s unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they’re fit for general consumption. I’ll try to rewrite that whole comment when I’m less tired.
Illusion of transparency: they can probably generate sense in response to anything, but it’s not necessarily a faithful translation of what you say.
Consider that one of my two posts, Abnormal Cryonics, was simply a narrower version of what I wrote above (structural uncertainty is highly underestimated) and that Nick Tarleton wrote about a third of that post. He understood what I meant and was able to convey it better than I could. Also, Nick Tarleton is quick to call bullshit if something I’m saying doesn’t seem to be meaningful, which is a wonderful trait.
Well, that was me calling bullshit.
Thanks! But it seems you’re being needlessly abrasive about it. Perhaps it’s a cultural thing? Anyway, did you read the expanded version of my comment? I tried to be clearer in my explanation there, but it’s hard to convey philosophical intuitions.
I find myself unable to clearly articulate what’s wrong with your idea, but in my own words, it reads as follows:
“One should believe certain things to be probable because those are the kinds of things that people believe through magical thinking.”
The problem with that idea is that there is no default level of belief. You are not allowed to say:
These are of course ridiculous ideas, but ridiculous ideas that I am nonetheless hesitant to assign negligible probability to.
What is the difference between hesitating to assign negligible probability and hesitating to assign non-negligible probability? Which way is certainty, which way is doubt? If you don’t have a good understanding of why you should believe one way or the other, you can’t appoint a direction in which the safe level of credence lies and stay there pending enlightenment.
Your argument is not strong enough to shift a belief of one in a septillion up to something believable, but it would have to be that strong to do so. You can’t appeal to being hesitant to believe otherwise; that is not a strong argument, but a statement about not having one.
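As a back-of-the-envelope version of “your argument must be that strong”: taking the article’s 1-in-18-septillion figure at face value, raising it to even odds requires a likelihood ratio of about 1.8 × 10^25, roughly 84 bits of evidence, and a statement of hesitation supplies essentially none of them.

```python
import math

# Taking the article's figure at face value purely for illustration.
prior_probability = 1 / 1.8e25         # "1 in 18 septillion"
prior_odds = prior_probability / (1 - prior_probability)

target_odds = 1.0                      # even odds, i.e. 50% posterior credence

# Bayes in odds form: posterior odds = prior odds * likelihood ratio,
# so the required likelihood ratio is their quotient.
required_likelihood_ratio = target_odds / prior_odds
required_bits = math.log2(required_likelihood_ratio)
print(required_likelihood_ratio)       # ~1.8e25
print(required_bits)                   # ~83.9 bits of evidence
```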
Was your point that Eliezer’s Everett Branch is weird enough already that it shouldn’t be that surprising if universally improbable things have occurred?
Erm, uh, kinda, in a more general sense. See my reply to my own comment where I try to be more expository.
I’m afraid it is word salad.