Actually, no, improper priors such as you suggest are not part of the foundations of Bayesian probability theory. It’s only legitimate to use an improper prior if the result you get is the limit of the results you get from a sequence of progressively more diffuse priors that tend to the improper prior in the limit. The Marginalization Paradox is an example where just plugging in an improper prior without considering the limiting process leads to an apparent contradiction. My analysis (http://ksvanhorn.com/bayes/Papers/mp.pdf) is that the problem there ultimately stems from non-uniform convergence.
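To illustrate what that limiting requirement looks like in the simplest case, here is a minimal sketch of my own (a standard normal location model, not the marginalization-paradox example from the paper linked above): the posterior you would write down by plugging in a flat improper prior is only trustworthy because it is the limit of posteriors from increasingly diffuse proper priors.

```python
# Minimal sketch (my own example, not the one from the paper): a normal
# location model x_i ~ N(theta, sigma^2) with sigma known. The posterior
# under a proper N(0, tau^2) prior converges, as tau grows, to the
# (xbar, sigma^2/n) posterior obtained from a flat improper prior.
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 2.0, 10
x = rng.normal(loc=1.5, scale=sigma, size=n)
xbar = x.mean()

def posterior(tau):
    """Posterior mean and variance of theta under a N(0, tau^2) prior (conjugate formulas)."""
    prec = n / sigma**2 + 1 / tau**2
    return n * xbar / sigma**2 / prec, 1 / prec

for tau in (1.0, 10.0, 100.0, 1000.0):
    print(tau, posterior(tau))

# The flat-prior answer these converge to:
print("limit:", xbar, sigma**2 / n)
```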
I’ve had some email discussions with Scott Aaronson, and my conclusion is that the Dice Room scenario really isn’t an appropriate metaphor for the question of human extinction. There are no anthropic considerations in the Dice Room, and the existence of a larger population from which the kidnap victims are taken introduces complications that have no counterpart when discussing the human extinction scenario.
You could formalize the human extinction scenario with unrealistic parameters for growth and generational risk as follows:
Let n be the number of generations for which humanity survives.
The population in each generation is 10 times as large as the previous generation.
There is a risk 1⁄36 of extinction in each generation. Hence, P(n = N+1 | n >= N+1) = 1⁄36.
You are a randomly chosen individual from the entirety of all humans who will ever exist. Specifically, P(you belong to generation g | n) = 10^g / Z, where Z is the sum of 10^t for 1 <= t <= n.
Analyzing this problem, I get
P(extinction occurs in generation t | extinction no earlier than generation t) = 1⁄36
P(extinction occurs in generation t | you are in generation t) = about 9⁄10
That’s a vast difference depending on whether or not we take into account anthropic considerations.
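To make the gap concrete, here is a minimal sketch of my own that evaluates both conditional probabilities under exactly the assumptions listed above (geometric hazard 1⁄36 for n, generation g has 10^g people, and "you" are uniform over everyone who ever lives):

```python
# Sketch of the toy model above: geometric prior on the number of surviving
# generations n with hazard 1/36, generation g has 10^g people, and "you"
# are a uniformly random member of everyone who ever lives.
from fractions import Fraction

RISK = Fraction(1, 36)      # P(extinction in a generation | survival so far)
SURVIVE = 1 - RISK

def total_pop(n):
    """Z: total number of people if humanity lasts n generations (sum of 10^t, t=1..n)."""
    return (10**(n + 1) - 10) // 9

def prior_n(n):
    """Prior P(n = k): geometric with per-generation hazard RISK."""
    return RISK * SURVIVE**(n - 1)

def p_doom_given_your_generation(t, n_max=200):
    """P(extinction occurs in generation t | you are in generation t).

    Terms in the sum fall off like (35/360)^n, so truncating at n_max=200
    changes nothing visible.
    """
    numer = prior_n(t) * Fraction(10**t, total_pop(t))
    denom = sum(prior_n(n) * Fraction(10**t, total_pop(n)) for n in range(t, n_max))
    return numer / denom

print(float(RISK))                                # 0.0278 -- the hazard-rate answer
print(float(p_doom_given_your_generation(5)))     # ~0.9028 -- the anthropic answer
```

The second number is essentially 1 - (35/36)/10 ≈ 0.903, which, as far as I can tell, is where the "about 9⁄10" comes from.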
The Dice Room analogy would be if the madman first rolled the dice until he got snake-eyes, then went out and kidnapped a bunch of people, randomly divided them into n batches, each 10 times larger than the previous, and murdered the last batch. This is a different process than what is described in the book, and results in different answers.
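For comparison, a rough Monte Carlo of that roll-first process (my own sketch of it, with the same 10× batch sizes) gives roughly the anthropic answer rather than 1⁄36:

```python
# Rough Monte Carlo sketch (my reading of the "roll first" process): the madman
# rolls until snake-eyes, getting n, then kidnaps batches of sizes 10, 100, ...,
# 10^n and murders only the last batch. Estimate the probability that a
# uniformly chosen victim from a run is murdered.
import random

def one_run():
    n = 1
    while random.random() >= 1 / 36:     # roll until snake-eyes
        n += 1
    total = (10**(n + 1) - 10) // 9      # 10 + 100 + ... + 10^n victims
    victim = random.randrange(total)     # pick one victim uniformly
    return victim >= total - 10**n       # True iff in the final (murdered) batch

runs = 100_000
print(sum(one_run() for _ in range(runs)) / runs)   # about 0.9, not 1/36
```

This is consistent with the claim that the two processes give different answers.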
Thanks, interesting reading.
Fundamental or not, I think my point still stands that “the prior is infinite so the whole thing’s wrong” isn’t quite enough of an argument, since you still seem to conclude that improper priors can be used if handled carefully enough. A more satisfying argument would be to demonstrate that the 9⁄10 case can’t be made without incorrect use of an improper prior. Though I guess it still shows where the problem most likely is, which is helpful.
As far as being part of the foundations goes, I was just going by the fact that it’s in Jaynes, but you clearly know a lot more about this topic than I do. I would be interested to know your answers to the following questions, though: “Can a state of ignorance be described without the use of improper priors (or something mathematically equivalent)?”, and “Can Bayesian probability be used as the foundation of rational thought without describing states of ignorance?”.
On the Doomsday argument, I would only take the Dice Room as a metaphor, not a proof of anything, but it does help me realise a couple of things. One is that the setup you describe, with a population that can potentially keep growing exponentially without bound, is not a reasonable model of reality (irrespective of the parameters themselves). The growth has to stop, or at least converge, at some point, even without a catastrophe.
It’s interesting that the answer changes if he rolls the dice first. I think ultimately the different answers to the Dice Room correspond to different ways of handling the infinite population correctly, i.e. different ways of taking the limit of finite populations. For any finite population there needs to be an answer to “what does he do if he doesn’t roll snake-eyes in time?”, and different choices lead to different answers, even though you might expect the choice to stop mattering in the limit.
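As a concrete illustration (my own sketch, with two made-up fallback rules), suppose the process is capped at T batches. If the madman releases everyone when he runs out of time, the pooled probability of being murdered given being kidnapped is exactly 1⁄36 for every T; if he murders the final batch anyway, it tends to about 9⁄10 as T grows, even though the chance of the fallback ever being invoked, (35⁄36)^T, goes to zero.

```python
# Hypothetical sketch (my own, not from the book or the comments above):
# P(murdered | kidnapped) for a Dice Room capped at T batches, under two
# different rules for what the madman does if snake-eyes never comes up.
from fractions import Fraction

SNAKE_EYES = Fraction(1, 36)
SURVIVE = 1 - SNAKE_EYES

def p_murdered_given_kidnapped(T, murder_last_batch_anyway):
    """Pooled ratio E[# murdered] / E[# kidnapped] with at most T batches.

    Batch k has 10^k people and is kidnapped iff the first k-1 rolls were
    not snake-eyes. Rule A (True): batch T is killed whenever it is reached.
    Rule B (False): batch T is released unless snake-eyes actually comes up.
    """
    kidnapped = sum(SURVIVE**(k - 1) * 10**k for k in range(1, T + 1))
    murdered = sum(SURVIVE**(k - 1) * SNAKE_EYES * 10**k for k in range(1, T + 1))
    if murder_last_batch_anyway:
        # Upgrade batch T's 1/36 chance of death to certain death if reached.
        murdered += SURVIVE**(T - 1) * SURVIVE * 10**T
    return murdered / kidnapped

for T in (5, 20, 80):
    print(T,
          float(p_murdered_given_kidnapped(T, False)),   # exactly 1/36 for every T
          float(p_murdered_given_kidnapped(T, True)))    # approaches 0.9
```

The choice of fallback rule looks like it should wash out in the limit, yet it decides which of the two answers the limit gives.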
If the dice having already been rolled is the best analogy for the Doomsday argument, then it’s making quite particular statements about causality and free will.