It looks like your reasoning is incorrect. What your equations are really saying is “you’re more likely to live to year N if you build safety systems earlier.” That is, your year Y isn’t (just) the “present moment,” it’s “the year you build an asteroid deflector.” However, you do not show that, given that our existential risk can decrease dramatically in the future, we should expect to have a long future ahead of us.
Also:
The above argument demonstrates why the choice of ‘reference class’ matters. If the risk is constant per unit time, then the correct reference class is units of time. If the risk is constant per birth, then the correct reference class is births.
The risk isn’t what matters to the reference class. The reference class does refer to some class with constant probability. But that probability is not the probability of existential risk. It is the probability that I am in some state, given some information about me. Unless that information is “I am about to die from an existential threat,” these probabilities are not the same, and so the existential risk will not be constant over the reference class for you.
Is your first objection that, in my scenario, the decrease in lambda occurs in the present year, while Leslie assumes that the decrease in lambda will not occur until 150 years from now? That’s a fair issue to raise. In my book, I work through numerical examples in detail (using Poisson processes instead of binomial processes), including an example using plausible numbers based on Leslie’s own scenario, and I also identify more general mathematical formulas. But I will try to defend the basic ideas here.
Suppose I amend my scenario as follows: the asteroid destroyer will not be completed until 150 years from now.
Recall that the Doomsday Argument (DA) invokes the Self-Sampling Assumption (SSA) twice: once for a small N and once for a much larger N. Since the case of the larger N is simpler to deal with, I will consider that case first.
For the larger N, change the exponents in the last equation in my addendum from Y and N–Y to Y+150 and N–Y–150, respectively, when N ≥ Y+150; when N < Y+150, also change each ‘r’ to a ‘q’. (Note also that today I corrected the last equation in my addendum, changing the final ‘q’ to an ‘r’.) The same conclusion holds as before: given this very large N, it is very likely that we are near the beginning of the window of safety, contradicting SSA.
The case for the smaller N is more complicated, and depends on the relative values of q, r, N, and 150. In general, SSA will be much less wrong (no pun intended) for the small N. But recall that DA takes the ratio of the two applications of SSA. The fact that the application for large N is much more wrong than the application for small N makes the ratio very wrong.
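For readers who want to check the large-N case numerically, here is a minimal sketch. Since the addendum is not reproduced in this thread, I take its last equation to have the form (1−q)^Y · (1−r)^(N−Y) · r, which is what the corrections described above amount to; the values of q, r, and N are purely illustrative, not Leslie’s.

```python
import numpy as np

# Illustrative values only: q is the per-year strike risk before the asteroid
# destroyer exists, r is the (much smaller) risk once it is completed,
# 150 years after the present year Y.
q, r, delay = 0.01, 1e-7, 150
N = 10_000  # a "very large" total lifetime, in years

Y = np.arange(1, N + 1)
# Amended likelihood: (1-q)^(Y+150) * (1-r)^(N-Y-150) * r  when N >= Y+150;
# when N < Y+150, each 'r' becomes a 'q', which collapses to (1-q)^N * q.
L = np.where(
    N >= Y + delay,
    (1 - q) ** (Y + delay) * (1 - r) ** np.maximum(N - Y - delay, 0) * r,
    (1 - q) ** N * q,
)
P = L / L.sum()  # distribution over our rank Y, given this very large N

print(P[0])            # ~0.01: about 100 times the 1/N = 0.0001 that SSA asserts
print(P[:1000].sum())  # ~0.99996: nearly all the mass sits at small Y
```

SSA would put probability 1/N on every rank; instead, virtually all of the probability lies near the beginning of the window of safety, which is the contradiction I am pointing to.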
The risk isn’t what matters to the reference class.
I think I have shown that the risk is what determines the correct choice of reference class. But I do not claim to be the only one to have identified this fact. Gott (1994) makes a similar point. Willard Wells (Apocalypse When?: Calculating How Long the Human Race Will Survive. Berlin: Springer Praxis, 2009, p.37) makes the same point when he writes, “When bias nullifies the principle of statistical indifference, look for a different variable that spreads the risk evenly and thereby restores the principle.”
The reference class does refer to some class with constant probability.
I take it you mean that if there are N members in the reference class, then the Self-Sampling Assumption asserts that the probability P that the present member has rank Y is 1/N for all Y in [1,N].
But that probability is not the probability of existential risk.
Do you think I’m claiming that the existential risk q per unit in the reference class is equal to 1/N? Of course I am not claiming that; indeed, such a claim would be meaningless, since there is an entire range of possible values for N given any particular q. But I have shown that, for a binomial or Poisson process, constant q implies constant P, and vice versa. Therefore, if you invoke the Self-Sampling Assumption for a reference class for which q is not constant, you are asserting a contradiction. Conversely, if you want to use a uniform-distribution assumption akin to the Self-Sampling Assumption, it is the assessment of the risk that should determine the choice of reference class. (I cover the issue of the reference class in much depth in my book.) The failure to understand this fact is, in my judgment, the reason that Leslie and Bostrom have not been able to solve the problem of the reference class.
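A quick way to see the equivalence numerically, using the same illustrative piecewise likelihood as in the sketch above: when r = q the risk is constant and the distribution over Y is exactly the uniform 1/N that SSA asserts; when r ≠ q it is not even close.

```python
import numpy as np

def posterior_over_Y(q, r, N, delay=150):
    # P(Y | N) when the per-year risk is q until `delay` years after the
    # present rank Y and r from then on, as in the amended scenario above.
    Y = np.arange(1, N + 1)
    L = np.where(
        N >= Y + delay,
        (1 - q) ** (Y + delay) * (1 - r) ** np.maximum(N - Y - delay, 0) * r,
        (1 - q) ** N * q,
    )
    return L / L.sum()

N = 10_000
print(posterior_over_Y(0.01, 0.01, N)[:3])  # constant risk: 1/N each, so SSA holds
print(posterior_over_Y(0.01, 1e-7, N)[:3])  # varying risk: far from uniform
```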
Is your first objection that, in my scenario, the decrease in lambda occurs in the present year, while Leslie assumes that the decrease in lambda will not occur until 150 years from now?
No. The trouble is that your argument about “greater when Y is closer to the beginning” hinges on imagining varying the value of Y—moving it around. Currently, when you move around the year Y, you are moving not just the present year but also the year when asteroid defense is built. What I would like to see (among other things) is you moving the present year around without moving the year when asteroid defense is built.
I appear to be using a different meaning of “self-sampling assumption” than you. Rather than worrying about it, do we agree that when used correctly it’s just an expression of underlying statistics? Then we could just talk in terms of statistics and not have to worry about the definition.
Your comment touches on the crux of the matter.

Of course, what is moving and what is fixed depends on the point of reference. In my analysis, I take the present as the fixed point of reference. When I vary the unknown Y, I am varying the unknown number of years ago when the last asteroid strike occurred. The time when the asteroid destroyer is built remains fixed at 150 years after the present.
Keep in mind the first error I noted in my post. Leslie starts with prior information and prior probabilities about future births, not total births. Leslie assumes that mankind will be able to colonize the galaxy 150 years—equivalently, roughly 20 billion births—from now, regardless of how many unknown births have already occurred.
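As a rough consistency check on those figures (assuming an average on the order of 130 million births per year, a ballpark I am supplying here rather than Leslie’s exact number):

```python
births_per_year = 1.3e8       # assumed ballpark figure, not Leslie's exact number
print(150 * births_per_year)  # 1.95e10, i.e. roughly 20 billion births
```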
What I would like to see (among other things) is …
I am new to Less Wrong, and so I’m not as familiar as you are with how things are done on this site. Manfred, I do appreciate your comments and your interest in my thesis. But I think that, at some point, scholarship demands that you turn to original sources, which in this case include my e-book. If I thought I could give a complete argument in a discussion post, I would not have written a whole book. Rather than my trying to recapitulate the book in a haphazard order and with impromptu formulations based on your particular questions, don’t you think it would be more productive for us to refer to the book? Or at the least, you could refer to my published paper (upon which the book is based), which is part of the peer-reviewed literature on the subject. The book is clearer than the paper though, and I have already offered the book to LWers such as you for free. (Only one LWer has taken me up on the offer.) The book is only 22,000 words long, and I think you would have little trouble homing in on the sections and formulations that interest you most.
I appear to be using a different meaning of “self-sampling assumption” than you. Rather than worrying about it, do we agree that when used correctly it’s just an expression of underlying statistics?
If it were used ‘correctly’, yes, SSA would just be one perspective on the prior information. But I know of no real-life applications of the Doomsday Argument in which SSA has been used ‘correctly’.
Added 8/19/2011: On second thought, I really do not know what you mean by this statement without your providing context. I think SSA is wrong, at least as it has been used in the Doomsday Argument; I don’t know what it would mean to use correctly something that is wrong. To be as charitable as possible, I could say that if the mathematical formulas implied by SSA happened to match up with the prior information in a given case (and I have never seen such a case related to DA), then SSA would just be one perspective on the prior information.
I do not know how to get nice formatting into a comment, so I will try to address your question in an addendum to my original post.
Thanks.