When we try to estimate the number of technological civilizations that evolved on main-sequence stars in our past light cone, we must not use the presence of at least one tech civ (namely, us) as evidence of the presence of another one (namely, ET). If that first tech civ had not evolved, we would not exist, and so we would have no way to observe that outcome.
If there were two universes, one very likely to evolve life and one very unlikely, and all we knew was that we existed in one, then we are much more likely to exist in the first universe. Hence our own existence is evidence about the likelihood of life evolving, and there still is a Fermi paradox.
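The update in that two-universe hypothetical can be made concrete with a quick Bayes calculation. Every number below is invented purely for illustration:

```python
# Toy sketch of the two-universe update, with made-up numbers:
# universe A is very likely to evolve life, universe B very unlikely,
# and our prior over which one we are in is even.
p_life = {"A": 0.9, "B": 0.001}
prior = {"A": 0.5, "B": 0.5}

# Bayes: P(universe | we exist) is proportional to
# P(we exist | universe) * P(universe).
joint = {u: p_life[u] * prior[u] for u in prior}
total = sum(joint.values())
posterior = {u: joint[u] / total for u in joint}
print(posterior)  # almost all the mass lands on universe A
```

With these numbers the posterior on universe A is about 0.9989, which is the sense in which "our own existence is evidence about the likelihood of life evolving."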
If there were two universes, one very likely to evolve life and one very unlikely, and all we knew was that we existed in one, then we are much more likely to exist in the first universe.
Agree.
Hence our own existence is evidence about the likelihood of life evolving [in the situation in which we find ourselves].
Disagree because your hypothetical situation requires a different analysis than the situation we find ourselves in.
In your hypothetical, we have somehow managed to acquire evidence for the existence of a second universe and to acquire evidence that life is much more likely in one than in the other.
Well, let us get specific about how that might come about.
Our universe contains gamma-ray bursters that probably kill any pre-intelligence-explosion civilization within ten light-years or so of them, and our astronomers have observed the rate * density at which these bursters occur.
Consequently, we might discover that one of the two universes has a much higher rate * density of bursters than the other universe. For that discovery to be consistent with the hypothetical posed in parent, we must have discovered that fact while somehow becoming or remaining completely ignorant as to which universe we are in.
We might discover further that although we have managed to determine the rate * density of the bursters in the other universe, we cannot travel between the universes. We must suppose something like that because the hypothetical in parent requires that no civilization in one universe can spread to the other one. (We can infer that requirement from the analysis and the conclusion in parent.)
I hope that, now that I have gotten specific and fleshed out your hypothetical a little, you have become open to the possibility that your hypothetical situation is different enough from the situation in which we find ourselves for us to reach a different conclusion.
In the situation in which we find ourselves, one salient piece of evidence we have for or against ET in our past light cone is the fact that there is no obvious evidence of ET in our vicinity, e.g., here on Earth or on the Moon or something.
And again, this piece of evidence is really only evidence against ETs that would let us continue to exist if their expansion reached us, but there’s a non-negligible probability that an ET would in fact let us continue to exist, because there is no strong reason for us to be confident that the ET would not.
In contrast to the situation in which we find ourselves, the hypothetical posed in parent contains an important piece of evidence in addition to the piece I just described. It is analogous to whatever evidence we used to conclude that the revolver contains either zero or one bullet: that structural evidence, combined with the evidence of the results of 1,000,000 iterations of Russian roulette, would cause a perfect Bayesian reasoner to reach a different conclusion than it would reach if it knew nothing of the causal mechanism between {a spin of the revolver followed by a pull of the trigger} and {death or not-death}.
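For what it’s worth, the roulette point can be sketched numerically. This is a toy model with hypothetical numbers; the binary zero-or-one-bullet prior stands in for "knowing the causal mechanism":

```python
import math

# A reasoner who knows the revolver holds either zero or one bullet
# compares just two per-pull survival probabilities, 1 vs 5/6. A
# reasoner with no mechanism knowledge instead spreads a uniform
# prior over the unknown per-pull death probability p.
n = 1_000_000  # observed survivals

# With the structural evidence: posterior log-odds of "one bullet"
# vs "zero bullets" after n survivals, starting from even prior odds.
log_odds_one_bullet = n * math.log(5 / 6)
# Astronomically negative: the one-bullet hypothesis is ruled out.

# Without it: a uniform prior over p gives posterior mean
# p = 1/(n + 2) (Laplace's rule of succession), i.e. a small but
# nonzero residual risk rather than a verdict between two mechanisms.
posterior_mean_p = 1 / (n + 2)
print(log_odds_one_bullet, posterior_mean_p)
```

The two reasoners end up with different kinds of conclusions from the same survival record, which is the sense in which the mechanism evidence matters.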
In your hypothetical, we have somehow managed to acquire evidence for the existence of a second universe and to acquire evidence that life is much more likely in one than in the other.
These need not be actual universes, just hypothetical universes that we have assigned a probability to.
Given most priors over possible universes, the fact we exist will bump up the probability of there being lots of life. The fact we observe no life will bump down the probability, but the first effect can’t be ignored.
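One way to see both pushes at once is a toy odds calculation. Every parameter here is invented for illustration: our existence multiplies the odds of "lots of life" up, and the empty survey multiplies them back down, without exactly cancelling:

```python
# Toy model with hypothetical numbers: q is the chance that any one
# star system evolves an observable civilization.
q_hi, q_lo = 1e-3, 1e-8      # "lots of life" vs "almost none"
n_candidate = 100_000        # systems in which we could have arisen
n_surveyed = 1_000           # nearby systems we observe to be empty

def p_we_exist(q):
    """P(at least one civilization arises among the candidate systems)."""
    return 1.0 - (1.0 - q) ** n_candidate

def p_empty_survey(q):
    """P(none of the surveyed neighbours hosts a visible civilization)."""
    return (1.0 - q) ** n_surveyed

odds = 1.0                                           # even prior odds on q_hi
odds *= p_we_exist(q_hi) / p_we_exist(q_lo)          # existence: update up
odds *= p_empty_survey(q_hi) / p_empty_survey(q_lo)  # empty sky: update down
posterior_hi = odds / (1.0 + odds)
print(posterior_hi)
```

With these particular numbers the upward update from our existence dominates the downward update from the empty survey, illustrating why the first effect can’t be ignored.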
Hence our own existence is evidence about the likelihood of life evolving [you write in great grandparent]
So in your view there is zero selection effect in this probability calculation?
In other words, our own existence increases your probability of there being lots of life just as much as the existence of an extraterrestrial civilization would?
In the previous sentence, please interpret “increase your probability just as much as” as “is represented by the same likelihood ratio as”.
And the existence of human civilization increases your P(lots of life) just as much as it would if you were an immortal invulnerable observer who has always existed and who would have survived any calamity that would have killed the humans or prevented the evolution of humans?
Finally, is there any probability calculation in which you would adjust the results of the calculation to account for an observational selection effect?
Would you for example take observational selection effects into account in calculating the probability that you are a Boltzmann brain?
I can get more specific with that last question if you like.
So in your view there is zero selection effect in this probability calculation?
In other words, our own existence increases your probability of there being lots of life just as much as the existence of an extraterrestrial civilization would?
Depends how independent the two are. Also, myself existing increases the probability of human-like life existing, while the alien civilization increases the probability of life similar to themselves existing. If we’re similar, the combined effects will be particularly strong for theories of convergent evolution.
The line of reasoning for immortal observers is similar.
Finally, is there any probability calculation in which you would adjust the results of the calculation to account for an observational selection effect?
I thought that was exactly what I was doing? To be technical, I was using a variant of full non-indexical conditioning (FNC), which is an unloved bastard son of the SIA (self-indication assumption).
Can I get a yes or no on my question of whether you take the existence of human civilization to be just as strong evidence for the probabilities we have been discussing as you would have taken it to be if you were a non-human observing human civilization from a position of invulnerability?
Actually, “invulnerability” is not the right word: what I mean is, “if you were a non-human whose coming into existence was never in doubt and whose ability to observe the non-appearance of human civilization was never in doubt.”
Can I get a yes or no on my question of whether you take the existence of human civilization to be just as strong evidence for the probabilities we have been discussing as you would have taken it to be if you were a non-human observing human civilization from a position of invulnerability?
If the existence of the “invulnerable non-human” (INH) is completely independent from the existence of human-like civilizations, then:
If the INH gets the information “there are human-like civilizations in your universe”, then this changes his prior for “lots of human-like civilizations” much less than what we get from noticing that we exist.
If the INH gets the information “there are human-like civilizations in your immediate neighbourhood”, then his prior is updated pretty similarly to ours.
Thanks for answering my question. I repeat that you and I are in disagreement about this particular application of observational selection effects (a.k.a. the anthropic principle) and would probably also disagree about their application to an existential risk.
I think you should write that post because thoughtful respected participants on LW use the anthropic principle incorrectly, IMHO. The gentleman who wrote great grandparent, for example, is respected enough to have been invited to attend SIAI’s workshop on decision theory earlier this year. And thoughtful respected participant Cousin It probably misapplied the anthropic principle in the first paragraph of this comment. I say “probably” because the context has to do with “modal realism” and other woolly thinking that I cannot digest, but I have not been able to think of any context in which Cousin It’s “every passing day without incident should weaken your faith in the anthropic explanation” is a sound argument.
(Many less thoughtful or less respected participants here have misapplied or failed to take into account the anthropic principle, too.)
And thoughtful respected participant Cousin It probably misapplied the anthropic principle in the first paragraph of this comment.
It has been a while since I skimmed “Anthropic Shadow”, but IIRC a key point or assumption in their formula was that the more recently a risk could have struck, the less likely ‘we’ are to have observed it occurring, because less time has passed for observers to recover from the catastrophe or for fresh observers to evolve. This suggests a weak version: the longer we have existed, the fewer risks’ absence we need an observer-based principle to explain.
(But thinking about it, maybe the right version is the exact opposite. It’s hard to think about this sort of thing.)
I’ve read “Anthropic Shadow” a few times now. I don’t think I will write a post on it. It does a pretty good job of explaining itself, and I couldn’t think of any uses for it.
The Shadow only biases estimates when some narrow conditions are met:
your estimate has to be based strictly on your own past,
the estimate has to be of a random event,
the events have to be very destructive to observers like yourself,
and they have to be rare to begin with.
So it basically only applies to global existential risks, and there aren’t that many of them. Nor can we apply it to interesting examples like the Singularity, because that’s not a random event but dependent on our development.
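Roughly the kind of bias the paper describes can be shown in a minimal simulation (all parameters here are invented): histories in which the catastrophe struck recently produce no observers, so the surviving observers’ frequency estimate comes out below the true rate:

```python
import random

random.seed(0)
T = 1_000        # length of the historical record, in arbitrary periods
R = 300          # periods observers need to (re)evolve after a hit
lam = 0.002      # true per-period probability of the catastrophe
estimates = []
for _ in range(5_000):
    events = [random.random() < lam for _ in range(T)]
    # Anthropic shadow: if the catastrophe struck within the last R
    # periods, no observer exists to record this history at all.
    if any(events[-R:]):
        continue
    estimates.append(sum(events) / T)

naive = sum(estimates) / len(estimates)
print(naive)  # systematically below the true rate lam = 0.002
```

Surviving observers see, on average, about lam * (T - R) / T instead of lam, and the bias shrinks as R shrinks relative to T, matching the "the longer we exist, the weaker the shadow" intuition upthread.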
Thanks for answering my question. I repeat that you and I are in disagreement about this particular application of observational selection effects, a.k.a., the anthropic principle and would probably also disagree about their application to an existential risk.
Indeed. I, for one, do not worry about the standard doomsday argument and such. I would argue that SIA is the only consistent anthropic principle, but that’s a long argument, and a long post to write one day.
Fortunately, the Anthropic shadow argument can be accepted whatever type of anthropic reasoning you use.
I notice that last month saw the publication of a new paper, “Anthropic Shadow: Observation Selection Effects and Human Extinction Risk” by Bostrom, Sandberg, and my favorite astronomy professor, Milan M. Ćirković.
As an aid to navigation, let me link to the ancestor to this comment at which the conversation turned to observation selection effects.
I have been meaning to write a post summarizing “Anthropic Shadow”; would anyone besides you and me be interested in it?