But by “theory of the universe”, Robin Hanson meant not only the theory of what the physical universe is like, but also the anthropic probability theory. The main candidates are SIA and SSA. SIA is indifferent between T1 and T2. But SSA prefers T1 (after updating on the time of our evolution).
SIA is not indifferent between T1 and T2. There are way more humans in world T1 than in world T2 (since T2 requires life to be very uncommon, which would imply that humans are even more uncommon), so SIA thinks world T1 is much more likely. After all, the difference between SIA and SSA is that SIA thinks that universes with more observers are proportionally more likely; so SIA will always think aliens are more likely than SSA does.
Previously, I thought this was in conflict with the fact that humans didn’t seem to be particularly early (i.e., if life is common, it’s surprising that there aren’t any aliens around 13.8 billion years into the universe’s life span). I ran the numbers, and concluded that SIA still thought that we’d be very likely to encounter aliens (though most of the linked post instead focuses on answering the decision-relevant question “how much of potentially-colonisable space would be colonised without us?”, evaluated ADT-style).
After having read Robin’s work, I now think humans probably are quite early, which would imply that (given SIA/ADT) it is highly overdetermined that aliens are common. As you say, Robin’s work also implies that SSA agrees that aliens are common. So that’s nice: no matter which of these questions we ask, we get a similar answer.
I didn’t fully define those theories, and, indeed, if they depended on commonness of life, then SIA would prefer T1.
But if I posited instead that T1 and T2 differ only in the propensity for aliens to become grabby or not, then SIA would indeed be indifferent between them.
Good point, I didn’t think about that. That’s the old SIA argument for there being a late filter.
The reason I didn’t think about it is that I use SIA-like reasoning in the first place because it pays attention to the stakes in the right way: I think I care about acting correctly in universes with more copies of me almost-proportionally more. But I also care more about universes where civilisations-like-Earth are more likely to colonise space (i.e., become grabby), because that means that each copy of me can have more impact. That kind of cancels out the SIA argument for a late filter, mostly leaving me with my priors, which point toward a decent probability that any given civilisation colonises space in a grabby manner.
Also: if Earth-originating intelligence ever becomes grabby, that’s a huge Bayesian update in favor of other civilisations becoming grabby, too. So regardless of how we describe the difference between T1 and T2, SIA will definitely think that T1 is a lot more likely once we start colonising space, if we ever do that.
So regardless of how we describe the difference between T1 and T2, SIA will definitely think that T1 is a lot more likely once we start colonising space, if we ever do that.
SIA isn’t needed for that; standard probability theory will be enough (as our becoming grabby is evidence that grabbiness is easier than expected, and vice-versa).
I think there’s a confusion with SIA and reference classes and so on. If there are no other exact copies of me, then SIA is just standard Bayesian update on the fact that I exist. If theory T_i has prior probability p_i and gives a probability q_i of me existing, then SIA changes its probability to q_i*p_i (and renormalises).
Effects that increase the expected number of other humans, other observers, etc… are indirect consequences of this update. So a theory that says life in general is easy also says that me existing is easy, so gets boosted. But “Earth is special” theories also get boosted: if a theory claims life is very easy but only on Earth-like planets, then those also get boosted.
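To make that concrete, here’s a minimal sketch of the update rule just described. The two theories and all the numbers are made up purely for illustration:

```python
# Minimal sketch of the SIA update described above: each theory T_i with
# prior p_i and probability q_i of "I exist" gets posterior proportional
# to p_i * q_i. The theories and numbers below are purely illustrative.
priors = {"life_is_easy": 0.5, "life_is_hard": 0.5}          # p_i (hypothetical)
prob_i_exist = {"life_is_easy": 1e-3, "life_is_hard": 1e-6}  # q_i (hypothetical)

unnormalised = {t: priors[t] * prob_i_exist[t] for t in priors}
total = sum(unnormalised.values())
posteriors = {t: w / total for t, w in unnormalised.items()}

print(posteriors)  # "life_is_easy" ends up ~1000x more likely than "life_is_hard"
```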
SIA isn’t needed for that; standard probability theory will be enough (as our becoming grabby is evidence that grabbiness is easier than expected, and vice-versa).
I think there’s a confusion with SIA and reference classes and so on. If there are no other exact copies of me, then SIA is just standard Bayesian update on the fact that I exist. If theory T_i has prior probability p_i and gives a probability q_i of me existing, then SIA changes its probability to q_i*p_i (and renormalises).
Yeah, I agree with all of that. In particular, SIA updating on us being alive on Earth is exactly as if we sampled a random planet from space, discovered it was Earth, and discovered it had life on it. Of course, there are also tons of planets that we’ve seen that don’t look like they have life on them.
But “Earth is special” theories also get boosted: if a theory claims life is very easy but only on Earth-like planets, then those also get boosted.
I sort-of agree with this, but I don’t think it matters in practice, because we also update against “Earth-like planets are rare” as soon as we observe that the planet we sampled is Earth-like.
Here’s a model: Assume that there’s a conception of “Earth-like planet” such that life-on-Earth is exactly equal evidence for life emerging on any Earth-like planet, and 0 evidence for life emerging on other planets. This is clearly a simplification, but I think it generalises. “Earth-like planet” could be any rocky planet, any rocky planet with water, any rocky planet with water that was hit by an asteroid X years into its lifespan, etc.
Now, if we sample a planet (Earth) and notice that it’s Earth-like and has life on it, we do two updates:
Noticing that Earth is an Earth-like planet should update us towards thinking that Earth-like planets are common in the universe.
Noticing that life emerged on Earth should update us towards thinking that life has a high probability of emerging on Earth-like planets.
If we don’t know anything else about the universe yet, these two updates should collectively imply an update towards life-is-common that is just as big as if we hadn’t done this decomposition, and just updated on the hypothesis “how common is life?” in the first place.
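Here’s a toy numerical check of that claim (a sketch only: the grids of hypotheses and the uniform prior are invented for illustration). Updating a joint prior over “how common are Earth-like planets” and “how likely is life on an Earth-like planet” on the observation “our sampled planet is Earth-like and has life”, then marginalising onto “how common is life”, gives the same answer as updating on “how common is life” directly:

```python
# Toy check: decomposing the update into "how common are Earth-like planets" (f)
# and "how likely is life on an Earth-like planet" (l) gives the same posterior
# over "how common is life" (c = f*l) as updating on c directly.
# The grids and the uniform prior are purely illustrative.
from collections import defaultdict

f_values = [0.01, 0.1, 0.5]   # hypothetical fractions of planets that are Earth-like
l_values = [0.001, 0.1, 0.9]  # hypothetical chances of life on an Earth-like planet

# Decomposed route: uniform joint prior over (f, l); observing "our sampled
# planet is Earth-like and has life" has likelihood f * l.
joint = {(f, l): (1 / 9) * f * l for f in f_values for l in l_values}
total = sum(joint.values())
joint = {k: v / total for k, v in joint.items()}

# Marginalise the joint posterior onto c = f * l.
c_decomposed = defaultdict(float)
for (f, l), p in joint.items():
    c_decomposed[f * l] += p

# Direct route: prior over c induced by the same grids, likelihood c.
c_prior = defaultdict(float)
for f in f_values:
    for l in l_values:
        c_prior[f * l] += 1 / 9
c_direct = {c: p * c for c, p in c_prior.items()}
total = sum(c_direct.values())
c_direct = {c: v / total for c, v in c_direct.items()}

# Both routes agree on "how common is life".
for c in c_direct:
    assert abs(c_direct[c] - c_decomposed[c]) < 1e-12
print("decomposed and direct updates match")
```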
Now, let’s say we start observing the rest of the universe. Let’s assume this happens via sampling random planets and observing (a) whether they are or aren’t Earth-like, and (b) whether they do or don’t have life on them.
If we sample a non-Earth-like planet, we update towards thinking that Earth-like planets aren’t common.
If we sample an Earth-like planet without life, we update towards thinking that Earth-like planets have a lower probability of supporting life.
I haven’t done the math, but I’m pretty sure that it doesn’t matter which of these we observe. The update on “How common is life?” will be the same regardless. So the existence of “Earth is special”-hypotheses doesn’t matter for our best guess of “How common is life?”, if we only consider the impact of observing planets with/without Earth-like features and life.
Of course, observing planets isn’t the only way we can learn about the universe. We can also do science, and reason about the likely reasons that life emerged, and how common those things ought to be.
That means that if you can come up with a strong theoretical argument (that isn’t just based on observing how many planets are Earth-like and/or had life on them, including Earth) that some feature of Earth significantly boosts the probability of life and that that feature is extremely rare in the universe at large, then that would be a solid argument for expecting life to be rare in the universe. However, note that you’d have to argue that it was extremely rare. If we’re assuming that grabby aliens could travel over many galaxies, then we’ve already observed evidence that grabby life is sufficiently rare to not yet have appeared on any of a very large number of planets in any of a very large number of galaxies. Your theoretical reasons to expect life to be rare would have to assert that it’s even rarer than that to impact the results.
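As a rough illustration of that last point (the planet count below is a hypothetical placeholder, not an estimate from the post or from Robin’s model):

```python
# Rough illustration of the point above. The planet count is a hypothetical
# placeholder, not an estimate from the post or from Robin's model.
planets_probed_by_absence = 1e20  # hypothetical: planets from which a grabby
                                  # civilisation would already be visible here

# Seeing zero grabby civilisations among that many chances puts the per-planet
# probability of having spawned one at very roughly < 1/N (order of magnitude).
rough_upper_bound = 1 / planets_probed_by_absence
print(f"per-planet 'grabby' probability is already roughly < {rough_upper_bound:.0e}")

# A theoretical "Earth is special" argument only changes the bottom line if it
# implies a per-planet probability even smaller than a bound like this.
```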