An underlying assumption of the Grabby Aliens paper by Robin Hanson et al., if I understand it, is the following:
We should expect to find ourselves as a member of a uniformly-randomly-selected civilization out of all civilizations in the history of the universe.
In other words, if there’s a master list of every civilization in the universe’s past, present, and future, our prior should be that our human civilization should be uniformly-randomly selected from that list. If you accept that assumption, then you’re obligated to perform a Bayesian update towards hypotheses-about-the-universe that predict a master-list-of-all-civilizations with the property that human civilization looks like a “typical” civilization on the list. My impression is that this assumption (and corresponding Bayesian update) is the foundation upon which the whole Grabby Aliens paper is built. (Well, that plus the assumption that there are no other Bayesian updates that need to be taken into account, which I think is dubious, but let’s leave that aside.)
If that’s right, I’m confused where this assumption comes from. When I skim discussions of anthropic reasoning (e.g. in Nick Bostrom’s book, and on lesswrong, and on Joe Carlsmith’s blog, etc.), I see lots of discussion about “SIA” and “SSA” and “UDASSA” and so on. But the Grabby Aliens assumption above seems to be none of those things—in fact, it seems to require strongly rejecting all of them! (E.g., note how the Grabby Aliens assumption does not weight civilizations by their population.)
I feel like I’m missing something. I feel like there are a bunch of people who have spent a bunch of time thinking about anthropics (I’m not one of them), and who endorse some “standard” anthropic reasoning framework like SIA or SSA or UDASSA or whatever. Do all those people think that the Grabby Aliens paper is a bunch of baloney? If so, have they written about that anywhere? Or am I wrong that they’re contradictory? (Or conversely, has anyone tried to spell out in detail why the Grabby Aliens anthropic assumption above is a good assumption?)
I’ve been studying & replicating the argument in the paper [& hope to share results in the next few weeks].
The argument implicitly uses the self-sampling assumption (SSA) with reference class of observers in civilizations that are not yet grabby (and may or may not become grabby).
Their argument is similar in structure to the Doomsday argument:
If there are no grabby aliens (and longer-lived planets are habitable), then there will be many civilizations that appear far in the future, making us highly atypical (in particular, ‘early’ in the distribution of arrival times).
If there are sufficiently many grabby aliens (but not too many) they set a deadline (after the current time) by when all civilizations must appear if they appear at all. This makes civilizations/observers like us/ours that appear at ~13.8Gy more typical in the reference class of all civilizations/observers that are not yet grabby.
Throughout we’re assuming the number of observers per pre-grabby civilization is roughly constant. This lets us be loose with the civilization–observer distinction.
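The structure of the argument can be illustrated with a toy simulation. Everything below is my own sketch with made-up numbers (a t³ “hard steps” power law for arrival times, a 20 Gy grabby deadline, and 10⁴ Gy as the lifetime of the longest-lived habitable planets), not the paper’s actual model:

```python
import random

def sample_arrival(deadline, n=3, t_max=1e4):
    # Arrival-time CDF proportional to t^n (hard-steps power law),
    # truncated at the grabby deadline, or at t_max (the lifetime of the
    # longest-lived habitable planets) if there is no deadline.
    # Times in Gy; inverse-CDF sampling of t^n on [0, cutoff].
    cutoff = min(deadline, t_max)
    return cutoff * random.random() ** (1.0 / n)

def percentile_of(t, samples):
    return 100.0 * sum(s <= t for s in samples) / len(samples)

random.seed(0)
no_grabby = [sample_arrival(deadline=float("inf")) for _ in range(100_000)]
grabby = [sample_arrival(deadline=20.0) for _ in range(100_000)]

print(f"no grabby aliens: ~13.8Gy is at the {percentile_of(13.8, no_grabby):.2f}th percentile")
print(f"grabby deadline:  ~13.8Gy is at the {percentile_of(13.8, grabby):.1f}th percentile")
```

Without a deadline, essentially every sampled civilization arrives much later than 13.8 Gy, so we’d sit at roughly the 0th percentile; with a ~20 Gy deadline, our arrival time lands well inside the bulk of the distribution.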
I don’t think the reference class is a great choice. A more natural choice would be the maximal reference class (which includes observers in grabby alien civilizations) or the minimal reference class (containing only observers subjectively indistinguishable from you).
It’s best, in my judgement, to not use reference classes at all when doing anthropics. Explained more in this sequence: https://www.lesswrong.com/s/HFyami76kSs4vEHqy
Thanks!
Maybe I’m misunderstanding SSA, but wouldn’t “SSA with reference class of observers in civilizations that are not yet grabby” require that we weight by the relevant populations?
For example, if Civilization A has 10× more citizens (before becoming grabby or going extinct) than does Civilization B, wouldn’t our prior be that we’re 10× likelier to find ourselves in Civilization A than B?
Yep, you’re exactly right.
We could further condition on something like “observing that computers were invented ~X years ago” (or something similar that distinguishes observers like us) such that the (eventual) population of civilizations doesn’t matter. This conditioning means we don’t have to consider that longer-lived planets will have greater populations.
If we’re allowed to “observe” that computers were invented 80 years ago, why can’t we just “observe” that the universe is 13.8 billion years old, and thus throw the whole Grabby Aliens analysis in the garbage? :-P (Sorry if that sounds snarky, it’s an honest question and I’m open-minded to there being a good answer.)
Doesn’t sound snarky at all :-)
Hanson et al. are conditioning on the observation that the universe is 13.8 billion years old. On page 18 they write
Formally (and I think spelling it out helps), with SSA and the above reference class, our likelihood ratio is the ratio of [number of observers in pre-grabby civilizations that observe Y] to [number of observers in pre-grabby civilizations], where Y is our observation that the universe is 13.8 billion years old, we are on a planet that has been habitable for ~4.5Gy and has a total habitability of ~5.5Gy, we don’t observe any grabby civilizations, etc.
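To make that ratio concrete, here is a toy computation. The t³ arrival law, the 20 Gy deadline, the 10⁴ Gy maximum habitable lifetime, and treating “observes Y” as “arrival time within ±1 Gy of 13.8 Gy” are all stand-in assumptions of mine (along with equal observers per pre-grabby civilization), not the paper’s numbers:

```python
# Toy SSA likelihood sketch. Under each hypothesis H,
#   P(Y | H) = [pre-grabby observers who observe Y] / [all pre-grabby observers],
# which, with equal observers per civilization, is just the fraction of
# civilizations whose arrival time lands near 13.8 Gy.

def likelihood(deadline, n=3, t_max=1e4):
    cutoff = min(deadline, t_max)
    lo, hi = 12.8, 14.8  # "observes Y" ~ arriving within 1 Gy of 13.8 Gy
    cdf = lambda t: min(t / cutoff, 1.0) ** n  # arrival-time CDF ∝ t^n
    return cdf(hi) - cdf(lo)

p_no_grabby = likelihood(deadline=float("inf"))
p_grabby = likelihood(deadline=20.0)
print(f"likelihood ratio (grabby : no grabby) ≈ {p_grabby / p_no_grabby:.2g}")
```

With these stand-in numbers the grabby hypothesis makes our observation many orders of magnitude more likely, which is the shape of the update the paper relies on.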
Oh, I think I phrased my last comment poorly.
You originally wrote “We could further condition on something like “observing that computers were invented [80] years ago” … This conditioning means we don’t have to consider that longer-lived planets will have greater populations.”
I interpreted this comment as you saying “We could restrict our SSA reference class to only include observers for whom computers were invented 80 years ago”. (Is that right?)
And then I was trying to respond to that by saying “Well if we can do that, why can’t we equally well restrict our SSA reference class to only include observers for whom the universe is 13.8 billion years old? And then “humanity is early” stops being true.”
Ah, I don’t think I was very clear either.
What I wanted to say was: keep the reference class the same, but restrict the types of observers we say we are (the numerator in the SSA ratio) to be only those who (amongst other things) observe the invention of the computer 80 years ago.
Yep, one can do this. We might still be atypical if we think longer-lived planets are habitable (since life has more time to appear there), but we could also restrict the reference class further. Eventually we end up at minimal-reference-class SSA.
If there are no grabby aliens, then our civilization is highly atypical. But if there are grabby aliens, then we as individuals are highly atypical, living before the space expansion, which will control orders of magnitude more resources and can therefore support orders of magnitude more sentient observers.
A possible solution would be if the grabby aliens have to sacrifice their sentience in return for greater expansion speed: a global race to the bottom, where those who do not reduce themselves to the most efficient replicators get outcompeted by those who do. If replicators without sentience are 1% more efficient at replication than replicators with sentience, in the long run that is all that matters.
(Actually, this also seems to get the math wrong. Even if grabby aliens gradually lose sentience and become pure replicators, as long as they don’t lose the sentience immediately, there should still be orders of magnitude more sentient observers in the early phase of expansion than before the expansion. So our situation before the expansion remains highly atypical.)
Hi, coauthor of the Grabby Aliens paper here.
In my view, the correct way to calculate in many anthropic problems is along the lines of the well-explored case of Everett physics: by operationalising the problem in decision theoretic terms.
For the sleeping beauty problem, if one embeds the problem in a repeated series involving bets, and if each bet feeds into a single pot, you arrive at the Thirder position. There is then a consistency argument to make the single-shot problem match that.
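A minimal simulation of the repeated-bets version (my own sketch, not from any particular source): Heads gives one awakening, Tails gives two, and every awakening feeds into a single pot, so bets should be priced by the fraction of awakenings that occur under Tails.

```python
import random

# Repeated Sleeping Beauty: Heads -> 1 awakening, Tails -> 2 awakenings.
# The fraction of awakenings that happen under Tails converges to 2/3,
# matching the Thirder credence for per-awakening bets into one pot.

random.seed(0)
tails_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    tails = random.random() < 0.5
    n = 2 if tails else 1
    total_awakenings += n
    if tails:
        tails_awakenings += n

print(f"P(Tails | awake) ≈ {tails_awakenings / total_awakenings:.3f}")
```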
Similarly, for the Grabby Aliens problem, consider that civilisations may lodge predictions about the distance to the nearest GC, which can be compared to other civilisations’ guesses in the intergalactic council at a later date. Or choose a repeated game in which members of the council reset themselves to the spacetime origin point of a random other GC in the council, by simulation or another method, and make bets from there. The single-shot case, i.e. humanity’s predicament, should have a matching strategy.
It is statistical prediction in this sense that I had in mind when helping with calculations+concepts for the paper.
As I’ve argued, anthropic reasoning, absent indistinguishable copies, is nothing special: “I observe X” gives the same update as “I exist and observe X”.
So, what theory best explains the fact we exist and don’t observe aliens? Apart from the various “zoo hypotheses” (there are aliens but they are hiding), the most likely theory is “evolving life is not that hard, but humans evolved unusually early”. The first half makes our existence more likely, the second explains our non-observation of aliens (again, “we don’t observe aliens” is the same as “we exist and don’t observe aliens”, which is the same as “early aliens didn’t kill or conquer humanity, and we don’t observe aliens”).
Grabby Aliens works on similar logic to well-known anthropic camps such as SSA and SIA: consider what we are as an Observation Selection Effect. As you wrote, treat ourselves as random selections from a list containing everyone. The main difference is that regular anthropic camps typically apply this to individual observers, while Grabby Aliens applies it to civilizations.
Whether this reflects good anthropic reasoning is hard to answer. If one endorses the regular anthropic camps, then Grabby Aliens’ logic is at least incomplete: it should incorporate how many observers different civilizations have. But it should be noted that applying the Observation Selection Effect at the observer level is not watertight either. Maybe it should be applied at the level of observer-moments: what I am experiencing now should be regarded as randomly selected from all observer-moments. Then the theory ought to be further updated to reflect the life spans of all observers from different civilizations…
I personally firmly believe the typical OSE way of anthropic reasoning is plainly wrong. What “I” am, or more precisely what the first-person perspective is, cannot be derived by reasoning. It is a primitive, axiomatic fact. I.e., “I naturally know I am this person. But there is no reason behind it, nor an explanation for why it is so. I just am.” Attempting to explain it as a random sample only leads to paradoxes. A starter of my argument can be read here.
I have. Hi! I think the reasoning is approximately correct. The caveat is that “civilization” is not an ontologically basic element in the calculation. What you should update on is your total set of observations, and then you should prefer universes where that set of observations is more likely to be instantiated. But (without reading the grabby aliens paper) it sounds to me like this approximates the update that the paper makes.
I thought that my model aligns with UDASSA, but I’ve derived it independently and I’m not sure.
For example, IIUC, Grabby Aliens is claiming:
We are a member of a uniformly-randomly-selected civilization out of all civilizations in the past, present, and future of the universe.
We are not a uniformly-randomly-selected individual out of all individuals in the past, present, and future of the universe. For example, if Civilization A contains 10^20× more individuals than Civilization B, then our prior should be that we are 10^20× more likely to be any particular individual in Civilization B than any particular individual in Civilization A.
We are not on a uniformly-randomly-selected civilized planet out of all civilized planets in the past, present, and future of the universe. Same idea as #2 above.
We are not a uniformly-randomly-selected individual out of all individuals in the past, present, and future of Earth. For example, if there will eventually be 100 trillion intelligent individuals on Earth, we should not update on the fact that we are unusually early.
You agree with all four of these? The contrast between #1 vs #4 seems especially weird to me—if we’re going to update on human civilization being early with respect to all civilizations, shouldn’t we also update on me being early with respect to all intelligent Earthlings?? #4 is of course the doomsday argument, which incidentally Robin Hanson rejects, which seems inconsistent to me.
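To make the contrast between civilization-level and individual-level sampling concrete, here is a toy calculation with two stand-in civilizations, A with 10 observers and B with 1 (the 10:1 ratio is just an illustrative stand-in for 10^20):

```python
from fractions import Fraction

# Two stand-in civilizations with different populations.
pops = {"A": 10, "B": 1}

# Rule 1 (civilization-uniform, as in #1): pick a civilization uniformly,
# then an observer within it.
p_civ = {c: Fraction(1, len(pops)) for c in pops}
p_obs_civ_uniform = {c: p_civ[c] / n for c, n in pops.items()}

# Rule 2 (observer-uniform, applying the selection at the observer level):
# pick an observer uniformly from everyone.
total = sum(pops.values())
p_obs_uniform = {c: Fraction(1, total) for c in pops}

# Under Rule 1, each observer in small B is 10x likelier than each in A:
print(p_obs_civ_uniform["B"] / p_obs_civ_uniform["A"])  # → 10
# Under Rule 2, every observer is equally likely:
print(p_obs_uniform["B"] / p_obs_uniform["A"])          # → 1
```

Rule 1 is the Grabby Aliens-style assumption in #1; Rule 2 is what #2 denies. The two rules give different per-individual priors whenever populations differ.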
No, not quite; if this list is correct, I was wrong about what G/A claims.
You are a uniformly sampled observer-moment (according to my model). That means you should have a master list of all instances that could implement this moment and then assume you’re sampled from those. This is in fact the beginning and end of my model. To make this more manageable, you can assume your memories from the last five minutes are accurate,[1] and then draw a slightly larger box, i.e., “I’m a randomly sampled 5-minute segment”.
Applying this:
I agree with #2 because you see that you live in a civilization with 7×10^10 people, not one with 10^20.
Ditto #3.
Ditto #4, you’re not randomly sampled out of people who live early and late because you see that you live early. The question for doomsday is whether a universe where lots of civilizations go extinct makes it more likely to see that you’re early (plus everything else you see), and I don’t see why it would.
So the way I disagree with #1 is similar; we can see that we’re early in the history of the universe. If GA relies on ignorance on that point (right now I can’t figure out from memory if it does), I probably disagree with it. I guess I’ll come back to this when I reread the paper or at least the video.
This goes wrong iff you are a Boltzmann brain or something similar, which my model is perfectly happy to treat as a coherent possibility, but Boltzmann brains are extremely complex, so this should not give you a lot of moments.
It’s none of the three-letter acronyms because it actually uses our knowledge that human civilization exists, and that we have a certain distribution over physical law. I think it’s basically fine, though I think the paper falls into some pitfalls in saying it “explains” certain things without showing that there are more “microstates” of the model where their “explanation” works than where it doesn’t.
EDIT: In response to Tristan’s answer, I’d say that you can start with this distribution and recover different three-letter acronyms by ablating away different pieces of knowledge. Like Rafael says, the important thing is taking the knowledge we actually do have and thinking about different ways the rest of the universe could be.
In my view, SIA and SSA become the same in an infinite universe, so there is no difference.
My question was about the Grabby Aliens paper’s assumption. As far as I understand, that assumption is its own idiosyncratic thing which is neither SIA nor SSA (e.g., because “civilizations” are weighted equally regardless of their populations).