Anthropics doesn’t explain why the Cold War stayed Cold
(Epistemic status: There are some lines of argument that I haven’t even started here, which potentially defeat the thesis advocated. I don’t go into them either because this is already too long or because I can’t explain them adequately without derailing the main thesis. Similarly, some continuations of chains of argument and counterargument begun here are cut short in the interest of focussing on the lower-order counterarguments. Overall, this piece probably overstates my confidence in its thesis. It is quite possible this post will be torn to pieces in the comments—possibly by my own aforementioned elided considerations. That’s good too.)
I
George VI, King of the United Kingdom, had five siblings. That is, the father of the current Queen, Elizabeth II, had as many siblings as there are fingers on a typical human hand. (This paragraph is true, and is not a trick; in particular, the second sentence of this paragraph really is trying to disambiguate and help convey the fact in question and relate it to prior knowledge, rather than introduce an opening for some sleight of hand so I can laugh at you later, or whatever fear such a suspiciously simple proposition might engender.)
Let it be known.
II
Exactly one of the following stories is true:
Story One
Recently I hopped on Facebook and saw the following post:
“I notice that I am confused about why a nuclear war never occurred. Like, I think (knowing only the very little I know now) that if you had asked me, at the start of the Cold War or something, the probability that it would eventually lead to a nuclear war, I would’ve said it was moderately likely. So what’s up with that?”
The post had 14 Likes. In the comments, the most-Liked explanation was:
“anthropically you are considerably more likely to live in a world where there never was a fullscale nuclear war”
That comment had 17 Likes. The second-most-liked comment that offered an explanation had 4 Likes.
Story Two
Recently I hopped on Facebook and saw the following post:
“I notice that I am confused about why George VI only had five siblings. Like, I think (knowing only the very little I know now) that if you had asked me, at his birth or something, the probability that he would eventually have had only five siblings and no more, I would’ve said it was moderately unlikely. So what’s up with that?”
The post had 14 Likes. In the comments, the most-Liked explanation was:
“anthropically you are considerably more likely to live in a world where George VI never had more siblings”
That comment had 17 Likes. The second-most-liked comment that offered an explanation had 4 Likes.
~
Which of the stories is true?
III
It wasn’t a trick question; the first story was of course the true one, and the second false. (If you didn’t think it was a trick but still guessed otherwise, then I want you to tell me how the heck I can get discussions like the second story on my feed.)
Even if one disagrees with it as an explanation, invoking anthropics in the context of a nuclear exchange is by now a familiar part of the conversational landscape, but it most certainly is not in the case of George VI’s lack of further siblings. This smacks to me of a contextual bias: we treat nuclear exchanges as qualitatively different from George VI’s siblings with respect to anthropic explanations, but I am suspicious of this distinction.***
The obvious defence in the face of my suspicion is to point out that the qualitative distinction arises from the nuclear exchange being an ‘observer-killing’ event, whereas George VI having more siblings is not. Here, ‘observer-killing’ means that the event seems incompatible with the observer’s existence (i.e. you wouldn’t exist in a world that had suffered a nuclear exchange, since this would significantly affect world history), rather than that the nuclear exchange would directly have killed the would-be observer.
(At this point, I would like to remind you that George VI had five siblings; the same number as you presumably have fingers on each of your hands.)
(Also, note that the Facebook thread was talking about a nuclear exchange, not a nuclear extinction event. Indeed, I have often seen it claimed that there were never actually enough nuclear weapons in the world for extinction to even take place. I am not convinced this aside makes a difference to the justifiability of anthropic explanations, but I note it since it is at least as hard to justify anthropic explanations with a non-extinction nuclear exchange as it is with a potential extinction event.)
However, this defence does not hold when we examine the motivation for anthropic evidence in more detail. The reason a nuclear exchange seems observer-killing is that the history of a world with a nuclear exchange will not feature you being alive in 2014, for example due to economic or civilizational setback caused by the nuclear exchange. More precisely,
Probability(You exist | No nuclear exchange) > Probability(You exist | Nuclear exchange),
whereas
Probability(You exist | George VI had five (or fewer) siblings) = Probability(You exist | George VI had six or more siblings).
The idea of anthropic evidence is motivated by the epistemological principle that one must condition, in a Bayesian update, on all information (evidence) available to oneself, including one’s own existence. Anthropic evidence is unconventional; it arises from taking this epistemological principle seriously to an uncommon extent.
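To make the update explicit (a minimal sketch in odds form, writing E for ‘You exist’, H1 for ‘No nuclear exchange’, and H2 for ‘Nuclear exchange’):

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}$$

The inequality above says the likelihood ratio exceeds one, so conditioning on one’s existence shifts the odds towards ‘No nuclear exchange’; that shift is the entire content of the anthropic explanation.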
(Fix it in your mind that George VI had five siblings.)
However, taking this principle even further, and bearing in mind that you now know that George VI had five siblings, I might ask you why George VI didn’t have more siblings.
…and since we now know that George VI didn’t have more siblings, we obtain
Probability(You exist [and know that George VI had exactly five siblings] | George VI had more than five siblings) = 0
since the ‘Evidence’ in our likelihood Probability(Evidence | Hypothesis) now includes your knowledge that George VI had exactly five siblings.
Oh look, I just made ‘George VI had five siblings’ an observer-killing event.
Or, less sensationally: insomuch as one can explain the absence of a nuclear exchange by one’s existence, one can now also explain George VI’s exact number of siblings, or any other part of one’s knowledge.
In fact, ‘anthropic effects’ or ‘survivorship bias’ is a fully general ‘explanation’ for why anything is the case rather than some contradictory fact of the matter. This is a strong form of actualism that, when presented in such terms, is generally rejected (or at least one that I would expect some of the people deploying anthropic explanations to reject).
I am skeptical of counterarguments along the lines of ‘but you, the observer, completely fail to exist in the event of a nuclear exchange, whereas only some tiny part of your knowledge is lost if George VI has more siblings’. I am not sure on what grounds one would justify treating ‘being alive’ (which is a fuzzy human concept) as the relevant point of divergence while excluding ‘being alive but slightly different’, and the whole line of argument reeks of the kind of anthropocentric epistemology that has led to (for example) ‘Hrm, looks like the observation of a conscious entity causes wavefunction collapse.’
IV
I realize that the extreme interpretation of ‘condition on all information’ that I have invoked here looks very pedantic, and one suspects that it might be more Clever than wise. After all, even if the argument presented for anthropic evidence is refuted by my considerations, there might be other legitimate processes for harvesting anthropic evidence that do allow ‘survivorship bias’ to be an appropriate explanation for ‘why wasn’t there a nuclear exchange’.
Another angle of offence against anthropic explanations in this context gives me more confidence that they are inappropriate (and thereby more confidence that the particular argument I gave above holds):
Even if one insists that there is a relevant qualitative difference between a nuclear exchange and George VI’s siblings, according to some criteria, one can think of any number of similar questions that do not feel anthropics-appropriate but which fall on the same side of the criteria as a nuclear exchange.
For example, if one thinks the qualitative difference is the possibility of civilizational collapse or economic setback, then one can explain the absence of World War III without reference to history, merely with one’s existence. That might, in fact, seem legitimate, but then we have to explain why the first two World Wars took place.
Similarly, if one thinks the qualitative difference is that more people existed for one’s anthropic soul to be epiphenomenally bound to in worlds without a nuclear exchange than in worlds with one, then again one has to explain why we are in a model where any setbacks have happened at all.
It seems very ‘mysterious’ that, when not talking about Conventionally Anthropic-y Thingies like nuclear exchanges, we explain questions of history or geopolitics or fertility using the typical, direct, causal considerations of the relevant fields. But when nuclear weapons come up, we switch into Anthropic Thinking and abandon, say, geopolitical or game-theoretic explanations in favour of observer selection effects.
V
Another perspective that views anthropic explanations unfavourably is a pragmatic account that begins by considering what we want when asking a question like the one posed in the Facebook post. It seems to me that the post is basically asking for one of the following:
(A) Coincidence: Evidence that the absence of a nuclear exchange was coincidental, in the sense that there were no identifiable causal factors keeping a nuclear exchange from happening, or at least that any such factors are not relevant (e.g. not helpful for making predictions about future calamities, not helpful for understanding international relations, etc.)
(B) Faulty model: Evidence that there are relevant factors that the original poster overlooked or weighed incorrectly that, if weighed correctly, would decrease the probability one would give at the start of the Cold War for a subsequent nuclear exchange
In case (A), we want to know so that we can confirm that there is nothing actionable to be learned from the incorrect prediction. In case (B), we want to know so that we can learn from any systematic mistakes that might crop up in our understanding of such situations. An anthropic explanation advances neither of these projects.
Imagine that you are drenched if and only if somebody turned on a sprinkler next to you. We could represent this graphically by ‘Somebody turns on sprinkler’ --> ‘Water is sprinkled towards you’ --> ‘You get soaked’ --> ‘You indignantly ask why you are soaked’, with each of the successive conditional probabilities being one (certainty). It would not be a particularly useful reply to the question (“Why am I soaked?!”) for someone to say, “Well, in every model in which you’re not soaked, you don’t think to ask that question.” It is trivially true that you ask the question if and only if you are soaked, but this perfect correlation is not the causal information being sought; the observer-selection explanation arises entirely from the regular, causal, useful factors upstream of it.
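For concreteness, here is a minimal simulation sketch of that graph (hypothetical code, not from the original post; the 0.3 base rate for the sprinkler is an arbitrary assumption):

```python
import random

def run_world():
    sprinkler_on = random.random() < 0.3  # assumed base rate; any value works
    soaked = sprinkler_on                 # water sprinkled towards you --> you get soaked
    asks_why = soaked                     # you get soaked --> you ask why you are soaked
    return sprinkler_on, soaked, asks_why

worlds = [run_world() for _ in range(100_000)]

# The observer-selection 'explanation': in every world where the question is
# asked, the asker is indeed soaked...
askers = [w for w in worlds if w[2]]
print(all(soaked for _spr, soaked, _ask in askers))    # True

# ...but that correlation is wholly inherited from the upstream cause, which is
# the explanation actually being sought:
print(all(sprinkler for sprinkler, _s, _a in askers))  # True
```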
So long as you believe that your weight of experience is distributed among your instantiations according to some prior on initial conditions, with causal laws governing the redistribution of weight of experience thereafter (for example, if one starts with a prior over quantum wavefunctions and then allocates weight of experience within each wavefunction according to its evolution and the Born rule), then anthropic explanations are lossy compressions of causal explanations, as with the sprinkler; insomuch as any event is meaningfully explainable, there must be a causal explanation.
(This is equivalent to the deep point that the Doomsday Argument or the Great Filter only rephrase our priors in ways that seem significant, so that updating on them is the mistake of updating twice on the same evidence—or, more accurately, of updating on one’s priors!###)
Now, there might be rules for allocating weight of experience that somehow favour anthropic perspectives. But there seems to be no reason to expect any unbiased set of rules to favour anthropic perspectives specifically, or even to support anthropics rather than penalize it, and my conjunctive probability that there exist such unbiased supporting sets of rules and that the people explaining the absence of a nuclear exchange are doing so with those unbiased rules in mind…is not very high.
VI
I’ve given several reasons to be skeptical of anthropic explanations like the one quoted early in this post. Even if the considerations here are incomplete, they suggest that the matter is more involved and less clear-cut than many seem to believe.
On the other hand, possibly the pattern of agreement in the comments on the Facebook post was about showing off understanding (or even just having heard of) anthropic arguments, or rewarding Cleverness, rather than endorsement of anthropics as the One True Explanation. Maybe I’m not actually much less confident than others about anthropic explanations, and I misread the situation?
***There is at least one unmentioned line of potential redemption that I see for the instinct to treat these cases as qualitatively different, but for the sake of allocating attention to the points raised here, in the interest of not jumping a few rungs up the ladder, and because I have not explored that avenue so well, I shall pass over it. Ideally I shall explain the line in question eventually, but first I would prefer to build up the preceding rungs. If I see anyone raise it anyway, I shall publicly award them many Knave points.
###This also leaves an opening; since we are not Bayesian reasoners, it might be practical to try to construct priors after-the-fact from considerations such as observer selection effects. But then we are in murky enough territory that it is not clear why we should give much weight to observer selection effects compared to the countless other types of evidence we can learn from—or indeed that we should take selection effects into account at all. This point deserves further thought, though.
Alice notices that George VI had five siblings. She asks Bob why that is. After all, it’s so much more likely for him to have a number of siblings other than five. Bob tells her that it’s a silly question. The only reason she picked out five is that that’s how many siblings he had. If he’d had six siblings, she (or rather someone else, because it’s not going to be the same people) would be asking why he had six siblings. There’s no coincidence.
Alice notices that Earth survived the Cold War. She asks Bob why that is. After all, it’s so much more likely for Earth not to survive. Bob tells her that it’s a silly question. The only reason she picked out Earth is that it’s her home planet, which is because it survived the Cold War. If Earth died and, say, Pandora survived, she (or rather someone else, because it’s not going to be the same people) would be asking why Pandora survived the Cold War. There’s no coincidence.
Is this in support of or in opposition to the thesis of the post? Or am I being presumptuous to suppose that it is either?
Opposition.
The opposition is that the number of observers able to ask questions about royal siblings is not heavily correlated with the actual number of royal siblings historically present; while the number of observers able to ask questions about a lack of large thermonuclear exchanges is heavily correlated with the actual number of historical large thermonuclear exchanges.
I was waiting for the revelation that George VI actually had 6 siblings, and that therefore I didn’t exist, but it never came....
Haha. I did seriously consider it when that example was less central to the text, but ended up just playing it straight once it was interleaved, since I didn’t want to encourage second-guessing/paranoia.
Wikipedia mentions one case of anthropics (if you use the term loosely) making (or that could have made) an honest testable prediction, which in actuality was an argument from the measured abundance of carbon in the spectral lines, and from the prediction that carbon was required as a nuclear-reaction catalyst to make the Sun shine at such a low mass and temperature. The “we exist, therefore...” part was apparently added later, when Hoyle became a fan of anthropics. Still, if you want to steelman anthropics, that would be one way.
I have a better example:
The universe is largely composed of empty space. If you pick a random point in the universe, it would most likely be in the middle of nowhere, with not a single star visible. The probability of a given point not just being in a galaxy, not just being in a star system, not just being at a planet, but actually being at the surface of this planet is minuscule. And yet, here we are.
But of course we’re here. There’s nothing in space to ask the question.
Except for Boltzmann Brains.
A better analogy would be:
“I am confused that I know George VI had 5 siblings. That excludes me from possible worlds where George VI had 3 siblings, or 7 siblings, or no siblings. Each bit of evidence in my mind excludes me from half of all possible worlds. Why do I know so much?”
I don’t believe in anthropic reasoning.
More elaborately: Considering how confused it makes everyone, any arguments based on anthropic reasoning are very likely to miss taking that left turn at Albuquerque and end up in Outer Mongolia rather than arriving at true conclusions. So I don’t pay attention to it.
Of course you would say that. If you believed otherwise, you would have said something else.
True, but what does that have to do with anything? ;)
I think “don’t believe in” should be reserved for stronger forms than disbelief in the success of the typical application.
You can get anthropic-like effects without needing to vary populations (see for instance http://lesswrong.com/lw/3dy/solve_psykoshs_nonanthropic_problem/). So let’s see if we can do that with this example, and see if there really is anything different between royal siblings and nuclear war.
Start with 101 people, in dark rooms (you are one of them). These will be divided into two groups: one of 100 in the “no WW3” group, and a single person in the “WW3” group. Then the experiment organisers get some of George VI’s parents’ DNA, and clone him a sibling if a coin comes up heads. Everyone is (honestly) told the result of the coin flip.
This seems to have all the features of your example. Suppose you are told that there was no sibling clone. Then you can confidently say Probability(You exist [and know that George VI had exactly five siblings] | George VI had more than five siblings) = 0.
And yet the “anthropic” odds of being in the “WW3” group remain 1:100. So something genuinely different is going on.
Whether you can apply the same reasoning to the anthropic cases is what is debated between SIA, SSA and my favourite ADT.
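A Monte Carlo sketch of the setup (hypothetical code; the group sizes and fair coin are as described above) makes the independence explicit:

```python
import random

trials = 200_000
ww3_given_no_clone = no_clone = 0

for _ in range(trials):
    in_ww3_group = random.randrange(101) == 0  # you are 1 of the 101 people
    clone_made = random.random() < 0.5         # the sibling-clone coin flip
    if not clone_made:                         # condition on hearing 'no clone'
        no_clone += 1
        ww3_given_no_clone += in_ww3_group

print(ww3_given_no_clone / no_clone)  # ~1/101, i.e. the odds stay 1:100
```

Because the flip is independent of group assignment, conditioning on its result cannot move the 1:100 odds, however confidently it rules out the clone worlds.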
Rightly or wrongly, I don’t pay much attention to anthropics, but here’s another argument to throw into the pot to rebut the argument of part III:
Nuclear exchange (of the sort assumed) results in fewer observers around for me to be any of them. Whereas, more siblings for George VI leaves just as many observers around.
Except I don’t think this works. The answer to the question “why did X happen” should not depend on who is asking. Martian historians observing the Earth and asking “how did they avoid blowing themselves up?” are not in a position to answer the question anthropically(1), without going all the way to the absurdity of answering every question “why X?” with “otherwise, you would not be asking why X”.
(1) Or whatever the word should be when the observers are Martians. Perhaps just “anthropically” anyway.
I disagree. Most observers will have found that their planet was not blown up, but if they look at other planets, they will find that most of them blew up. As such, it would be surprising to find another planet that didn’t blow up, but not surprising that yours did not.
The difference in surprise is due to the unlikeliness of being from Earth. Being from Earth is evidence that it did not blow itself up. The aliens don’t have this evidence, so they’re more surprised than an Earthling.
I think this is confusing priors and posteriors. Since we have not blown ourselves up, the probability that we have not blown ourselves up is 1. That does not affect the answer to the question, “how likely was it in 1950 that we would?”.
Here’s another extreme and hypothetical problem (hence of little interest to me, but others may find themselves drawn to thinking about it). A physicist deduces from currently known physics the existence of a process whereby there is a calculable probability per unit of space-time volume of a spontaneously created singularity that will spread outwards at the speed of light, instantaneously turning everything it hits into a state incapable of the complexity required to support any sort of life. The probability works out to about 1 − 10^(-20) per Planck volume per Planck time. Should that suggest that his conclusion is wrong?
Anthropics seems to be built around the idea of using who you are as evidence. P(humans have not blown ourselves up|I am a human) is high, so long as you accept that “I am a human” is a meaningful observation.
For the way you phrased it, “at least one human exists” would give the same answer, but imagine that not everyone would die. There being at least one human is a given.
There are a few distinct possibilities to consider:
The physicist is a human, and his theory is wrong. (Moderate prior)
The physicist is a human, and his theory is right. (Exponentially tiny prior)
The physicist is a Boltzmann brain, and his theory is wrong. (Tiny prior)
The physicist is a Boltzmann brain, and his theory is right. (Very tiny prior)
If he’s right, he’s almost certainly a Boltzmann brain, since an actual human evolving in that universe requires far too many coincidences. But if he’s a Boltzmann brain, he has no reason to believe that theory, since it’s based on a chance hallucinated memory rather than actual experiments, and the theory is most likely wrong. And if the theory is wrong, it would be pretty surprising to hallucinate something like humanity, so he probably is a real person.
Thus, the most likely conclusion is that his theory is wrong, and he’s a human.
Yes! There are a lot of ways to remove the original observer from the question.
The example I thought of (but ended up not including): If all one’s credence were on simula(ta)ble (possibly to arbitrary precision/accuracy even if perfect simulation were not quite possible) models and one could specify a prior over initial conditions at the start of the Cold War, then one could simulate each set of initial conditions forward then run an analysis over the sets of initial conditions to see if any actionable causal factors showed up leading to the presence or absence of a nuclear exchange.
A problem with this is that whether one would expect such a set of simulations to show a nuclear exchange to be the usual outcome or not is pretty much the same as one’s prior for a nuclear exchange in the non-simulated Cold War, by conservation of expected evidence. But maybe it suffices to at least show that the selection effect is irrelevant to the causal factors we’re interested in. Certainly it gives a way to ask such questions that has a better chance of circumventing anthropic explanations in which one might not be interested.
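A purely illustrative skeleton of that pipeline (all factor names and the toy dynamics here are invented; a real attempt would need a prior over far richer world-states):

```python
import random

def sample_initial_conditions():
    # Toy prior over two made-up factors influencing a Cold War trajectory.
    return {"second_strike_capability": random.random(),
            "crisis_frequency": random.random()}

def simulate(conditions):
    # Stand-in dynamics: crises raise, and secure second strikes lower, the
    # chance that the simulated history ends in a nuclear exchange.
    p_exchange = 0.5 * conditions["crisis_frequency"] * (1 - conditions["second_strike_capability"])
    return random.random() < p_exchange

runs = [(c, simulate(c)) for c in (sample_initial_conditions() for _ in range(50_000))]

# Compare each factor's average value across the two outcomes, looking for
# actionable causal factors rather than selection effects:
for factor in ("second_strike_capability", "crisis_frequency"):
    exchange = [c[factor] for c, blew_up in runs if blew_up]
    peace = [c[factor] for c, blew_up in runs if not blew_up]
    print(factor, sum(exchange) / len(exchange), sum(peace) / len(peace))
```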
The trouble is, anthropic evidence works. I wish it didn’t, because I wish the nuclear arms race hadn’t come so close to killing us (and may well have killed others), and was instead prevented by some sort of hard-to-observe cooperation.
But it works. Witness the Sleeping Beauty Problem, for example. Or the Sailor’s Child, a modified Sleeping Beauty that I could go outside and play a version of right now if I wished.
The winning solution, the one that gives the right answer, is to use “anthropic” evidence.
If this confuses you, then I (seriously) suggest you re-examine your understanding of how to perform anthropic calculations.
In fact, what you are describing is not “anthropic” evidence, but just ordinary evidence.
I (think I) know that George VI had five siblings (because you told me so.) That observation is more likely in a world where he did have five siblings (because I guessed your line of argument pretty early in the post, so I know you have no reason to trick me.) Therefore, updating on this observation, it is probable that George VI had five siblings.
Is this an explanation? Sort of.
There might be some special reason why George VI had only five siblings—maybe his parents decided to stop at six children, say.
More likely, the true “explanation” is that he just happened to have five siblings, randomly. It wasn’t unusually probable; it just happened by chance that it was that number.
And if that is the true explanation, then that is what I desire to believe.
I don’t understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn’t feel like what I’d call ‘using anthropic evidence’. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)
Can you give a concrete example of what you see as an example of where anthropic reasoning wins (or would win if we performed a simple experiment)? If anything, experiments seem like they would highlight ambiguities that naïve anthropic reasoning misses; if I try to write ‘halfer’ and ‘thirder’ computer programs for Sleeping Beauty to see which wins more, I run into the problem of defining the payoffs and thereby rederive the dissolution ata gave in the linked post.
OK, well by analogy, what’s the “payoff structure” for nuclear anthropics?
Obviously, we can’t prevent it after the fact. The payoff we get for being right is in the form of information: a better model of the world.
It isn’t perfectly analogous, but it seems to me that “be right” is most analogous to the Thirder payoff matrix for Sleeping-Beauty-like problems.
I’m not sure if it’s because I’m Confused, but I’m struggling to understand if you are disagreeing, or if so, where your disagreement lies and how the parent comment in particular relates to that disagreement/the great-grandparent. I have a hunch that being more concrete and giving specific, minimally-abstract examples would help in this case.
I’m saying that if Sleeping Beauty’s goal is to better understand the world, by performing a Bayesian update on evidence, then I think this is a form of “payoff” that gives Thirder results.
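For concreteness, here is a minimal sketch (hypothetical code) of that payoff ambiguity: Beauty reports the same credence in Heads at every awakening, and we score the report with a proper scoring rule (Brier) either once per awakening or once per experiment:

```python
import random

def average_score(p_heads, per_awakening, trials=100_000):
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2  # Heads: one awakening; Tails: two
        brier = -(p_heads - (1.0 if heads else 0.0)) ** 2
        # Per-awakening scoring counts the report at every awakening;
        # per-experiment scoring counts it once per run.
        total += brier * (awakenings if per_awakening else 1)
    return total / trials

for p in (1/2, 1/3):
    print(f"p={p:.2f}  per-awakening: {average_score(p, True):.4f}"
          f"  per-experiment: {average_score(p, False):.4f}")

# Per-awakening scoring is maximized near p = 1/3 (the Thirder answer);
# per-experiment scoring is maximized at p = 1/2 (the Halfer answer).
```

So “be right” is not yet a payoff structure; one still has to decide how often the being-right gets counted.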
From If a tree falls on Sleeping Beauty...:
I still don’t think George VI having more siblings is an observer-killing event.
I assume you mean “know” in the usual way: not hundred-percent certainty, just that I saw it on Wikipedia and now it’s a fact I’m aware of. Then P(I exist with this mind state | George VI had more than five siblings) isn’t zero; it’s some number based on my prior for Wikipedia being wrong.
So my mind state is more likely in a five-sibling world than a six-sibling one, but using it as anthropic evidence would just be double-counting whatever evidence left me with that mind state in the first place.
Yep; in which case the anthropic evidence isn’t doing any useful explanatory work, and the thesis ‘Anthropics doesn’t explain X’ holds.
Anthropics fails to explain King George because it’s double-counting the evidence. The same does not apply to any extinction event, where you have not already conditioned on “I wouldn’t exist otherwise.”
If it’s a non-extinction nuclear exchange, where population would be significantly smaller but nonzero, I’m not confident enough in my understanding of anthropics to have an opinion.
(I had guessed the George VI thing would be about the friendship paradox as applied to parenthood.)
There’s a much simpler problem with Cold War-like anthropic reasoning: it requires unjustifiable assumptions about universe-selection priors. It’s generally true that with any Bayesian reasoning you run into trouble if you have a bad prior, in the sense of the map diverging from the territory on evidence updates, with testable predictions coming up false revealing when your prior is leading you astray. But with anthropic explanations we have only one data point—that we already exist—and are incapable of making testable predictions. So we are unable to know whether our prior is accurate, and are completely unable to differentiate between any of the consistent anthropic explanations that permit our existence.
Anthropic reasoning is useful in one sense only: ruling out universes where our existence is an absolute impossibility. Why are the physical constants set exactly right to allow the formation of atoms? Because without chemistry we couldn’t exist. Beyond that, invocation of the anthropic principle is useless, and it should raise red flags for any accomplished rationalist.
You can make an anthropic reasoning argument using any almost-wiped out ethnicity.
For example, Native Americans. Someone born into a Native American tribe is more likely to live in a world where Europe didn’t successfully colonize the Americas than in the current timeline. It’s the same anthropic reasoning, but the problem is that it’s fallacious to rest an entire argument on that one piece of evidence.
Unless I’m missing something, this version of anthropic reasoning seems to be making this argument: Pr(E | H) = Pr(H | E).
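The two quantities are related by Bayes’ theorem, so the conflation matters exactly when the prior and the likelihood come apart:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

A high likelihood P(E | H) only translates into a high posterior P(H | E) after being weighed by the prior P(H) against P(E); treating the two quantities as equal skips that weighing, which is the fallacy being pointed at.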
But in our timeline, the anthropic evidence is outweighed by much stronger regular-old evidence that Europe did, in fact, successfully colonize the Americas
...and that’s why anthropics doesn’t explain why the Cold War stayed cold.
Exactly. That is the point I was trying to make.