I understand you as saying that cosmic ray collisions that happen all the time are very similar to the sort of collisions at CERN, and since they don’t cause apocalypses, CERN won’t either. And that because the experiment has been tried before millions of times in the form of cosmic rays, this “CERN won’t either” isn’t on the order of “one in a million” or “one in a billion” but is so vanishingly small that it would be silly to even put a number to it.
Tell me if I understood you correctly and if I did I will try to rephrase my post and my objections to what you said so they are more understandable.
Not millions of times. Not even just billions of times.
From a back-of-the-envelope calculation, they’ve been tried >10^16 times a year.
For the past 10^9 years.
That’s 10^25 times.
And that’s probably several orders of magnitude low.
So yes, treating it as something with a non-zero probability of destroying the planet is silly.
Especially because every model I’ve seen that says it’d destroy the planet would also have it destroy the sun. Which has 10^4 times the surface area of the Earth, and would have correspondingly more cosmic ray collisions.
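(A minimal sketch of that arithmetic, taking the >10^16-per-year rate, the 10^9 years, and the 10^4 surface-area ratio above as the assumptions:)

```python
# Back-of-the-envelope count of prior "natural LHC experiments".
# All inputs are the rough assumptions stated above, not measured values.
collisions_per_year = 1e16   # LHC-scale cosmic ray collisions with Earth, per year
years = 1e9                  # time window considered

earth_collisions = collisions_per_year * years
print(f"Earth alone: ~{earth_collisions:.0e} collisions")       # ~1e25

sun_area_ratio = 1e4         # Sun's surface area relative to Earth's
print(f"Sun: ~{earth_collisions * sun_area_ratio:.0e}")         # ~1e29
```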
Read page 848 of http://arxiv.org/ftp/arxiv/papers/0912/0912.5480.pdf
I’m guessing you weren’t aware of all the technical intricacies of this argument nor the necessity of bringing in white dwarf stars to clinch it. Now, it turns out you got lucky, because white dwarf stars do end up clinching the argument. But if there’s a facet of the argument you don’t understand, or there’s even a tiny possibility there’s a facet of the argument you don’t fully understand, you don’t go saying there’s zero probability.
Voted up, because you raise a good point.
Although I had considered the fact that the LHC reactions are closer to Earth-stationary, I hadn’t actually bothered to find out how likely multi-particle production from 10^12 eV+ cosmic rays would be, and I wouldn’t even be sure how to fold that in so as to find out how likely ~Sol-stationary production events are, starting from very-high-energy cosmics.
And that this puts a strong upper bound on the chances.
If you multiplied it by the next thousand generations of humans on earth you wouldn’t get 1E-6 of a human life equivalent.
So if you can stop using huge numbers like 1E-9, please do proceed, because you do understand the numbers for calculating costs in human-life equivalents better than I do!
My problem with what you’ve been writing is not your calculations, but the numbers you’re using. Even if the cost were 6E12 lives, it’s still not worth actually worrying about. You’re demonstrating a comprehensive lack of actual domain knowledge—you literally don’t know the thing you’re talking about—and appear to be trying to compensate for that by leveraging what you do know. This tendency is natural, and it’s the usual first step in trying to quickly understand a new thing, but it doesn’t tend to give sensible and useful results, and it may hurt the heads of those with even very slightly more domain knowledge. (When Kurzweil started talking about the brain and genome in terms of computer science, Myers’ response is best understood as “AAAAAAAAAAA STOP BEING STUPID DAMMIT”.)
On a related issue in the domain, here’s a writeup I liked on the thorny problem of how the hell the laymen sitting on the bench in a court might try to deal with such issues.
As far as I can tell, everything Yvain has said on this topic is correct. In particular, there is a further possible assumption under which it is not the case that cosmic ray collisions with Earth and the Sun prove LHC black holes would be safe, as you can find spelled out in section 2.2 of this paper by Giddings and Mangano. As Yvain pointed out in a different comment, to plug this hole in the argument requires doing some calculations on white dwarfs and/or neutron stars to find a different bound, which is what Giddings and Mangano spend much of the rest of the paper doing. These calculations, as far as I know, were not actually published until 2008 -- several months after the LHC was originally supposed to go online. It’s my impression that both before and after this analysis was done, most of those arguing the LHC is safe just repeated the simplified argument that had the hole in it; see e.g. Kingreaper in this thread. And while I’d put a very low probability on these calculations being wrong and a very low probability on the LHC destroying the world even if the calculations were wrong, it’s this sort of consideration and not 1 in 10^25 coincidences that ends up dominating the final probability estimate. Then there were all these comments about the LHC causing the end of the world being as unlikely as the LHC producing dragons etc—which if taken literally seem annoyingly wrong because of how the end of the world, unlike dragons, is a convergent result of any event sufficiently upsetting to the physical status quo. So while (just because of the multiple unlikely assumptions required) at any point and especially after the Giddings/Mangano analysis a reasonable observer would have had to put an extremely low probability on existential risk from LHC black holes, the episode still makes me update against trusting domain experts as much on questions that are only 90% about their domain and 10% about some other domain like how to interpret probabilities.
Because of conservation of both momentum and energy, particles coming out of the LHC are no slouch either. So although under extremely hypothetical conditions, stable black holes can exist without the sun being destroyed by cosmic rays, even then you need to add even more hypotheticals to make the LHC dangerous.
Note that their very hypothetical scenario is already penalized by many orders of magnitude by Occam’s razor. I’m not sure what the simplest theory is that doesn’t have black holes radiate but does have pair production near them, but it’s probably really complicated. And then these guys push it even further by requiring that these black-hole-like objects not destroy neutron stars either!
I certainly don’t disagree that there are a number of unlikely hypotheticals here that together are very improbable.
My impression from reading had been that, while the typical black hole that would be created by LHC would have too high momentum relative to Earth, there would be a distribution and with reasonably high probability at least one hole (per year, say) would accidentally have sufficiently low momentum relative to Earth. I can’t immediately find that calculation though.
If P(black holes lose charge | black holes don’t Hawking-radiate) is very low, then it becomes more reasonable to skip over the white dwarf part of the argument. Still, in that case, it seems like an honest summary of the argument would have to mention this point, given that it’s a whole lot less obvious than the point about different momenta. G & M seem to have thought it non-crazy enough to devote a few sections of paper to the possibility.
Even producing a black hole per year is doubtful under our current best guesses, but if one of a few extra-dimension TOEs is right (possible) we could produce them. So there’s sort of no “typical” black hole produced by the LHC.
But you’re right, you could make a low-momentum black hole with some probability if the numbers worked out. I don’t know how to calculate what the rate would be, though—it would probably involve gory details of the particular TOE. 1 per year doesn’t sound crazy, though, if they’re possible.
I don’t know if you’re on board with the Bayesian view of probability, but the way I interpret it, probability is a subjective level of confidence based on our own ignorance. In “reality”, the “probability” that the LHC will destroy the Earth is either 0 or 1: either it ends up destroying the Earth or it doesn’t—and in fact we know it turned out to be 0. What we mean when we say “probability” is “given my level of ignorance in a subject, how much should I expect different scenarios to happen”.
So when I ask “what is your probability of the LHC destroying the world”, I’m asking “Given what you know about physics, and ignoring that both of us now know the LHC did not destroy the world, how confident should you have been that the LHC would destroy the world?”
I’m not a particle physicist, and as far as I know neither are you. Both of us lack comprehensive domain knowledge. Both of us have only a medium level of broad understanding of the basic concepts of particle physics, plus a high level of trust in the conclusion that professional particle physicists have given.
But I’m doing what one is supposed to do with ignorance—which is to not say I’m completely, totally sure about a subject I’m ignorant of, to a certainty of greater than a billion to one. Unless you are hiding a Ph.D. in particle physics somewhere, your ignorance is not significantly less than my own, yet you are acting as if you had knowledge beyond that of even the world’s greatest physicists, who are hesitant to attach a probability any more extreme than fifty million to one to that estimate.
This is what I meant by offering you the bet—trying to show that you were not, in fact, so good at physics that you could make billion to one probability estimates about it. And this is why I find your argument that I’m ignorant to be such a poor one. Of course I’m ignorant. We both are. But only one of us is pretending to near absolute certainty.
That doesn’t seem to be the case when considering quantum mechanics. If, since the LHC was run, we had counterfactually accrued evidence that a significant proportion of those Many Worlds were destroyed, then it would be rather confusing to say that the probability turned out to be 0. This can mostly be avoided by being particularly precise about what we are assigning probabilities to. But once we are taking care to be precise, it is clear that the thing there was ‘ignorance’ about and the thing we now know to be 0 are not the same thing. (i.e. an omniscient being would not necessarily have assigned 0 prior to the event.)
And here I was expecting you to actually run the numbers.
I’m not a particle physicist, but I do know quite a bit more about the actual numbers to start a calculation from than you do, because I bothered finding them out, and your citation so far appears to be someone else who didn’t bother finding them out. This is what I mean by “reasoning from ignorance” and “even very slight domain knowledge”.
You did run your numbers assuming events at the LHC’s maximum energy and greater happen all the time, right?
The probability of the sun not coming up tomorrow is greater than 0, but in any practical sense I’d be a drooling lackwit to waste time calculating it.
I appreciate you’re offering a teachable moment about probability, but you really, really aren’t saying anything useful or sensible about the LHC, as you claimed to be.
So far the only number introduced here has been Rees’ “one in fifty million”. You’ve consistently avoided giving a number, using only the “but there’s still a chance” thing, which in my interpretation you’re using diametrically against its intended meaning (the intended meaning being that you can’t just use a binary “there is a chance” versus “there’s not a chance”; you actually have to worry about what the chance is). The only thing you’ve said that suggests any level of familiarity with the subject is mentioning the cosmic ray collisions, which were all over the newspapers during the relevant time period, and most of your comments make me think you’re not familiar with the various arguments that have been put forward that the LHC collisions are in fact different from cosmic ray collisions.
But I don’t actually think that matters. Assuming the prior for the LHC destroying the world before your cosmic-ray arguments and whatever other arguments you want to offer is non-negligible, you’re saying you’re certain, to within (your probability of the LHC destroying the world, divided by that prior), that all your arguments are correct. Since you seem willing to give arbitrarily low probabilities, I’m sure we could fiddle with the numbers so that you’re saying there’s less than a one in a million chance there’s any flaw in your argument, or that you’re applying your argument wrong, or that you missed some good reason why LHC collisions don’t have to be different from cosmic ray collisions, or that the energy of cosmic ray collisions has been consistently overestimated relative to the energy of LHC collisions, or that you’re just having a really bad day and your brain is tired and you don’t realize that the argument doesn’t prove what you think it proves. I believe you’re very smart, and I realize the prior is low, and I realize the arguments against the LHC destroying the world are very good, but predicting a novel situation that some smart people disagree about, in a field you don’t understand, to a level greater than one in a billion is just a bad idea.
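(To make that implicit bound concrete, here is a minimal sketch with hypothetical numbers; the 1e-6 prior and 1e-15 final figure are purely illustrative, not anything anyone has claimed:)

```python
# Illustrative only: a non-negligible prior before any safety arguments,
# and an arbitrarily low final estimate after them (both hypothetical).
prior = 1e-6
final = 1e-15

# If any flaw in your reasoning would throw you back to the prior, then
# final >= prior * P(flaw), so your final number implicitly claims:
p_flaw_max = final / prior
print(f"P(any flaw in my whole argument) <= {p_flaw_max:.0e}")   # 1e-09
```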
The “but there’s still a chance” principle only means that you shouldn’t act as if you can keep on believing your argument even when the chance grows ridiculously low. It doesn’t mean that you should never keep a tiny portion of probability mass on “or maybe I’m missing something” to compensate for unknown unknowns.
This discussion is not getting anywhere, so I will let you have the last word and then bow out unless you want to continue by private message.
The LHC can be safe despite the argument for its safety being flawed.
I looked back through and see that I indeed did not actually give a number. My sincere apologies for this. Not greater than 1 in 3E22.
(1E31 × 2E17 = 2E48 LHC-level collisions since the formation of the Universe, and yet everything we can see in the sky is still there. 3E22 LHC-level collisions, or ~1E6 LHC experimental lifetimes, with Earth itself in 4.5E9 years. “There is no indication that any of these previous ‘LHC experiments’ has ever had any large-scale consequences. The stars in our galaxy and others still exist, and conventional astrophysics can explain all the astrophysical black holes detected. Thus, the continued existence of the Earth and other astronomical bodies can be used to constrain or exclude speculations about possible new particles that might be produced by the LHC.” They do commit the fatal error of assuming that a negligible probability means “impossible”, so therefore the paper should of course be ignored.)
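(A quick self-consistency check of those figures; the comment does not state the units of the 1E31 and 2E17, so this only verifies the arithmetic:)

```python
# Figures quoted above, taken as givens (units unstated in the comment).
print(1e31 * 2e17)      # 2e48: LHC-level collisions since the Universe formed
print(3e22 / 1e6)       # ~3e16: collisions per "LHC experimental lifetime"
print(3e22 / 4.5e9)     # ~7e12: collisions with Earth per year of its history
```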
Sorry, I know I said I’d stop, and I will stop after this, but that 3E22 number is just too interesting to leave alone.
The last time humanity was almost destroyed was about 80,000 years ago, when a volcanic eruption reduced the human population below 1,000. So say events that can destroy humanity happen on average every hundred thousand years (conservative assumption, right?). That means the chance of a humanity-destroying event per year is 1/100,000. Say 90% of all humanity-destroying events can be predicted with at least one day’s notice by e.g. asteroid monitoring. This leaves hard-to-detect asteroids, sudden volcanic eruptions, weird things like sudden methane release from the ocean, et cetera. So once per 1 million years we get an unexpected humanity-destroying event. That means the “background rate” of humanity-destroying events is 1/300 million days.
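(The same estimate, step by step, with the assumed numbers above:)

```python
# Background rate of unexpected humanity-destroying events, using the
# assumed inputs from the paragraph above.
events_per_year = 1 / 100_000     # assumed: one potential extinction event per 100,000 years
unpredicted = 0.10                # assumed: 90% arrive with at least a day's notice

per_day = events_per_year * unpredicted / 365
print(f"~1 in {1 / per_day:,.0f} days")   # ~1 in 365,000,000 days: order 1/300 million
```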
Suppose Omega told you, the day before the LHC was switched on, that tomorrow humankind would be destroyed. If 1/3E22 were your true probability, you would say “there’s still vastly less than one in a billion chance the apocalypse has anything to do with the LHC, it must just be a coincidence.” Even if you were the LHC project coordinator, you might not even bother to tell them not to switch on the project, because it wouldn’t be worth the effort it would take to go to the telephone.
Let’s look at it a different way. Suppose a scientist has a one in a thousand chance of having a psychotic break. Now suppose the world’s top physicist, so brilliant as to be literally infallible as long as he is sane, comes up with new calculations that say the LHC will destroy the world. Suppose you ask the world’s best psychiatrist, who also is literally never wrong, whether the physicist is insane, and she says no. If your probability is truly 1/3E22, then it is more likely that both the physicist and the psychiatrist have simultaneously gone insane than that the physicist is correct; what is more, even if you have no other evidence bearing on the sanity of either, your probability should still be less than one in a trillion that the LHC will destroy the Earth.
There was some discussion on LW a while back about how it might be a prediction of anthropic theory that if the LHC would destroy the world, improbable occurrences would prevent the LHC from working. Suppose the LHC is set up so well that the only thing that could stop it from running is a direct asteroid hit to Geneva, such that if turning the LHC on would destroy the world, we would observe an asteroid strike on Geneva with probability 1. Let’s say a biggish asteroid hits the Earth about once every thousand years (the last one was Tunguska), and that each one affects one one-hundredth of the Earth’s surface (Tunguska was much less, but others could be bigger). That means there’s a 1/30 million chance of a big asteroid strike on Geneva each day. If your true probability is 1/3E22, you could try to turn the LHC on, have an asteroid strike Geneva the day before, and still have less than a one in a billion chance that the asteroid was anything other than a random asteroid.
In fact, all of these combined do not equal 3E22, so if the world’s top infallible physicist agreed the LHC would destroy the world and was certified sane by an infallible psychiatrist, AND an asteroid struck Geneva the last time you planned to turn the LHC on, AND you know the world will end the day the LHC is activated, then if your real probability was 1 in 3E22 you should now (by my calculations) think that, on balance, there’s about a one in three chance the LHC would destroy the world.
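(A minimal odds-form sketch of that combination; the three likelihood ratios are my rough readings of the scenarios above, nothing more precise:)

```python
# Bayes in odds form: posterior_odds = prior_odds * product of likelihood ratios.
prior_odds = 1 / 3e22    # "not greater than 1 in 3E22"

# Rough likelihood ratios, one per piece of evidence above:
lr_omega     = 3e8       # end of world tomorrow, vs. the 1/300-million-day background rate
lr_physicist = 1e6       # infallible physicist + psychiatrist, vs. both insane (1/1000 each)
lr_asteroid  = 3e7       # asteroid hits Geneva the day before, vs. the 1/30-million-day chance

posterior_odds = prior_odds * lr_omega * lr_physicist * lr_asteroid
print(f"posterior odds ~{posterior_odds:.1f}")                                 # ~0.3
print(f"posterior probability ~{posterior_odds / (1 + posterior_odds):.2f}")   # ~0.23
```

Odds of roughly 0.3 are the “about a one in three chance” ballpark described above.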
This is why I don’t like using numbers like 3E22 as probabilities.
This seems in conflict with http://en.wikipedia.org/wiki/Toba_catastrophe_theory
The estimates there range from 2,000 to 20,000 individuals.
The population may not have been significantly bigger before the eruption:
http://www.physorg.com/news183278038.html

Scientists from the University of Utah in Salt Lake City in the U.S. have calculated that 1.2 million years ago, at a time when our ancestors were spreading through Africa, Europe and Asia, there were probably only around 18,500 individuals capable of breeding (and no more than 26,000).
A volcanic eruption is obviously much less likely to threaten humanity’s existence today than when there were only a handful of us in the first place.
Brilliant. Is there any chance I could persuade you to present this as a top level post on the front page? This is a comment I expect to reference when related subjects come up in the future.
I haven’t yet entered this particular discussion, but it is of interest to me, so I hope you won’t mind persisting a bit longer, with a different interlocutor.
May I ask just what your lower bound is on probability estimates?
I can’t, really, because it’s context dependent. If the question was “What is the probability that a program which selects one atom at random from all those in the universe (and is guaranteed by Omega genuinely random) picks this particular phosphorus atom here on the tip of my finger”, then my probability would be much less than 1 in 3E22.
Likewise, “destroy the Earth” is a relatively simple occurrence—it just needs a big enough burst of energy or mass or something. If it’s “What is the probability that the LHC will create a hamster in a tutu on top of Big Ben at noon on Christmas Day singing ‘Greensleeves’ while fighting a lightsaber duel with the ghost of Alexander the Great”, then my probability would again be less than 1 in 3E22 (at least before I formed this thought—I don’t know if having said it aloud makes the probability that malevolent aliens will enact it go above 1/3E22 or not).
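(For scale, under the standard order-of-magnitude estimate of ~10^80 atoms in the observable universe, the atom question lands around 10^-80, far below 1 in 3E22:)

```python
atoms_in_universe = 1e80            # standard order-of-magnitude estimate
p_named_atom = 1 / atoms_in_universe
print(p_named_atom < 1 / 3e22)      # True, by nearly 60 orders of magnitude
```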
Thanks for the clarification; that’s quite reasonable.
I’ll note, however, that your own arguments (the world’s greatest physicist certified sane by the world’s greatest psychiatrist...) still apply!
The point being that our “counterintuitiveness detector” shouldn’t get to automatically override calculated probabilities, especially in situations that intuition wasn’t designed to handle.
As for the LHC, it’s worth pointing out that potential benefits also have to be factored into the expected utility calculation, a fact which I don’t think I’ve seen mentioned in the current discussion.
Yvain: [...] “What is the probability that the LHC will create a hamster in a tutu on top of Big Ben at noon on Christmas Day singing ‘Greensleeves’ while fighting a lightsaber duel with the ghost of Alexander the Great”, then my probability would again be less than 1 in 3E22 (at least before I formed this thought—I don’t know if having said it aloud makes the probability that malevolent aliens will enact it go above 1/3E22 or not).
komponisto: Thanks for the clarification; that’s quite reasonable.
^Awesome :-)
Or in a slightly different variant of the experiment, if your real probability is 1/3E22, if someone reliably told you that in a year from now you’d assign a probability of 1/3E12, you’d have to conclude it was probably because your rationality was going to break down (assuming the probability of such breakdowns isn’t too extremely low).
Okay, now I’m confused, or misunderstanding you.
Starting this discussion, I gave a probability of one in a million. After reading up on the subject further, I found a physicist who said one in fifty million, and am willing to bow to his superior expertise.
Was there only a one in fifty chance my probability would change this much? This doesn’t seem right, because I knew going in that I knew very little about the subject and if you’d asked me whether I expected my probability to change by a factor of at least fifty, I would have said yes (though of course I couldn’t have predicted in which direction).
It seems to me it would be fine for David to believe with high probability that he would get new evidence that would change his probability to 1/3E12, as long as he believed it equally possible he’d get new evidence that would change it to 1/3E32.
A 1/3e22 probability means you believe there’s a 1/3e22 chance of the event happening.
If you have, for example a 1/1e9 chance of finding evidence that increases that to 1/3e12, then you have a 1/1e9*1/3e12 chance of the event happening.
Which is 1/3e21.
So, in order to be consistent, you must believe that there is, at most, a 1/1e10 chance of you finding evidence that increases the probability to 1/3e12.
At which point, the probability of losing your rationality is obviously higher.
Yes. Yvain’s 1 in 50 million example, on the other hand, is fine, because the probability went down. In a more extreme example, it could have had a 50-50 chance of going down to 0 (dropping by a factor infinity) as long as there had been a 50-50 chance of it doubling. Conservation of expected evidence.
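(Both points can be checked directly: the consistency bound from the comment above, and conservation of expected evidence for the 50-50 example:)

```python
# Consistency bound: if P = 1/3e22 now, the chance q of later finding
# evidence that would move you to 1/3e12 must satisfy
#   q * (1/3e12) <= 1/3e22,  i.e.  q <= 1e-10.
p_now, p_later = 1 / 3e22, 1 / 3e12
print(f"q <= {p_now / p_later:.0e}")       # 1e-10

# Conservation of expected evidence: a 50-50 chance of dropping to 0 is
# consistent as long as the other branch doubles the probability, since
# the expected posterior equals the prior.
p = 1 / 50e6
print(0.5 * 0 + 0.5 * (2 * p) == p)        # True
```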
On the one hand, everything you say would be true, if we were assigning consistent probabilities.
On the other hand, I’ve never been able to assign consistent probabilities over the LHC and knowing this hasn’t helped me either.
In several places I’d say you tilt your probability estimates in the direction most favorable to your argument. For example, you underestimate how much evidence the meteorite would give: 1/100th of the earth’s surface destroyed every 1000 years is far too much. There have been 0 humanity-wiping-out events so far, over 1 million-ish years; this does not work out to P=10^-5. In estimating based off of expert opinion you load the intuitive die with “the calculations say” rather than “the physicist says”; calculations are either right or wrong.
I agree that the estimate of 10^-22 is likely too low, but I have a negative reaction to how you’re arguing it.
The post you’re responding to didn’t use 3E22 as a probability. It gave 3E22 as a number of previous experiments.
Now, as the link you cited in this response shows, they’re not necessarily quite identical experiments (although some might result in identical experiments).
But you’re attacking an error which was not made.
“1 in 3E22” was surely a probability. Yvain made a typo at the end of his comment.
Ah, I was mistaken. For some reason I didn’t notice the link.
Are there any alternative colour-schemes for this site? Links seem to show up poorly on blue-backgrounded posts.
I would like you to consider turning your comment into a top-level post. Thanks.
So what was your answer to the original question of if the LHC should be switched on? Your citation to Lifeboat is saying they really think it shouldn’t. I presumed you had posted this because you agreed.
That is: you have numbers yourself now. Are those numbers strong enough for you to seriously advocate the LHC should be switched off and kept off?
If “yes”, what are you doing about it? If “no”, then I don’t understand the point of all the above.
I was kinda hoping you wouldn’t ask that. This whole thing came up because I said it was “reasonable” to worry about the LHC, and I stick to that. But the whole thing seems like a Pascal’s Mugging to me, and I don’t have a perfect answer to that class of problem.
I don’t think it should be switched off now, because its failure to destroy the world so far is even better evidence than the cosmic ray argument that it won’t destroy the world the next time it’s used. But if you’d asked before it was turned on? I guess I would agree with Aleksei Riikonen’s point in one of the other LW threads that this is really the sort of thing that could be done just as well after the Singularity.
But I also agree with Eliezer (I could have avoided this entire discussion if I’d just been able to find that post the first time I looked for it when you asked for a citation!) that in reality I wouldn’t lose sleep over it. Basically, I notice I am confused, and my only objection to you was the suggestion that reasonable people couldn’t worry about it, not that I have any great idea how to address the issue myself.
You mean, asking the whole actual real-life question at hand: whether the LHC is too risky to run.
“Is it reasonable to think X?” is only a useful question to consider in relation to X as part of the actual discussion of X. It’s not a useful sort of question in itself until it’s applied to something. Without considering the X itself, it’s a question about philosophy, not about the X. If you’re going to claim something about the LHC, I expect you to be saying something useful about the LHC itself.
Given you appear to regard application as a question you’d rather not have asked, what expected usefulness should I now assign to going through your comments on the subject in close detail, trying to understand them?
(I really am going WHAT? WHAT THE HELL WAS THE ACTUAL POINT OF ALL THAT, THAT WAS WORTH BOTHERING WITH? If you’re going to claim something about the LHC, I expect you to be saying something useful about the LHC itself.)
Yvain was, I suspect, trying to illustrate failures in your thought, rather than in your conclusion.
If you see someone arguing that dogs are mammals because they have tongues, you may choose to correct them, despite agreeing with their conclusion. Especially if you’re on a board related to rationality.
You don’t think an argument that something which you thought was certain is actually confusing is valuable? If an agnostic convinced a fundamentalist that God’s existence was less cut-and-dried obvious than the fundamentalist had always thought, but admitted ey wasn’t really sure about the God question emself, wouldn’t that still be a useful service?
This reads to me as an admission that you were not, nor were you intending to, at any point say anything useful or interesting about the LHC. This suggests that if you want people not to feel like you’re wasting their time and leading them on a merry dance rather than talking about the apparent topic of discussion (which is how I feel now—well and properly trolled. Well done.) then you may want to pick examples where you don’t have to hope no-one ever asks “so what is the point of all this bloviating?”
You asked for a citation for my mention that worrying about the LHC was “reasonable”. I interpreted “reasonable” to mean “there are good arguments for not turning it on”. I am not sure whether I fully believe those arguments and I am confused about how to deal with them, but I do believe there are good arguments and I presented them to you because you asked for them. I didn’t enjoy spending a few hours defending an assertion I made that was tangential to my main point either.
Aside from the whole “ability to think critically about probabilities of existential risk will probably determine the fate of humankind and all other sapient species” thing, no, it doesn’t have any practical implications. But this is a thread about philosophy on a philosophy site, and you asked a philosophical question to a former philosophy student, so I don’t think it’s fair to expect me to anticipate that you wanted to avoid discussions that were purely philosophical.
Seriously, and minus the snark, it’s possible I don’t understand your objections. I promise I was not trying to troll you and I’m sorry if you feel like this has wasted your time.
1/3E22 seems hugely overconfident.
Voted up for excellent points all around.
I have in fact completely given up on giving probability estimates more extreme than +/-40 decibels, or 50-60 in some extreme (and borderline trivial) cases. I haven’t actually adjusted planning to compensate for the possible loss of fundamental assumptions, though, so I may be doing it wrong… On the gripping hand, however, most of the probability mass in the remaining options tends to be impossible to plan for anyway.
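(A sketch of the decibel convention I assume here, 10·log10 of the odds; under a +/-40 dB cap the most extreme probabilities you allow yourself are about 1e-4 and 1 - 1e-4:)

```python
import math

def to_decibels(p):
    """Log-odds of p in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

def from_decibels(db):
    """Probability corresponding to db decibels of log-odds."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

print(from_decibels(-40))     # ~9.999e-05, i.e. about 1e-4
print(from_decibels(40))      # ~0.9999
print(to_decibels(1 / 3e22))  # ~-225 dB: far outside a +/-40 dB cap
```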