Am I misunderstanding, missing a joke, or did the overwhelming majority here consider the probability that the LHC could destroy the world non-negligible? After reading this article, I wound up looking up articles on collider safety just to make sure I wasn’t crazy. My understanding of physics told me that all the talk of LHC-related doomsday scenarios was just some sort of science-fiction meme. I was under the impression that artificial black holes would require energy levels comparable to the Big Bang, and that even then a micro black hole would be pretty low risk. (Reading the Wikipedia article further, I see that FHI was involved in raising concerns about the LHC, which is the closest thing to an explanation for this discussion I’ve found so far.)
I’m actually kinda concerned about this, since if the discussion on this page is taking LHC risk seriously, then either I or LW has a serious problem modeling reality. This wouldn’t be in the category of “weird local culture”; cryonics involves a lot of unknowns and most LWers notice this, and UFAI actually makes much more sense as an existential risk, since an unfriendly transhuman intelligence would actually be dangerous… but there were plenty of knowns that could be used to predict the LHC’s risk, and they all pointed toward the risk being infinitesimal.
If, on the other hand, this was some bit of humor playing on pop-sci memes, used to play with the anthropic principle and quantum suicide, then oops.
The question “how many LHC failures is too many?” is the question “how negligible was your prior on the LHC being dangerous, really?” Is it low enough to ignore 10 failures? 100? 1000? Do you have enough confidence in your understanding of physics to defy the data that many times?
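To make the arithmetic behind that question concrete, here is a minimal sketch, assuming a purely illustrative prior of 1e-20 on “the LHC is dangerous” and a 0.1 chance that any given run attempt fails for mundane engineering reasons (both numbers are my own placeholders, not anything stated in the post):

```python
import math

# Sketch of the update implied by "how many failures is too many?"
# Placeholder assumptions: prior 1e-20 that the LHC is world-destroying,
# 0.1 chance of an ordinary engineering failure per run attempt.
# Under the "dangerous" hypothesis (plus anthropic selection), every
# attempt we survive to observe has failed.

def posterior_dangerous(prior, mundane_failure_rate, n_failures):
    """Posterior P(dangerous) after n consecutive failed run attempts."""
    # Each observed failure multiplies the odds by 1 / mundane_failure_rate;
    # work in log-odds so large n doesn't overflow.
    log_odds = math.log(prior / (1 - prior)) + n_failures * math.log(1 / mundane_failure_rate)
    if log_odds > 700:  # exp() would overflow; the posterior is effectively 1
        return 1.0
    odds = math.exp(log_odds)
    return odds / (1 + odds)

for n in (10, 100, 1000):
    print(n, posterior_dangerous(1e-20, 0.1, n))
```

With these made-up numbers, ten failures still leave the posterior around 10⁻¹⁰, while a hundred push it to roughly 1; the number of failures you would shrug off is one way of measuring how small your prior really was.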
Ok. Somehow it came across as taking the idea of LHC risk more seriously than is rational. I’m not sure why it didn’t feel hypothetical enough. (I should have been tipped off when Eliezer didn’t mention the obvious point that the LHC would lose funding if the failures became too numerous. I’d take 1000 LHC failures as evidence that my model of how scientists get funding is broken before concluding that the LHC is actually a doomsday weapon.)
The idea is that the risk is infinitesimal but you want to put an approximate number on that using a method of imaginary updates—how much imaginary evidence would it take to change your mind?
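Inverting the same calculation as above gives the “how much imaginary evidence” number directly; again, the prior and failure rate here are my own placeholders, used only to illustrate the idea:

```python
import math

def failures_to_change_mind(prior, mundane_failure_rate, target_posterior=0.5):
    """How many all-too-convenient failures before P(dangerous) reaches target_posterior?"""
    prior_log_odds = math.log(prior / (1 - prior))
    target_log_odds = math.log(target_posterior / (1 - target_posterior))
    evidence_per_failure = math.log(1 / mundane_failure_rate)
    return (target_log_odds - prior_log_odds) / evidence_per_failure

print(failures_to_change_mind(1e-20, 0.1))  # about 20 failures to reach even odds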
That makes sense. I made a similar misinterpretation on a different post around the same time I read this one, so putting the two together makes me pretty confident I was not thinking at my best yesterday. (Either that, or my best is worse than I usually believe.)
Not both?