I nominate this for one of the weakest posts ever, and not because the LHC has been operating normally for some time now (if not at full luminosity). It’s weak because it privileges a hypothesis: specifically, Everettian reasons for the sequence of failures over the many more likely ones that machinery this complex (and this easy to sabotage) might have suffered.
First of all, this has nothing to do with the Everett interpretation, and failures of the LHC are evidence that its successful start would cause the end of the world in the same sense that a coin toss resulting in “heads” is evidence that “tails” would kill you. (If you toss a coin a million times while thoroughly investigating and preventing any cause of significant bias, and it always comes up “heads”, this starts looking like a compelling argument to stop tossing the coin; maybe “tails” triggers a gun.)
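To make the “million heads” intuition quantitative, here is a minimal sketch of the Bayes factor an observer computes after conditioning on being alive to see the outcome (the log-odds framing and the example prior are my own illustration, not from the comment):

```python
N = 1_000_000  # the comment's hypothetical million tosses

# Likelihoods, conditioning on being alive to observe the sequence.
# If "tails" is lethal, a surviving observer has, by construction,
# seen nothing but heads:
log2_p_given_lethal = 0.0   # log2 P(all heads | alive, lethal) = log2(1)

# If "tails" is harmless, all-heads happens only by luck:
log2_p_given_safe = -N      # log2 P(all heads | alive, safe) = log2(2**-N)

# Work in log space: 2**-1_000_000 underflows an ordinary float.
log2_bayes_factor = log2_p_given_lethal - log2_p_given_safe

# Even a very sceptical prior (say 2**-100 odds on lethal-tails)
# is swamped by a million bits of evidence:
log2_prior_odds = -100.0
log2_posterior_odds = log2_prior_odds + log2_bayes_factor

print(f"Bayes factor for lethal-tails: 2**{log2_bayes_factor:.0f}")
print(f"Posterior odds for lethal-tails: 2**{log2_posterior_odds:.0f}")
```

A single failure, by contrast, is worth roughly one bit, which is why treating one breakdown as evidence for the world-ending hypothesis was privileging that hypothesis.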
Privileging a hypothesis means assigning it a probability that is too high. The post was actually responding to people who were privileging that hypothesis after just one failure, and it considered the quantitative nature of such probability judgments: is there a number of failures that constitutes sufficient evidence for this hypothesis to become plausible? How many failures are too many? At the point where you do have enough evidence, the hypothesis is no longer unfairly privileged; it is pushed up by the strength of the evidence and thereby distinguished from alternative explanations.
The anthropic effect can be distinguished from too-complicated-machinery or sabotage explanations once people have worked long enough on resolving the technical and security difficulties. Suppose people had been trying to make the LHC and similar machines work for 1000 years and never succeeded, all the while having a very clear theoretical understanding of how they work, perhaps even succeeding in running certain experiments, but with the machinery always failing whenever they tried to run certain other kinds of experiments. This would be the kind of miracle where “complex machinery” no longer works as a feasible explanation, while the anthropic principle seems to fit.
We didn’t observe an impossible number of LHC failures, so the hypothesis didn’t become more probable, but the general idea (which has nothing to do with the LHC) is interesting.
First of all, this has nothing to do with the Everett interpretation
If you toss a coin a million times while thoroughly investigating and preventing any cause of significant bias, and it always comes up “heads”, this starts looking like a compelling argument to stop tossing the coin; maybe “tails” triggers a gun.
First, how do you reconcile your second statement with the first one? I must be missing something. Second, if anthropics saves us from ourselves via quantum immortality, that’s a good reason to be less careful, not to stop tossing the coin.
Suppose people had been trying to make the LHC and similar machines work for 1000 years and never succeeded
I’m sure there are plenty of examples of problems which turned out to be much harder than they first appeared but were eventually solved (Fermat’s Last Theorem? Human flight?), or will eventually be solved (fusion energy? Machine vision? You name some). All of them would count as arguments for the anthropic principle… until they no longer do. They all have specific reasons for their failures, unrelated to anthropics. I would keep looking for those reasons and ignore anthropics altogether as an unproductive hypothesis.
how do you reconcile your second statement with the first one?
What the Everett interpretation gives you is some sense of “actuality” for hypotheticals, but when thinking about possible futures you don’t need all (or any!) of the hypothetical possibilities to be “actual”. Not knowing which possibility obtains yields the same line of argument as assuming that they all obtain.
Assuming you are killed if a coin comes up “tails”, only the hypotheticals in which all coins come up “heads” contain you observing the tosses (so you reason before starting the experiment); so if you do observe that, the lethal-tails hypothesis stands. If, on the other hand, “tails” is not lethal, then your observing the coins is possible under other outcomes, so observing something other than all heads falsifies the lethal-tails hypothesis.
if anthropics saves us from ourselves via quantum immortality, that’s a good reason to be less careful, not to stop tossing the coin.
There is no saving; the probability of survival is being reduced. It would be very unlikely to survive a million coin tosses if “tails” is lethal (so you reason before tossing the first coin), but even if you do happen to survive that long, you would still risk being killed on each subsequent toss, so you shouldn’t keep tossing if a million “heads” happens to be your observation (as you decide in advance).
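To put a number on “you would still risk being killed” (the arithmetic is my own gloss, not the commenter’s): under the lethal-tails hypothesis the coin has no memory, so for any number $m$ of heads already survived,

\[
P(\text{survive the next } k \text{ tosses} \mid \text{tails lethal},\ m \text{ heads so far}) = \left(\tfrac{1}{2}\right)^{k},
\]

independent of $m$. The million heads change your beliefs about the coin, not your exposure on the next toss.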
I’m not sure why you expect to see LHC failures in the past rather than, e.g., either a failure to attain a sufficient level of technological development to build the LHC, or a vacuum fluctuation preventing the destruction of the world. If you wish, a fluctuation which looks just like the Higgs.
It’d be trivial to reformulate the laws of physics so that anyone who doesn’t observe some interaction dies of vacuum decay.
edit: Also, if you adjust the probability of theories based on the improbability of your existence given each theory, using Bayes’ theorem, this anthropic consideration cancels out.
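A minimal sketch of the claimed cancellation, under a single-world reading (the notation and assumptions are mine, not the commenter’s): write $L$ for “tails is lethal”, $S$ for “tails is safe”, and $E$ for the evidence “I exist and recorded $N$ heads”. Then

\[
P(E \mid L) = P(\text{alive} \mid L)\,P(N\text{ heads} \mid \text{alive}, L) = 2^{-N} \cdot 1 = 2^{-N},
\]
\[
P(E \mid S) = P(\text{alive} \mid S)\,P(N\text{ heads} \mid \text{alive}, S) = 1 \cdot 2^{-N} = 2^{-N}.
\]

The likelihoods are equal, so the posterior odds $P(L \mid E)/P(S \mid E)$ equal the prior odds: the penalty for the improbability of your existence under $L$ exactly offsets the boost from the all-heads observation.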