I won’t explicitly analyze the LHC scenario; it’s largely similar to the cold war scenario.
Is it? If you define your universe distribution and sampling rules the same way, you can make the math come out the same in a toy example. But consider actually living in-universe through many cold wars vs. living through many collider failures, and the kind of updates you would make after more and more cold wars that turned out fine vs. more and more failures to turn on the collider.
After living through enough cold wars that look (on public surface-level appearances) like they very narrowly avoided nuclear extinction, perhaps you look into a few of them in more detail, going beyond the easily-publicly-available historical accounts.
Upon investigating more deeply, you might find evidence that, actually, cold wars among humans aren’t all that likely to lead to extinction, for ordinary reasons. (Maybe it turns out that command of the world’s nuclear powers is dense with people like Stanislav Petrov for predictable reasons, or that Petrov-like behavior is actually just pretty common among humans under extreme enough duress. Or just that there are actually more safeguards in place than public surface level history implies, and that cooler heads tend to prevail across a wide variety of situations, for totally ordinary reasons of human nature, once you dig a bit deeper.)
OTOH, suppose you start investigating past collider failures in more detail, and find that all of them just happened to fail or be canceled for (what looks like) totally innocuous but independent reasons, no matter how hard you dig. Before observing a bunch of failures, you start out with a pretty high prior probability that giant particle accelerators are expensive and pretty difficult to build reliably, so it’s not surprising to see a few mechanical failures or project cancellations in a row. After enough observations, you might start updating towards any of the following hypotheses:
Building a collider is a really hard mechanical challenge, but there’s something in human nature that causes the physicists who work on them to have a blind spot and reason incorrectly that it will be easier than it actually is.
There’s some kind of cabal of physicists / collider-building construction workers to purposely fail, in order to keep getting more grants to keep trying again. (This cabal is strong enough to control both the surface level evidence and the evidence you observe when digging deeper.)
Some kind of weird anthropic thing is going on (anthropic principle kicking in, you’re inside a simulation or thought experiment or fictional story, etc.).
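As a toy sketch of this kind of updating (every prior and likelihood below is invented for illustration, not taken from the scenario), here is how a posterior over the three hypotheses above might shift as innocuous-looking failures accumulate:

```python
# Toy Bayesian update over the three hypotheses above.
# Every number here is made up purely for illustration.

# Prior: ordinary explanations start out vastly more likely.
priors = {"hard_engineering": 0.9, "cabal": 0.0999, "anthropics": 0.0001}

# P(another failure that looks innocuous even under deep scrutiny | hypothesis).
# A cabal would probably leave traces when you dig; an anthropic pump would not.
likelihoods = {"hard_engineering": 0.6, "cabal": 0.1, "anthropics": 0.95}

def update(posterior, likelihoods):
    """One step of Bayes' rule: multiply by likelihoods, then renormalize."""
    unnormalized = {h: posterior[h] * likelihoods[h] for h in posterior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

posterior = dict(priors)
for _ in range(20):  # twenty suspiciously clean failures in a row
    posterior = update(posterior, likelihoods)

print({h: round(p, 3) for h, p in posterior.items()})
```

With these made-up numbers the anthropics hypothesis crosses 50% after roughly twenty such failures, despite starting at one in ten thousand; the crossover point is of course extremely sensitive to the assumed likelihoods.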
Suppose news reports and other surface-level information aren’t enough to distinguish between the three hypotheses above, but you decide to dig a bit, and start looking into the failures on your own. When you dig into the mechanical failures, you find that all the failures tend to have happened for totally independent and innocuous physical reasons. (Maybe one of the failures was due to the lead engineer inconveniently getting a piano dropped on his head or something, in a way that, on the surface, sounds suspiciously like some kind of cartoon hijinks. But on closer investigation, you find that there was a totally ordinary moving company moving the piano for totally ordinary reasons, that just happened to drop it accidentally for perfectly predictable reasons, once you know the details. For another failure you investigate, maybe you learn enough about the engineering constraints on particle accelerators to conclude that, actually, avoiding some particular mechanical failure in some particularly finicky part is just really difficult and unlikely.)
Also, you start doing some theoretical physics on your own, and developing your own theories and very approximate / coarse-grained simulations of high-energy physics. You can’t be sure without actually running the experiments (which would require building a big collider), but it starts looking, to you, like it is more and more likely that actually, colliding particles at high enough energy will open a black hole and consume the planet, with high probability.
Given these observations and your investigations, you should probably start updating towards the third hypothesis (weird anthropics stuff) as the explanation for why you’re still alive.
The point is, independent of any arguments about anthropics in general, the way you update in-universe depends on the actual kind and quality of the specific observations you make. In both cases (cold war vs. collider failures), the actual update you make would depend on the specific mechanical failures / nuclear war close-calls that you observe. Depending on how trustworthy and detailed / gears-level your understanding of these failures and non-wars is, you would make different updates. But at some point (depending on your specific observations), you will need to start considering weird anthropics hypotheses as having high probability, even if those hypotheses aren’t directly supported by your individual observations considered independently. On priors, it feels to me like the LHC scenario is one where the anthropics hypotheses could start rising in probability faster and in more worlds than the cold war scenario, but I could see an argument for the other side too.
Depending on the complexity of the scenario and your evidence, doing “actual math” might get difficult or intractable pretty quickly. And personally, I don’t think there’s anything that’s happened anywhere in our actual world so far that makes any kind of weird anthropics hypothesis more than epsilon likely, compared to more ordinary explanations. If that changes in the future though, having detailed mathematical models of anthropic reasoning seems like it would be very useful!
Note that LHC failures would never count as evidence that the LHC would destroy the world. Given such weird observations, you would eventually need to consider the possibility of an anthropic angel. This is not the same as anthropic shadow; it is essentially the opposite. The LHC failures and your theory about black holes imply that the universe works to prevent catastrophes, so you don’t need to worry about it.
Or if you rule out anthropic angels a priori, you just never update; see this section. (Bayesians should avoid completely ruling out logically possible hypotheses, though.)
I think you’re saying that with the LHC you could possibly attain more certainty about latent risk. With SSA and SIA I think you still reject probability pumping for the same reasons as with the cold war scenario. So eventually you get Bayesian evidence in favor of alternative anthropic theories. The problem is that it’s hard to get generalizable conclusions without specifying the alternative anthropic theory, and the alternative theory to SSA and SIA that gets probability pumping has not been specified.
I think you’re saying that with the LHC you could possibly attain more certainty about latent risk.
Maybe? It’s more like, for either case, if you’re actually living through it and not just in a toy example, you can investigate the actual evidence available in more detail and update on that, instead of just using the bare prior probabilities. And it’s conceivable that investigation yields evidence that is most easily explained by some kind of simulation / probability pumping / other weird anthropics explanation—even if I can’t write down a formal and generalizable theory about it, I can imagine observing evidence that convinces me pretty strongly I’m e.g. in some weird probability-pumped universe, fictional story, simulation, etc.
To be clear, I don’t think observing surface-level reports of a few repeated mechanical failures, or living through one cold war is enough to start updating towards these hypotheses meaningfully, and I think the way you propose updating under either SSA or SIA in the examples you give is reasonable. It’s just that, if there were enough failures, and then I took the time to actually investigate them, and the investigation turned up (a priori very unlikely) evidence that the failures happened for reasons that looked very suspicious / strange, when considered in aggregate, I might start to reconsider...
So eventually you get Bayesian evidence in favor of alternative anthropic theories.
The reasoning in the comment is not compatible with any prior, since Bayesian reasoning from any prior is reflectively consistent. Eventually you get Bayesian evidence that the universe hates the LHC in particular.
You should take into account that, assuming material-onlyism, it is far easier for anthropic probability pumps to target neurons than to target bigger structures like the LHC’s copper wires. Changing a few neurons is sufficient to permanently shift the world timeline from LHC to non-LHC, whereas it would take change after change of copper wires, pipes, etc. to maintain the non-LHC timeline.
Conversely, if you maintain the LHC-targeting mechanic over the neuron-targeting mechanic, you necessarily have to bite one of the following bullets: non-materialism (free will is immune to anthropic probability pumping), or that it takes more “probability juice” to target neurons than the LHC itself (i.e., the locating difficulty of neurons outweighs the magnitude difficulty of the LHC).
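A back-of-the-envelope version of this comparison (every number below is invented, under the hypothetical assumption that each micro-intervention carries some fixed log-probability cost):

```python
import math

# Toy "probability juice" accounting; all quantities are made-up assumptions.

# Neuron route: one small early intervention permanently diverts
# the timeline away from the LHC ever running.
p_flip_neuron = 1e-6      # assumed cost of flipping one neuron's state
neurons_needed = 5        # "a few neurons" suffice, and only once
log_cost_neurons = neurons_needed * math.log(p_flip_neuron)

# Hardware route: every attempted run must fail separately, so the cost
# is paid again and again to maintain the non-LHC timeline.
p_break_component = 1e-3  # assumed cost per macroscopic component failure
attempts = 20             # each retry needs its own engineered failure
log_cost_hardware = attempts * math.log(p_break_component)

# Less negative log-cost = cheaper pump.
print(log_cost_neurons, log_cost_hardware)
```

On these made-up numbers the neuron route is cheaper, which is the first point above; the second point amounts to asserting the inequality goes the other way, e.g. because locating the right neurons carries its own cost not modeled here.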