The reason I introduce anthropic angels is to avoid a continuity counter-argument: “If you saw a zillion LHC accidents, you’d surely have to agree with the anthropic shadow, no matter how absurd you claim it is! Thus, a small number of LHC accidents is a little bit of evidence for it.” Anthropic angels show the answer is no, because LHC accidents are not evidence for the anthropic shadow.
I think the thing that makes deadly coins different from LHC accidents and skew-distributed death counts is that deadly coins lack variance, so you cannot extrapolate the upper tail from the bulk of the distribution.
We could think of the LHC as having a generating process similar to a log-normal distribution. There are a number of conditions that are necessary before it can be turned on (which would be Bernoulli-distributed variables), as well as settings which affect the extent to which it is turned on (based on the probed energy levels). These all interact multiplicatively: for example, it is only turned on if all of the conditions are met. Of course it is not literally log-normally distributed, because it is possible for it to not be turned on at all, but otherwise the multiplicative interactions are the same as what happens in log-normal distributions (see the sketch after the list below). Multiplicatively interacting variables include:
The vacuum must be maintained against leaks
Machinery must be protected against outside forces
Power must be present
It must be configured to probe some energy level
etc.
(Disclaimer: I know little about the LHC.)
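As a minimal sketch of that intuition (Python, with made-up probabilities and an arbitrary energy distribution, since as noted I know little about the actual LHC), here is the contrast between a bare deadly coin and a multiplicative generating process:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A "deadly coin": a single Bernoulli variable. Its observed outcomes are just
# 0s and 1s, so the bulk of the data has no spread to extrapolate a tail from.
coin = rng.binomial(1, 0.5, size=n)

# A crude multiplicative model of "how strongly the LHC gets turned on":
# several conditions must all hold (Bernoulli factors), and the probed energy
# level is a skewed positive variable. All numbers are invented.
vacuum_ok    = rng.binomial(1, 0.95, size=n)   # vacuum maintained against leaks
machinery_ok = rng.binomial(1, 0.90, size=n)   # protected against outside forces
power_ok     = rng.binomial(1, 0.98, size=n)   # power present
energy       = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # probed energy level

activation = vacuum_ok * machinery_ok * power_ok * energy

# The multiplicative process yields a spread-out, right-skewed distribution
# (zero whenever a condition fails), so its bulk says something about its tail.
on = activation[activation > 0]
print("coin outcomes:", np.unique(coin))
print("activation quantiles (50%, 90%, 99%):",
      np.quantile(on, [0.5, 0.9, 0.99]).round(2))
```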
If all of the LHC accidents go the same way, say a cleaning company keeps breaking things, then it is reasonable to infer an anthropic angel such as “the cleaning company is incompetent/malevolent and will keep breaking things until you replace them”. On the other hand, if the LHC accidents are highly varied and anticorrelated in the way the anthropic shadow predicts (e.g. birds become more likely to drop baguettes onto machinery as you increase the energy levels), then I think you can infer an anthropic shadow: this shows that the latent generator of LHC activation is highly varied, which according to the model would lead to highly varied outputs if not for anthropic shadows.
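To make that anticorrelated pattern concrete, here is a toy Monte Carlo (all numbers invented) of how an anthropic shadow would show up in a surviving observer’s records: accidents are independent of energy in the underlying process, but because high-energy activations are fatal, the surviving histories make accidents look like they track energy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Toy model: each attempt probes some energy level; independently of energy,
# a mundane accident prevents activation with a fixed (assumed) probability.
energy = rng.lognormal(0.0, 1.0, size=n)
accident = rng.random(n) < 0.1            # assumed base accident rate
danger_threshold = 5.0                    # assumed fatal energy, purely illustrative

activated = ~accident
fatal = activated & (energy > danger_threshold)

# "Anthropic shadow": observers only exist in the non-fatal histories.
survived = ~fatal
low, high = energy < danger_threshold, energy >= danger_threshold

# Unconditionally, accidents are independent of energy...
print("accident rate, high vs low energy (all histories):",
      accident[high].mean().round(3), accident[low].mean().round(3))
# ...but conditioned on survival, every surviving high-energy attempt
# "happened" to have an accident, which looks like anticorrelation.
print("accident rate, high vs low energy (surviving histories):",
      accident[survived & high].mean().round(3),
      accident[survived & low].mean().round(3))
```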
Not wrong. I haven’t thought about this in detail; maybe there are some better solutions.
No, anthropic angels would literally be some mechanism that saves us from disasters. Like if it turned out Superman were real, thought the LHC was dangerous, and started sabotaging it. Or it could be some mechanism “outside the universe” that rewinds the universe.
… which in turn would result in a distribution of damage with shorter tails, no?
A further thought:
For diseases or war or asteroid impacts, where the means by which greater scale causes greater damage is clear, I think an argument along these lines basically goes through.
But for things like the LHC or AI or similar, there is an additional parameter: what is the relationship between scale and damage? (E.g. what energy level does the LHC need to probe before it destroys the earth?)
Some people might want to use anthropic shadow reasoning to conclude things about this parameter. I think something along the lines of OP’s argument goes through to show that you can’t use anthropic shadow reasoning to infer things about this parameter beyond the obvious.
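To spell out what I mean by “the obvious” (my own framing, not necessarily OP’s): surviving a series of runs only lets you cut off the prior over the fatal-energy threshold below the highest energy actually probed. A grid-Bayes sketch, with an assumed prior and hypothetical probed energies:

```python
import numpy as np

# theta = energy level at which activation destroys Earth.
# Assumed prior, purely for illustration.
theta = np.linspace(0.1, 20.0, 2000)
prior = np.exp(-theta / 5.0)
prior /= prior.sum()

# Data: energies actually probed, all survived (hypothetical values).
probed = np.array([0.5, 1.0, 2.0, 3.5])

# Likelihood of surviving all runs given theta: 1 if theta exceeds every
# probed energy, 0 otherwise (ignoring mundane accidents for simplicity).
likelihood = (theta > probed.max()).astype(float)

posterior = prior * likelihood
posterior /= posterior.sum()

# The posterior is just the prior truncated below max(probed) and renormalized:
# the "obvious" update, with nothing extra added by anthropic considerations.
print("prior P(theta > 10):", prior[theta > 10].sum().round(3))
print("posterior P(theta > 10):", posterior[theta > 10].sum().round(3))
```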
The equivalent coin example would be if you were uncertain about whether the coin will kill you.
Or one could use my other multi-coin example, but where there is one coin that you can’t see, so you can’t update about whether it is 50:50 or 100:0.