To be clear, Anthropic angels aren’t necessary for this argument to work. My deadly coin example didn’t have one, for example.
The reason I introduce Anthropic angels is to avoid a continuity counter-argument: “If you saw a zillion LHC accidents, you’d surely have to agree with the Anthropic shadow, no matter how absurd you claim it is! Thus, a small number of LHC accidents is a little bit of evidence for it.” Anthropic angels show the answer is no, because LHC accidents are not evidence for the anthropic shadow.
Like, it seems unnatural to give it literally 0% probability (see 0 And 1 Are Not Probabilities).
I would be inclined to say that correct anthropic reasoning does normal Bayesian updates but avoids priors that postulate anthropic angels.
If there are weird acausal problems that the Anthropic angel can cause, I’m guessing you can just change your decisions without changing your beliefs. I haven’t thought too hard about it though.
Here, as I understand it, the counterargument is that there is a gap in observations around the size that would be world-ending, so we should fit a model with smaller tails to match this gap. Such a model seems like “anthropic angels” to me.
No, anthropic angels would literally be some mechanism that saves us from disasters. Like if it turned out Superman is literally real, thinks the LHC is dangerous, and started sabotaging it. Or it could be some mechanism “outside the universe” that rewinds the universe.
Keep in mind that the problems with maximum likelihood have nothing to do with death. That should be the main takeaway from my article: we shouldn’t use special reasoning to reason about our demise.
In the case of maximum likelihood, it is also bad for:
Estimating when we will meet aliens
Forecasting the stock market
Being a security guard
etc...
Which is why you should use bayesian reasoning with a good prior instead.
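To make the security-guard case concrete, here is a minimal sketch (the counts and the uniform prior are illustrative assumptions, not something from the discussion): after many uneventful nights, maximum likelihood says break-ins are impossible, while a Bayesian posterior keeps the probability small but nonzero.

```python
from fractions import Fraction

# Nights the security guard has observed, none of which had a break-in.
nights = 100
break_ins = 0

# Maximum likelihood: estimate the per-night break-in probability as the
# observed frequency. With zero observed break-ins this is exactly 0,
# i.e. "break-ins cannot happen" -- a bad conclusion for a guard.
ml_estimate = Fraction(break_ins, nights)

# Bayesian alternative: a uniform Beta(1, 1) prior over the per-night
# probability, updated on the same data. The posterior mean is
# (break_ins + 1) / (nights + 2), Laplace's rule of succession.
posterior_mean = Fraction(break_ins + 1, nights + 2)

print(f"maximum likelihood estimate: {float(ml_estimate):.4f}")   # 0.0000
print(f"Bayesian posterior mean:     {float(posterior_mean):.4f}")  # ~0.0098
```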
Suppose you have N coins. If all N coins come up 1, you die. For each coin, you have 50:50 credence about whether it always comes up 0, or if it can also come up 1.
For N=1, it reduces to your case. For N>1, you get an anthropic shadow, which means that even if you’ve had a bunch of flips where you’ve survived, you might actually have to conclude that you’ve got a 1-in-4 chance of dying on your next flip.
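For concreteness, here is a minimal sketch of the N = 2 case, doing ordinary Bayesian updating over the four combinations of coin types (assuming, per the later comments, that the “can also come up 1” type is a 50:50 coin; the particular survived history is made up): once each coin has shown a 1 on some survived flip, both must be the kind that can come up 1, and the chance of dying on the next flip is 1 in 4.

```python
from itertools import product
from fractions import Fraction

# Two coins. Each is independently either "zero" (always comes up 0) or
# "fair" (comes up 1 with probability 1/2), with 50:50 prior credence.
# You die on a flip iff both coins come up 1 on that flip.

def flip_prob(coin_type, outcome):
    """Probability that a coin of the given type shows `outcome` (0 or 1)."""
    if coin_type == "zero":
        return Fraction(1) if outcome == 0 else Fraction(0)
    return Fraction(1, 2)  # fair coin

# A made-up survived history: coin A showed a 1 on one flip, coin B on
# another, but they never showed 1 simultaneously (so you are still alive).
history = [(1, 0), (0, 1), (0, 0)]

posterior = {}
for types in product(["zero", "fair"], repeat=2):
    prior = Fraction(1, 4)  # 1/2 per coin, independently
    likelihood = Fraction(1)
    for a, b in history:
        likelihood *= flip_prob(types[0], a) * flip_prob(types[1], b)
    posterior[types] = prior * likelihood

norm = sum(posterior.values())
posterior = {t: p / norm for t, p in posterior.items()}

# Probability that both coins show 1 (i.e. you die) on the next flip.
p_death = sum(p * flip_prob(t[0], 1) * flip_prob(t[1], 1)
              for t, p in posterior.items())
print(p_death)  # 1/4
```

Relabeling “you die” as “you find a diamond in a box” leaves every number in this computation unchanged, which is the point of the diamond variant discussed below.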
Okay, I think our crux comes from the slight ambiguity of the term “anthropic shadow”.
I would not consider that anthropic shadow, because the reasoning has nothing to do with anthropics. Your analysis is correct, but so is the following:
Suppose you have N coins. If all N coins come up 1, you find a diamond in a box. For each coin, you have 50:50 credence about whether it always comes up 0, or if it can also come up 1.
For N>1, you get a diamond shadow, which means that even if you’ve had a bunch of flips where you didn’t find a diamond, you might actually have to conclude that you’ve got a 1-in-4 chance of finding one on your next flip.
The “ghosts are as good as gone” principle implies that death has no special significance when it comes to bayesian reasoning.
Going back to the LHC example, if the argument worked for vacuum collapse, it would also work for the LHC doing harmless things (like discovering the Higgs boson or permanently changing the color of the sky or getting a bunch of physics nerds stoked or granting us all immortality or whatnot) because of this principle (or just directly adapting the argument for vacuum collapse to other uncertain consequences of the LHC).
In the bird example, why would the baguette-dropping birds be evidence for “LHC causes vacuum collapse” instead of, say, “LHC does not cause vacuum collapse”? What are the probabilities for the four possible combinations?
I think we basically agree now. I think my original comments were somewhat confused, but also the deadly coin model was somewhat confused. I think the best model is a variant of the N-coin model where one or more of the coins are obscured, and I think in this model your proof goes through to show that you should independently perform Bayesian updates on each coin, and that since you don’t get information about the obscured coin, you should not update on it in an anthropic-shadow-style way.
I think the thing that makes deadly coins different from LHC accidents and skew-distributed death counts is that deadly coins lack variance, so you cannot extrapolate the upper tail from the bulk of the distribution.
We could think of the LHC as having a generating process similar to a log-normal distribution. There are a number of conditions that are necessary before it can be turned on (which would be Bernoulli-distributed variables), as well as settings which affect the extent to which it is turned on (based on the probed energy levels). These all interact multiplicatively; e.g. it is only turned on if all of the conditions are met. Of course it is not literally log-normally distributed, because it is possible for it to not be turned on at all, but otherwise the multiplicative interactions are the same as what happens in log-normal distributions (see the rough simulation sketched after the list below). Multiplicatively interacting variables include:
The vacuum must be maintained against leaks
Machinery must be protected against outside forces
Power must be present
It must be configured to probe some energy level
etc.
(Disclaimer: I know little about the LHC.)
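Here is a rough simulation of that multiplicative picture (all probabilities and distributions are made-up illustrative assumptions, not facts about the LHC): a few Bernoulli preconditions gate whether it runs at all, and multiplicative settings determine the extent, so the nonzero part of the output looks roughly log-normal.

```python
import math
import random

random.seed(0)

def lhc_activation():
    """Toy generating process: activation = (all preconditions met) x settings."""
    preconditions = [
        random.random() < 0.90,  # vacuum maintained against leaks
        random.random() < 0.95,  # machinery protected against outside forces
        random.random() < 0.98,  # power present
    ]
    if not all(preconditions):
        return 0.0  # not turned on at all
    # Settings interact multiplicatively, so their logs add up, which is why
    # the nonzero outputs end up looking roughly log-normal.
    energy_level = math.exp(random.gauss(0.0, 0.5))
    beam_intensity = math.exp(random.gauss(0.0, 0.5))
    return energy_level * beam_intensity

samples = [lhc_activation() for _ in range(100_000)]
nonzero = [x for x in samples if x > 0]
print(f"fraction of runs not turned on at all: {1 - len(nonzero) / len(samples):.3f}")

logs = [math.log(x) for x in nonzero]
mean = sum(logs) / len(logs)
std = math.sqrt(sum((v - mean) ** 2 for v in logs) / len(logs))
print(f"log(activation): mean {mean:.3f}, std {std:.3f}")  # roughly 0 and 0.71
```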
If all of the LHC accidents go the same way, say a cleaning company keeps breaking things, then it is reasonable to infer an anthropic angel such as “the cleaning company is incompetent/malevolent and will keep breaking things until you replace them”. On the other hand, if the LHC accidents are highly varied in an anticorrelated way as predicted by the anthropic shadow (e.g. birds become more likely to drop baguettes onto machinery as you increase the energy levels), then I think you can infer an anthropic shadow, because this shows that the latent generator of LHC activation is highly varied, which according to the model would lead to highly varied outputs if not for anthropic shadows.
Not wrong. I haven’t thought in detail about this, maybe there are some better solutions.
No, anthropic angels would literally be some mechanism that saves us from disasters. Like if it turned out Superman is literally real, thinks the LHC is dangerous, and started sabotaging it. Or it could be some mechanism “outside the universe” that rewinds the universe.
… which in turn would result in a distribution of damage with shorter tails, no?
A further thought:
For diseases or war or asteroid impacts, where the means by which greater scale causes greater damage is well understood, I think an argument along these lines basically goes through.
But for things like the LHC or AI or similar, there is an additional parameter: what is the relationship between scale and damage? (E.g. what energy level does the LHC need to probe before it destroys the earth?)
Some people might want to use anthropic shadow reasoning to conclude things about this parameter. I think something along the lines of OP’s argument goes through to show that you can’t use anthropic shadow reasoning to infer things about this parameter beyond the obvious.
The equivalent coin example would be if you were uncertain about whether the coin will kill you.
Or one could use my other multi-coin example, but where there is one coin that you can’t see. So you can’t update about whether it is 50:50 or 100:0.
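A minimal sketch of that obscured-coin variant (the observation history is a made-up assumption): each coin gets an ordinary Bayesian update on what you actually observe, so the visible coin’s posterior moves while the obscured coin stays at its 50:50 prior, with no anthropic-shadow-style correction on top.

```python
from itertools import product
from fractions import Fraction

# Two coins; you die on a flip iff both come up 1. Coin A is visible, coin B
# is obscured (you never see its outcome). Each coin is independently either
# "zero" (always 0) or "fair" (1 with probability 1/2), with 50:50 priors.

def flip_prob(coin_type, outcome):
    if coin_type == "zero":
        return Fraction(1) if outcome == 0 else Fraction(0)
    return Fraction(1, 2)

# What you actually observed: coin A's outcomes on three survived flips.
# (Made up; A never showed a 1, so survival was guaranteed either way and
# carries no information about the obscured coin B.)
a_history = [0, 0, 0]

posterior = {}
for a_type, b_type in product(["zero", "fair"], repeat=2):
    prior = Fraction(1, 4)
    likelihood = Fraction(1)
    for a in a_history:
        # P(A shows a and you survive) = P(A shows a) * P(B does not complete
        # an all-1s flip); the second factor only matters when a == 1.
        p_survive = Fraction(1) if a == 0 else 1 - flip_prob(b_type, 1)
        likelihood *= flip_prob(a_type, a) * p_survive
    posterior[(a_type, b_type)] = prior * likelihood

norm = sum(posterior.values())
posterior = {k: v / norm for k, v in posterior.items()}

p_a_fair = sum(v for (a, b), v in posterior.items() if a == "fair")
p_b_fair = sum(v for (a, b), v in posterior.items() if b == "fair")
print(p_a_fair)  # 1/9 -- the visible coin's credence shifted toward "always 0"
print(p_b_fair)  # 1/2 -- the obscured coin stays at its prior
```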