For diseases, wars, or asteroid impacts, where greater scale straightforwardly causes greater damage, I think an argument along these lines basically goes through.
But for things like the LHC or AI or similar, there is an additional parameter: what is the relationship between scale and damage? (E.g. what energy level does the LHC need to probe before it destroys the earth?)
Some people might want to use anthropic shadow reasoning to conclude things about this parameter. I think something along the lines of OP’s argument goes through to show that you can’t use anthropic shadow reasoning to infer things about this parameter beyond the obvious.
The equivalent coin example would be if you were uncertain about whether the coin will kill you.
Or one could use my other multi-coin example, but with one of the coins hidden from you, so that you can't update on whether it is 50:50 or 100:0.
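To make the coin example concrete, here is a toy simulation (the priors and coin parameters are my own illustrative choices, not anything from the original example): coin A kills you on heads with probability 0.5 per flip, coin B is 100:0 safe, and each is equally likely a priori. Among survivors, the fraction holding coin B matches the ordinary Bayesian posterior, with no extra anthropic-shadow correction needed.

```python
import random

def survivor_fraction_b(n_flips=5, trials=200_000, seed=0):
    """Fraction of survivors whose coin was the safe coin B.

    Coin A: each flip kills you with probability 0.5 (survive a flip w.p. 0.5).
    Coin B: never kills you (100:0 safe).
    Prior: each coin equally likely.
    """
    rng = random.Random(seed)
    survivors = survivors_with_b = 0
    for _ in range(trials):
        coin_is_b = rng.random() < 0.5
        # With coin B you always survive; with coin A you must survive every flip.
        survived = coin_is_b or all(rng.random() < 0.5 for _ in range(n_flips))
        if survived:
            survivors += 1
            survivors_with_b += coin_is_b
    return survivors_with_b / survivors

# Plain Bayes gives P(B | survived n flips) = 1 / (1 + 0.5**n_flips),
# and the simulated survivor fraction agrees with this.
bayes_posterior = 1 / (1 + 0.5**5)
```

Conditioning on survival is just an ordinary Bayesian update here; the point of the hidden-coin variant is that when survival probability carries no information about the hidden coin, this same calculation moves the posterior nowhere.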