This assumes that a successful attack will end life with P ≈ 1 and that such an attack will be attempted once per century, which seems, to put it mildly, excessive.
As I understood your original assumption, each century sees one event with P = 3/1,000,000 of destruction, independent of any intervention. If an intervention has a 1/3 failure rate, and you intervene every time, this would reduce your chance of annihilation per century to 1/1,000,000, which is your goal.
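For concreteness, the arithmetic I have in mind (using your 3-in-a-million base rate and the 1/3 failure rate for the defense) is:

\[
P(\text{annihilation per century}) \;=\; \underbrace{3 \times 10^{-6}}_{\text{chance of the event}} \times \underbrace{\tfrac{1}{3}}_{\text{defense failure rate}} \;=\; 10^{-6}.
\]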
It’s quite possible we’re thinking of different things when we say “failure rate”: I mean the failure rate of the defensive measure, while I think you mean the raw odds that the world blows up.
I wasn’t using the Trinity case when I wrote that part. This part assumes that we will develop some technology X capable of destroying life, and that we’ll also develop technology to defend against it. Then suppose each century sees one attack using technology X that will destroy life if it succeeds. (This may come, for instance, from a crazy or religious or just very, very angry person.) You need to defend successfully every time. It’s actually much worse than that, because there will probably be more than one technology X.
If you think about existing equilibria between attackers and defenders, such as spammers vs. spam filters, it seems unlikely that, once technology has stopped developing, every dangerous technology X will have such a highly effective defense Y against it. My prior, averaged over possible worlds, would be something more like a 50% chance of stopping any given attack.
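To put rough numbers on “you need to defend successfully every time”: with one attack per century and a per-attack stop probability p, survival compounds as p^N over N centuries. The ten-century horizon below is just an illustration, not something either of us committed to:

\[
P(\text{survive } N \text{ centuries}) = p^{N}, \qquad
\left(\tfrac{2}{3}\right)^{10} \approx 0.017, \qquad
\left(\tfrac{1}{2}\right)^{10} \approx 0.001.
\]

On these illustrative numbers, even a defense with the 1/3 failure rate from above leaves long-run survival unlikely, and a coin-flip defense leaves it very unlikely.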