I wasn’t using the Trinity case when I wrote that part. This part assumes that we will develop some technology X capable of destroying life, and that we’ll also develop technology to defend against it. Then say each century sees one attack using technology X that will destroy life if it succeeds. (This may come, for instance, from a crazy or religious or just very, very angry person.) You need to defend successfully every time. It’s actually much worse than that, because there will probably be more than one technology X.
If you think about existing equilibria between attackers and defenders, such as spammers vs. spam filters, it seems unlikely that, once technology has stopped developing, every dangerous technology X will have a highly effective defense Y against it. My prior would be that, averaged over possible worlds, you’d have something more like a 50% chance of stopping any given attack.
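A back-of-the-envelope sketch of why “defend successfully every time” is so demanding: if we treat each attack as independent with a fixed per-attack defense success rate (an assumption for illustration, not a claim about the real distribution), the probability of surviving every attack decays geometrically with the number of attacks.

```python
# Illustration only: survival under repeated attacks, assuming each
# attack is independent and is stopped with fixed probability p_defend.

def survival_probability(p_defend: float, n_attacks: int) -> float:
    """Probability of stopping all n_attacks independent attacks."""
    return p_defend ** n_attacks

# Using the 50% figure above, with one attack per century:
for centuries in (1, 10, 20):
    print(centuries, survival_probability(0.5, centuries))
# 1 century  -> 0.5
# 10 centuries -> ~0.001
# 20 centuries -> ~0.000001
```

Even a much better defense rate only delays the collapse: at 99% per attack, survival over 1,000 centuries is about 0.99^1000 ≈ 0.004.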