We’re talking about bringing existential threats to chances less than 1 in a million per century. I don’t know of any defensive technology that can guarantee a less than 1 in a million failure rate.
Under your theory of 3/1M/Century, you’d only need to do better than a 1/3 failure rate to lower chances to 1/1M/C. A 1/3 failure rate seems rather plausible. If the defense had a 1/1M failure rate, you’d have a 3/1,000,000,000,000 chance of eradication per century.
Assume that there is at least one attack per century, and a successful attack will end life. Therefore, you need a failure rate less than 1 in a million to survive a million centuries.
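A minimal sketch of the compounding arithmetic behind this claim, assuming one life-ending attack per century (the best case of “at least one”) and an independent defense failure rate each century; the function name and the sample failure rates are mine, purely for illustration:

```python
def survival_probability(p_failure: float, centuries: int) -> float:
    """Chance of surviving `centuries` consecutive attacks, each of which
    is stopped with probability (1 - p_failure) and ends life otherwise."""
    return (1.0 - p_failure) ** centuries

horizon = 1_000_000  # a million centuries
for p_failure in (1 / 3, 1 / 100, 1 / 1_000_000):
    print(f"defense failure rate {p_failure:.0e}: "
          f"survival over {horizon:,} centuries ~ {survival_probability(p_failure, horizon):.3g}")
```

Under these assumptions, even a one-in-a-million per-century failure rate leaves only about a 37% (roughly e^-1) chance of lasting the full million centuries, which is why the rate has to be strictly better than one in a million.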
This assumes that a successful attack will end life with P~=1 and a successful attack will occur once per century, which seems, to put it mildly, excessive.
As I understood your original assumption, each century sees one event with P=3/1M of destruction, independent of any intervention. If an intervention has a 1/3 failure rate, and you intervene every time, this would reduce your chance of annihilation/century to 1/1M, which is your goal.
It’s quite possible we’re thinking of different things when we say “failure rate”: I mean the failure rate of the defensive measure, while I think you mean the pure odds that the world blows up.
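To make the distinction concrete, here is a small illustrative calculation using the numbers quoted above (the 3/1M base rate per century and the two candidate failure rates for the defensive measure); the constant and function names are mine:

```python
BASE_EVENT_PROB = 3 / 1_000_000  # the assumed 3/1M/century base rate

def per_century_annihilation(defense_failure_rate: float) -> float:
    """Chance per century that the event occurs AND the defensive measure fails."""
    return BASE_EVENT_PROB * defense_failure_rate

print(per_century_annihilation(1 / 3))          # ~1e-06, the 1/1M/C target
print(per_century_annihilation(1 / 1_000_000))  # ~3e-12, i.e. 3/1,000,000,000,000
```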
I wasn’t using the Trinity case when I wrote that part. This part assumes that we will develop some technology X capable of destroying life, and that we’ll also develop technology to defend against it. Then say each century sees 1 attack using technology X that will destroy life if it succeeds. (This may be, for instance, from a crazy or religious or just very very angry person.) You need to defend successfully every time. It’s actually much worse, because there will probably be more than one Technology X.
If you think about existing equilibria between attackers and defenders, such as spammers vs. spam filters, it seems unlikely that, once technology has stopped developing, every dangerous technology X will have a highly effective defense Y against it. My prior, averaged over possible worlds, would be something more like a 50% chance of stopping any given attack.
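As a rough illustration of what a 50% per-attack defense would imply under the one-attack-per-century assumption used earlier in the thread (numbers purely illustrative):

```python
STOP_PROB = 0.5  # assumed chance of stopping any single attack
for centuries in (1, 10, 20, 100):
    # one attack per century, so survival requires stopping every one of them
    print(f"{centuries:>3} centuries: chance of stopping every attack ~ {STOP_PROB ** centuries:.3g}")
```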
Every organism you see is the result of an unbroken chain of non-extinction that stretches back some 4 billion years. The rate of complete failure for living systems is not known, but it appears to have been extremely low so far.
Time compression did not start recently. (Well, it did, once you account for time compression.)
Bacteria have limited technological capabilities.