Therefore, serious design work on a self-replicating spacecraft should have a high priority.
What you’re suggesting would reduce uncertainty, but it wouldn’t change the mean probability. Suppose the remaining great-filter risk is p, and you propose a course of action which, if successful, would reduce it to q < p. Then one of two things must be true:
Either p drops to q immediately, before any work is done (absurd, since merely “being able to work on a self-replicating spacecraft” would then confer as much probability benefit as actually doing it), or a residual great-filter risk of p − q remains between now and the project’s completion. That residual risk would most plausibly come from the project itself.
Essentially, your model is playing Russian roulette, and you’re encouraging us to shoot ourselves rapidly rather than slowly. That would reveal the actual risk sooner, but it wouldn’t reduce it.
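For concreteness, here is a minimal Monte Carlo sketch of that point (the numbers p = 0.3 and q = 0.05, and the function names, are my illustrative choices, not part of the original argument). It compares a world where the risk is realized gradually against one where it is front-loaded into the project phase; mean survival is 1 − p either way, and only the timing of the reveal differs:

    import random

    TRIALS = 1_000_000
    p = 0.30  # assumed total remaining great-filter risk (illustrative)
    q = 0.05  # assumed residual risk after the project succeeds (illustrative)

    def survive_slow(n_periods=30):
        """Risk realized gradually: per-period hazard chosen so the
        cumulative chance of dying is exactly p."""
        per_period = 1 - (1 - p) ** (1 / n_periods)
        for _ in range(n_periods):
            if random.random() < per_period:
                return False
        return True

    def survive_fast():
        """Risk front-loaded into the project phase. The project-phase
        hazard d solves d + (1 - d) * q = p, i.e. d = (p - q) / (1 - q),
        which is approximately the p - q decomposition above for small q."""
        if random.random() < (p - q) / (1 - q):  # hazard during the project
            return False
        if random.random() < q:  # residual filter risk afterwards
            return False
        return True

    slow = sum(survive_slow() for _ in range(TRIALS)) / TRIALS
    fast = sum(survive_fast() for _ in range(TRIALS)) / TRIALS
    print(f"survival, risk spread slowly:  {slow:.4f}")
    print(f"survival, risk front-loaded:   {fast:.4f}")
    print(f"mean survival either way:      {1 - p:.4f}")

Both estimates converge on 1 − p ≈ 0.70: the project changes how fast we learn whether we pass the filter, not whether we pass it.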