(Then perhaps build a second such solution that is orthogonal to the first. And so on, with a stack of redundant, highly orthogonal, highly generic solutions, any one of which might be the only thing that works in a given disaster, and which does the job all by itself.)
This is excellent! Can this reasoning be improved by attempting to map the overlaps between x-risks more explicitly? The closest thing I can think of is some of Turchin’s work.
My pretty limited understanding is that this is a fairly standard safety engineering approach.
If you were going to try to make it just a bit more explicit, a spreadsheet might be enough. If you want to put serious elbow grease into formal modeling work, I think a good keyword for getting into the literature is “fault trees”. The technique came out of Bell Labs in the 1960s, but I think it really came into its own when it was used to model nuclear safety issues in the 1980s? There’s old Nuclear Regulatory Commission work that got pretty deep here, I think.
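In case a concrete toy helps before diving into the literature: a fault tree just propagates basic-event probabilities up through AND/OR gates to a top event. Here is a minimal sketch in Python with made-up numbers and an independence assumption that a real analysis would have to justify (correlated failures are exactly where the interesting x-risk overlaps live):

```python
# Toy fault-tree gates, assuming independent basic events.

def or_gate(*probs):
    """Probability that at least one input event occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    """Probability that all input events occur together."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical numbers, purely for illustration:
# catastrophe = hazard occurs AND both (orthogonal) defences fail.
p_hazard = 0.10
p_defence_a_fails = 0.20
p_defence_b_fails = 0.20

p_catastrophe = and_gate(p_hazard, p_defence_a_fails, p_defence_b_fails)
print(f"P(catastrophe) = {p_catastrophe:.4f}")  # 0.0040 with these numbers
```

A spreadsheet version is just the same arithmetic laid out in cells; the formal tooling earns its keep when the tree gets large or the independence assumption breaks down.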
Yes, here is a fault tree analysis of nuclear war. And here is one for AI.