I created a somewhat similar plan for x-risk prevention, available here: http://immortality-roadmap.com/globriskeng.pdf
In it, raising robustness is only one part of the whole plan, and it does not include many of the ideas from your plan, which appear in other parts of the map.
In my plan (Plan A3 in the map), robustness consists of several steps:
Step One. Improving the sustainability of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings and habitats
• Universal methods of catastrophe prevention (resistant structures, strong medicine)
• Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge); a rough sizing sketch follows this list
• Widely distributed civil defence, including temporary shelters, air and water cleaning systems, radiation meters, gas masks, medical kits, and mass education
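To make the reserves bullet a bit more concrete, here is a back-of-the-envelope sizing sketch. The specific figures (a 2100 kcal/person/day emergency ration, roughly 3400 kcal/kg for dry grain, and the stockpile and population sizes) are illustrative assumptions of mine, not numbers from the roadmap.

```python
# Back-of-the-envelope sizing for a food reserve. All numbers are
# illustrative assumptions, not figures taken from the roadmap.

KCAL_PER_PERSON_PER_DAY = 2100   # common emergency-ration planning figure
KCAL_PER_KG_DRY_GRAIN = 3400     # rough energy density of dry grain

def reserve_duration_days(stock_tonnes: float, population: int) -> float:
    """How many days a grain stockpile can feed a population at ration level."""
    total_kcal = stock_tonnes * 1000 * KCAL_PER_KG_DRY_GRAIN
    daily_need = population * KCAL_PER_PERSON_PER_DAY
    return total_kcal / daily_need

# Hypothetical example: 100,000 tonnes of grain for a city of one million.
print(round(reserve_duration_days(100_000, 1_000_000)), "days")  # about 162 days
```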
Step Two. Useful ideas to limit the scale of a catastrophe
• Limit the impact of a catastrophe by slowing its growth and the area it affects: technical instruments for implementing quarantine; improved capacity for rapid production of vaccines in response to emerging threats; larger stockpiles of important vaccines
• Increase preparation time by improving monitoring and early detection technologies (see the detection sketch after this list): support general research on the magnitude of biosecurity risks and opportunities to reduce them; improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly
• Worldwide x-risk prevention exercises
• The ability to quickly adapt to new risks and to envision them in advance
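As a concrete illustration of the early-detection bullet, here is a minimal sketch of one standard ingredient of disease surveillance: a CUSUM-style change detector run over daily case counts. The function name, its parameters, and the threshold values are illustrative assumptions, not part of the roadmap or of any particular surveillance system.

```python
# Minimal CUSUM-style change detector over daily case counts, as one
# simple ingredient of an early-warning surveillance system. The function
# name, parameters, and thresholds are illustrative assumptions only.

def cusum_alert_day(counts, baseline_mean, baseline_std, k=0.5, h=5.0):
    """Return the first day index where the one-sided CUSUM statistic
    exceeds the alert threshold h, or None if it never does.

    k is the slack (in standard deviations) treated as normal noise;
    h is the alert threshold (in standard deviations)."""
    s = 0.0
    for day, count in enumerate(counts):
        z = (count - baseline_mean) / baseline_std  # standardized excess
        s = max(0.0, s + z - k)                     # accumulate upward drift only
        if s > h:
            return day
    return None

# Ten quiet days followed by a slow rise in cases; the alert fires a few
# days into the rise.
observed = [9, 11, 10, 8, 12, 10, 9, 11, 10, 12,
            14, 15, 17, 19, 22, 26, 31, 37, 44, 52]
print("alert on day", cusum_alert_day(observed, baseline_mean=10, baseline_std=2))
```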
Step Three. High-speed tech development needed to quickly pass through the risk window (a toy survival calculation follows this list)
• Investment in super-technologies (nanotech, biotech, Friendly AI)
• High-speed technical progress helps to overcome the slow process of resource depletion
• Invest more in defensive technologies than in offensive ones
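A toy calculation of why a shorter risk window matters: if catastrophe has annual probability p during the window, surviving a window of T years has probability (1 - p)^T, so compressing T through faster (safe) technology development buys survival probability in much the same way as reducing p. The 1%-per-year figure below is purely illustrative.

```python
# Toy arithmetic: probability of surviving a "risk window" of T years
# if catastrophe has probability p each year. The 1%/year figure is an
# illustrative assumption, not an estimate from the roadmap.

def survival_probability(p_annual: float, years: int) -> float:
    return (1.0 - p_annual) ** years

for years in (100, 50, 25):
    print(f"{years}-year window: {survival_probability(0.01, years):.2f}")
# prints roughly 0.37, 0.61, 0.78: halving the window length helps
# about as much as halving the annual risk.
```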
Step Four. Timely achievement of immortality at the highest possible level
• Nanotech-based immortal body
• Diversification of humanity into several successor species capable of living in space
• Mind uploading
• Integration with AI
A lot of strong suggestions there—I’ve added subs for example.
Re how to plot a course of action for mitigating these risks, I guess GCRI is doing a lot of the theoretical work on robustness, and they could be augmented by more political lobbying and startup projects?