I would use the word resilient rather than robust.
Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.
Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present, there is a fundamental shift in core activities that reflects adaptation to the new environment.
I think it is a better idea to approach this from a systems perspective rather than from the specific X-risks or plans that we know about or think are cool. We want to avoid availability bias. I would assume that there are more X-risks and plans that we are unaware of than ones we are aware of.
I recommend adding in the risks and relating them to the plans, as most of your plans, if they fail, would lead to other risks. I would do this in a generic way. An example to demonstrate what I am talking about: take the risk of a tragedy of the commons and a plan to create a more capable type of intelligent life form that will uphold, improve and maintain the interests of humanity. This could be done with genetic engineering and AI to create new life forms, while nanotechnology and biotechnology could be used to change existing humans. The potential risk of this plan is that it leads to the creation of other intelligent species that will inevitably compete with humans.
One more recommendation is to remove the timeline from the roadmap and just have the risks and plans. The timeline would be useful in the explanation text you are creating. I like this categorisation of X-risks:
Bangs (extinction) – Earth-originating intelligent life goes extinct in a relatively sudden disaster resulting from either an accident or a deliberate act of destruction.
Crunches (permanent stagnation) – The potential of humankind to develop into posthumanity is permanently thwarted, although human life continues in some form.
Shrieks (flawed realization) – Some form of posthumanity is attained, but it is an extremely narrow band of what is possible and desirable.
Whimpers (subsequent ruination) – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.
I don’t want this post to be too long, so I have just listed the common systems problems below:
Policy Resistance – Fixes that Fail
Tragedy of the Commons
Drift to Low Performance
Escalation
Success to the Successful
Shifting the Burden to the Intervenor—Addiction
Rule Beating
Seeking the Wrong Goal
Limits to Growth
Four additional plans are:
(in Controlled regression) voluntary or forced devolution
uploading human consciousness into a supercomputer
some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware
dramatic societal changes to avoid some existential risks, like the overuse of resources. An example of this is in the book The World Inside.
You talk about being saved by non-human intelligence, but it is also possible that SETI could actually cause hostile aliens to find us. A potential plan might be to stop SETI and try to hide. The opposite plan (seeking out aliens) seems as plausible though.
I accepted your idea about replacing the word “robust” and will award the prize for it.
The main idea of this roadmap is to escape availability bias by listing all known ideas for x-risk prevention. This map will be accompanied by a map of all known x-risks, which is ready and will be published soon. More than 100 x-risks have been identified and evaluated.
The idea that some of the plans create their own risks is represented in this map by the red boxes below plan A1.
But it may be possible to create a completely different map of future risks and prevention using a systems approach, or something like a scenario tree.
Yes, each plan is better suited to containing specific risks: A1 is better for containing biotech and nanotech risks, A2 is better for UFAI, A3 for nuclear war and biotech, and so on. So another map may be useful to show the correspondence between risks and prevention methods.
The timeline was already partly replaced with “steps”, as suggested by “elo”, who was awarded for it.
Phil Torres shows that Bostrom’s classification of x-risks is not as good as it seems to be: http://ieet.org/index.php/IEET/more/torres20150121 So I prefer the notion of “human extinction risks” as clearer.
I still don’t know how we could fix all the world system problems listed in your link without having control of most of the world, which returns us to plan A1.
In plans: 1. Is not “voluntary or forced devolution” the same as “ludism” and “relinquishment of dangerous science”, which are already in the plan?
The idea of uploading was already suggested here in the form of “migrating into simulation” and was awarded.
I think that “some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware” is basically the same idea as “smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)”, but your wording is excellent.
I think I should accept “dramatic social changes”, as it could include many interesting but different topics: the demise of capitalism, a hipster revolution, internet connectivity, the global village, the dissolution of nation states. I got many suggestions along these lines and could unite them under this topic.
Do you mean METI—messaging to the stars? Yes, it is dangerous, and we should do it only if everything else fails. That is why I put it into plan C. But, by the way, SETI is even more dangerous, as we could decode and download an alien AI. I have an article about it here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/
Thanks for your suggestions, which were the first to imply building the map on completely different principles.
So in total for now I suggest two awards from me and one from Romashka, 150 USD in total. Your username suggests to me that you would prefer to keep your anonymity, so I could send the money to a charity of your choice.
I was thinking more along the lines of restricting the chance for divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness? Technological advances may allow us to alter ourselves so substantially that we become post-human or no longer human, for example through cybernetics or genetic engineering. “Ludism” and “relinquishment of dangerous science” are ways to restrict which technologies we use, but note that we would still be capable of using and creating these technologies. Devolution (perhaps there is a better word for it) would be something like the dumbing down of all or most humans so that they are no longer capable of using or creating the technologies that could make them less purely human.
Yes, you are right. I guess I was implying man-made catastrophes created in order to cause a paradigmatic change, rather than natural ones.
I’m not sure either. I would think you could do it by changing the way that politics works, so that the policies implemented actually have empirical backing based on what we know about systems. Perhaps this is just AI and improved computational modelling. This idea of needing control of the world seems extremely dangerous to me, although I suppose a top-down approach could solve the problems. I think you should also consider what a good bottom-up approach would be: how do we make local communities and societies more resilient, more economical and more capable of facing potential X-risks?
In “survive the catastrophe” I would add two extra boxes:
Limit the impact of a catastrophe by implementing measures to slow its growth and reduce the area it affects. For example, with pandemics you could improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.
Increase the time available for preparation by improving monitoring and early-detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and the opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.
Send it to one of the charities here.
Technically, I wouldn’t say we’d lost it if the price of sperm donation rose (from its current negative level) until it stopped being an efficient means of reproduction. But I think you underestimate the threat of regular evolution making a lot of similar changes, if you somehow froze some environment for a long time.
Not only does going back to our main ancestral environment seem unworkable (at least without a superhuman AI to manage it!), we should also consider the possibility that our moral urges are a mixed bag derived from many environments, not optimized for any.
A question: is it possible to create a risk control system which is not based on centralized power, just as Bitcoin is not based on central banking?
For example, local police could handle local crime and terrorists; local health authorities could find and prevent the spread of disease. If we have many x-risk peers, they could each control their neighborhood within their professional space.
Counterexample: how could it help in situations like ISIS or another rogue state, which is (maybe) going to create a doomsday machine or virus to be used to blackmail or exterminate other countries?
Bitcoin is an electronic payment system based on cryptographic proof instead of trust. I think the big difference between it and the risk control system is the need for enforcement, i.e. changing what other people can and can’t do. There seem to be two components to the risk control system: prediction of what should be researched, and enforcement of this. The prediction component doesn’t need to come from a centralised power; it could just come from the scientific community. I would think that the enforcement would need to come from a centralised power. I guess there does need to be a way to stop the centralized power from causing X-risks. Perhaps this could come from a localised and distributed effort, maybe something like a better version of Anonymous.
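To make the “cryptographic proof instead of trust” point concrete, here is a minimal sketch, written purely for illustration and not something proposed in the thread, of the prediction side of such a system: peers append risk reports to a hash-chained log that anyone can re-verify without a central authority. All names in it (RiskLedger, block_hash, the example peers) are hypothetical.

# Purely illustrative sketch (not from the thread): a hash-chained log of
# risk reports that any peer can re-verify without trusting a central
# authority, in the spirit of Bitcoin's "proof instead of trust" design.
import hashlib
import json


def block_hash(block):
    # Deterministic SHA-256 hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class RiskLedger:
    # Append-only log; each entry commits to the hash of the previous one.
    def __init__(self):
        self.chain = []

    def append(self, peer, risk, severity):
        block = {
            "index": len(self.chain),
            "peer": peer,
            "risk": risk,
            "severity": severity,
            "prev_hash": block_hash(self.chain[-1]) if self.chain else "GENESIS",
        }
        self.chain.append(block)
        return block

    def verify(self):
        # Any peer can recompute the chain and detect tampering with history.
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )


ledger = RiskLedger()
ledger.append("health-authority-A", "novel pathogen cluster", severity=0.7)
ledger.append("observatory-B", "no hazardous near-Earth objects", severity=0.1)
assert ledger.verify()  # verification fails only if an earlier entry is later altered

This only shows that the record-keeping and prediction side can be made decentralised and tamper-evident; it does nothing about the enforcement side, which is exactly the gap identified above.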
Sent 150 USD to the Against Malaria Foundation.
The idea of dumbing people down is also present in the Bad plans section, as “limitation of human or collective intelligence”… But the main idea of preventing human extinction is, by definition, to ensure that at least several examples of Homo sapiens are still alive at any given point in time. It is not the best possible definition. It should also include posthumans, if they are based on humans and share a lot of their properties (and, as Bostrom said, could realise the full human potential). In fact, we can’t say what is really good before we solve the Friendly AI problem. And if we knew what is good, we could also say what the worst outcome is, and thus what constitutes an existential catastrophe. But the real catastrophes that could happen in the 21st century are far from such sophisticated problems of determining the ultimate good, human nature and the full human potential. They are clearly visible physical processes of destruction.
There are some ideas for solving the problem of control from the bottom up, like David Brin’s idea of the transparent society, in which vigilant citizens would scan the web and video sensors searching for terrorists. So it would not be hierarchical control, but net-based, or peer-to-peer.
I like the two extra boxes, but for now I have already spent my prize budget twice over, which unexpectedly puts me in a conflicted situation: as the author of the map I want to make the best and most inclusive map, but as the owner of the prize fund (which I pay from personal money earned by selling art) I feel myself in a bit of a bind :)
Don’t worry about the money. Just like the comments if they are useful. In “Technological precognition”, does this cover time travel in both directions? That is, looking into the future and taking actions to change it, and also sending messages into the past. Also, what about making people more compliant and less aggressive, by either dulling or eliminating emotions in humans or making people more like a hive mind?
I uploaded a new version of the map with the changes marked in blue: http://immortality-roadmap.com/globriskeng.pdf
Technological precognition does not cover time travel, because it is too fantastic. We may include the scientific study of claims about precognitive dreams, as such study will soon become possible with live brain scans of sleeping people and dream recording. Time travel could have its own x-risks, like the well-known grandfather problem.
Lowering human intelligence is in the bad plans section.
I have been thinking about the hive mind… It may be a way to create safe AI, one based on humans and using their brains as free and cheap supercomputers via some kind of neuro-interface. But in fact contemporary science as a whole is an example of such a distributed AI.
If a hive mind is enforced, it is like the worst totalitarian state… If it does not include all humans, the rest will fight against it, and may use very powerful weapons to save their identity. This is already happening in the fight between globalists and anti-globalists.
This is useful. Mr. Turchin, please redirect my award to Satoshi.
Done