In the plans: 1. Isn’t “voluntary or forced devolution” the same as “ludism” and “relinquishment of dangerous science”, which are already in the plan?
I was thinking more along the lines of restricting the chance of divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness?
Technological advances may allow us to alter ourselves so substantially that we become post-human, or no longer human. This could come, for example, from cybernetics or genetic engineering. “Ludism” and “relinquishment of dangerous science” are ways of restricting which technologies we use, but note that we would still be capable of using and creating those technologies. Devolution (perhaps there is a better word for it) would be something like the dumbing down of all or most humans, so that they are no longer capable of using or creating the technologies that could make them less purely human.
I think that “some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware” is basically the same idea as “smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)”, but your wording is excellent.
Yes, you are right. I guess I was implying man-made catastrophes, created in order to cause a paradigmatic change, rather than natural ones.
I still don’t know how we could fix all of the world system problems listed in your link without having control of most of the world, which returns us to plan A1.
I’m not sure either. I would think you could do it by changing the way politics works, so that the policies implemented actually have empirical backing based on what we know about systems. Perhaps this just means AI and improved computational modelling (a toy sketch of what I mean follows this comment).
This idea of needing control of the world seems extremely dangerous to me, although I suppose a top-down approach could solve the problems.
I think that you should also think about what a good bottom-up approach would be. How do we make local communities and societies more resilient, more economical, and more capable of facing potential x-risks?
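As a toy example of the kind of computational modelling for policy-making mentioned above, here is a minimal sketch that compares two hypothetical policies on a crude resource model. The dynamics, parameters, and numbers are my own invented assumptions, purely for illustration:

```python
# Toy system-dynamics sketch: compare two hypothetical policies on a crude,
# invented resource model (logistic regrowth minus yearly extraction).

def simulate(years=50, extraction_rate=0.05, regen_rate=0.02, resource=100.0):
    """Return the remaining resource level after `years` of use."""
    for _ in range(years):
        resource += regen_rate * resource * (1 - resource / 100.0)  # regrowth, capped near 100
        resource -= extraction_rate * resource                      # yearly consumption
        resource = max(resource, 0.0)
    return resource

if __name__ == "__main__":
    business_as_usual = simulate(extraction_rate=0.05)
    conservation_policy = simulate(extraction_rate=0.015)
    print(f"business as usual after 50 years:   {business_as_usual:.1f}")
    print(f"conservation policy after 50 years: {conservation_policy:.1f}")
```

The point is only that candidate policies can be run through an explicit model and compared on their projected outcomes, rather than argued about in the abstract.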
In “survive the catastrophe” I would add two extra boxes:
Limit the impact of a catastrophe by implementing measures that slow its growth and shrink the area it affects. For example, with pandemics you could improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.
Increase the time available for preparation by improving monitoring and early-detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.
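To make the early-detection idea concrete, here is a minimal sketch of flagging weeks whose reported case counts rise well above the recent baseline. The rule, window, and numbers are my own illustrative assumptions, not taken from any real surveillance system:

```python
# Toy outbreak-detection sketch: flag a week whose case count exceeds
# the recent baseline by more than `k` standard deviations.
from statistics import mean, stdev

def detect_anomalies(weekly_cases, window=8, k=3.0):
    """Return indices of weeks that look anomalously high."""
    flagged = []
    for i in range(window, len(weekly_cases)):
        baseline = weekly_cases[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # floor sigma so a flat baseline doesn't make the threshold trivially tight
        if weekly_cases[i] > mu + k * max(sigma, 1.0):
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    cases = [12, 9, 14, 11, 10, 13, 12, 11, 10, 12, 45, 80]  # synthetic data
    print(detect_anomalies(cases))  # -> [10, 11]
```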
Technically, I wouldn’t say we’d lost it if the price of sperm donation rose (from its current negative level) until it stopped being an efficient means of reproduction. But I think you underestimate the threat of regular evolution making a lot of similar changes, if you somehow froze some environment for a long time.
Not only does going back to our main ancestral environment seem unworkable (at least without a superhuman AI to manage it!), but we should also consider the possibility that our moral urges are a mixed bag derived from many environments, not optimized for any.
A question: is it possible to create a risk control system that is not based on centralized power, in the same way that bitcoin is not based on central banking?
For example: local police could handle local crime and terrorists, and local health authorities could find and prevent the spread of disease. If we had many x-risk peers, each could control their own neighborhood within their professional space.
Counterexample: how could it help in situations like ISIS or another rogue state, which may be planning to create a doomsday machine or a virus to be used to blackmail or exterminate other countries?
Bitcoin is an electronic payment system based on cryptographic proof instead of trust. I think the big difference between it and the risk control system is the need for enforcement, i.e. changing what other people can and can’t do. There seem to be two components to the risk control system: predicting what should be researched, and enforcing this. The prediction component doesn’t need to come from a centralised power; it could just come from the scientific community. I would think that the enforcement would need to come from a centralised power. I guess that there does need to be a way to stop the centralised power from causing x-risks. Perhaps this could come from a localised and distributed effort. Maybe something like a better version of Anonymous.
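As a toy illustration of “cryptographic proof instead of trust”, here is a minimal hash-chained public log. This is my own simplification, not how Bitcoin actually works, and the record contents are hypothetical; the point is that anyone can check that the history has not been quietly rewritten, without having to trust whoever stores it:

```python
# Minimal hash-chained log: each record commits to the hash of the previous one,
# so tampering with any earlier record breaks every later hash.
import hashlib
import json

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    prev_hash = "0" * 64
    for record in chain:
        body = {"payload": record["payload"], "prev_hash": record["prev_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != recomputed:
            return False
        prev_hash = record["hash"]
    return True

if __name__ == "__main__":
    chain = []
    add_record(chain, "lab X registered a gain-of-function study")  # hypothetical entries
    add_record(chain, "review board approved containment plan")
    print(verify(chain))                        # True
    chain[0]["payload"] = "nothing happened"    # tamper with history
    print(verify(chain))                        # False
```

This only solves the record-keeping half; as noted above, enforcement is a separate and harder problem.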
The idea of dumbing people down is also present in the Bad Plans section, under “limitation of human or collective intelligence”…
But the main idea of preventing human extinction is, by definition, to ensure that at least several members of Homo sapiens are still alive at any given point in time. It is not the best possible definition. It should also include posthumans, if they are based on humans and share a lot of their properties (and, as Bostrom said, could realise the full human potential). In fact, we can’t say what is really good before we solve the Friendly AI problem. And if we knew what is good, we could also say what the worst outcome is, and so what constitutes an existential catastrophe.
But a real catastrophe that could happen in the 21st century is far removed from such sophisticated problems as determining the ultimate good, human nature, and the full human potential. It is a clearly visible physical process of destruction.
There are some ideas for solving the problems of control from the bottom up, like David Brin’s idea of a transparent society, in which vigilantes would scan the web and video sensors searching for terrorists. So it would not be hierarchical control, but net-based, or peer-to-peer.
I like the two extra boxes, but I have already spent my prize budget twice over, which unexpectedly puts me in a conflicted position: as the author of the map I want to make it the best and most inclusive map possible, but as the owner of the prize fund (which I pay out of personal money earned by selling art) I feel rather stingy :)
Don’t worry about the money. Just give the comments a like if they are useful.
In “Technological precognition”, does this cover time travel in both directions? That is, looking into the future and taking actions to change it, and also sending messages into the past.
Also, what about making people more compliant and less aggressive, either by dulling or eliminating emotions in humans or by making people more like a hive mind?
Technological precognition does not cover time travel, because it is too fantastic. We may include the scientific study of claims about precognitive dreams, as such study will soon become possible with live brain scans of sleeping people and dream recording.
Time travel could have its own x-risks, like the well-known grandfather problem.
Lowering human intelligence is in the Bad Plans section.
I have been thinking about the hive mind… It may be a way to create safe AI, which would be based on humans and use their brains as free and cheap supercomputers via some kind of neuro-interface. But in fact contemporary science as a whole is an example of such a distributed AI.
If a hive mind is enforced, it is like the worst totalitarian state… If it does not include all humans, the rest will fight against it, and may use very powerful weapons to save their identity. This is already happening in the fight between globalists and anti-globalists.
Send it to one of the charities here.
Sent 150 USD to the Against Malaria Foundation.
I uploaded a new version of the map with changes marked in blue: http://immortality-roadmap.com/globriskeng.pdf