Totally agree that some x-risks are non-agential, such as (a) risks from nature, and (b) risks produced by coordination problems, resulting in e.g. climate change and biodiversity loss. As for superpowers, I would classify them as (7). Thoughts? Any further suggestions? :-)
“Rogue country” is an evaluative label applied from the outside.
Let's try to define “rogue country” by its evaluation-independent characteristics:
1) It is a country that fights for world domination.
2) It is a country interested in the worldwide promotion of its (crazy) ideology (e.g. the USSR and communism).
3) It is a country whose survival is threatened by the risk of aggression.
4) It is a country ruled by a crazy dictator.
I would say that superpowers are a type of “rogue country”, as they sometimes combine several of the properties listed above.
The difference is mainly that we have always had two (or three) superpowers fighting for world domination. Typically one of them held first place while another challenged its position as world leader. The second superpower is more willing to create global risk, since doing so may raise its “status” or its chances of overpowering the “alpha-superpower”.
The topic is interesting, and there is a lot that could be said about it, including the current political situation and even the war in Syria. I just read an article today that explained this war from that point of view.
I would also add Doomsday blackmailers: rational agents who would create a Doomsday Machine to blackmail the world, with the goal of world domination.
Another option worth considering is arrogant scientists who benefit personally from dangerous experiments. For example, CERN proceeded with the LHC before its safety was proven. A group of bioscientists excavated the 1918 pandemic flu, sequenced it, and posted the sequence on the internet. And another scientist, studying genetic variation, deliberately created a new superflu that could make bird flu stronger.
We could imagine a scientist who would try to increase his personal longevity through gene therapy even if it posed a 1 percent pandemic risk. And if there are many such scientists...
There is also a possible class of agents who try to create a smaller catastrophe in order to prevent a larger one. The recent movie “Inferno” is about this: a character creates a virus to kill half of humanity in order to save all of humanity later.
I listed all my ideas in my map of agents, which is here on Less Wrong: http://lesswrong.com/r/discussion/lw/o0m/the_map_of_agents_which_may_create_xrisks/