Why just the US government?
Of course, not only the US government, but the governments of all other countries that have the potential to influence AI research and existential risks. For example, North Korea could play an important role in existential risks, as it is said to be developing smallpox bioweapons.

In my opinion, we need a global government to address existential risks, and an AI that takes over the world would be a form of global government. I was routinely downvoted for such posts and comments on LW, so it is probably not an appropriate place to discuss these issues.
Smallpox isn’t an existential risk—existential risks affect the continuation of the human race. So far as I know, the big ones are UFAI and asteroid strike.
I don’t know of classifications for very serious but smaller risks.
Look, ordinary smallpox is not an existential risk, but biological weapons could be if they were specifically designed to be one. The simplest way to do that is the simultaneous use of many different pathogens. If we have 10 viruses, each with 50 per cent mortality, that would mean roughly a 1000-fold reduction of the human population, and the last few million people would be so scattered and poorly adapted that they could continue on to extinction. North Korea is said to be developing 8 different bioweapons, but with the progress of biotechnology it could be hundreds.

But my main point here was not a classification of existential risks; it was to address the idea that preventing them is a question of global politics, or at least it should be if we want to survive.
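To make the arithmetic behind the multi-pathogen scenario explicit, here is a minimal sketch. It takes the comment's assumptions as given (10 pathogens, 50 per cent mortality each) and adds one more: that deaths from the different pathogens are independent. The world-population figure is only a rough round number.

```python
# Back-of-the-envelope model of simultaneous release of several pathogens,
# following the assumptions in the comment above: 10 pathogens, each killing
# 50% of the population, acting independently of one another.

n_pathogens = 10          # assumed number of distinct pathogens
mortality_each = 0.5      # assumed per-pathogen mortality
world_population = 7.5e9  # rough round number for world population

# Probability of surviving every pathogen, under the independence assumption.
survival_fraction = (1 - mortality_each) ** n_pathogens   # 0.5**10 = 1/1024

survivors = world_population * survival_fraction
print(f"Survival fraction: {survival_fraction:.6f} (about 1/{1/survival_fraction:.0f})")
print(f"Survivors: about {survivors / 1e6:.1f} million")
# Result: roughly a 1000-fold reduction, leaving on the order of 7 million people.
```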
Infectious agents with high mortality rates tend to weed themselves out of the population. There's a sweet spot for infectious disease: prolific enough to pass itself on, but not so prolific as to kill its host before it gets the opportunity. Additionally, there's a strong negative feedback against particularly nasty diseases in the form of quarantine.
A much bigger risk to my mind actually comes from healthcare, which can push that sweet spot further into the “mortal peril” section. Healthcare provokes an arms race with infectious agents; the better we are at treating disease and keeping it from killing people, the more dangerous an infectious agent can be and still successfully propagate.
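The "sweet spot" argument and the healthcare arms race can be illustrated with a toy virulence-transmission trade-off model. This is only an illustrative sketch, not anything from the comments above: the functional forms and every parameter value (b, k, mu, gamma, the treatment levels) are invented for demonstration. It assumes transmission rises with virulence but saturates, while higher virulence shortens the infectious period by killing hosts, and treatment removes part of that virulence-induced mortality.

```python
import numpy as np

def r0(v, treatment=0.0, b=2.0, k=1.0, mu=0.02, gamma=0.1):
    """Toy basic reproduction number for a pathogen with virulence v.

    Transmission rises with virulence but saturates (b*v/(k+v));
    the infectious period shrinks as virulence kills hosts faster;
    treatment removes a fraction of the virulence-induced deaths.
    All parameter values are made up purely for illustration.
    """
    beta = b * v / (k + v)              # transmission rate
    death = v * (1.0 - treatment)       # virulence-induced mortality
    return beta / (mu + gamma + death)  # transmission rate * infectious period

v_grid = np.linspace(0.01, 5.0, 500)

for t in (0.0, 0.5, 0.9):               # no treatment, partial, strong
    fitness = r0(v_grid, treatment=t)
    v_opt = v_grid[np.argmax(fitness)]
    print(f"treatment={t:.1f}  fitness-maximizing virulence ~ {v_opt:.2f}")
```

With these made-up numbers, the fitness-maximizing virulence climbs as treatment improves, which is the shift of the sweet spot toward "mortal peril" described above.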