[Question] Realistic near-future scenarios of AI doom understandable for non-techy people?
I would like to compile a list of AI doom scenarios that most people (especially politicians) will understand and will agree are realistic and fact-based. A few examples:
There are AIs that help invent new chemicals. For example, AlphaFold helps with designing new proteins. Such AIs can be used to design and improve biological weapons (e.g. by making Ebola even more deadly). North Korea, ISIS, or even a resourceful loner could use AI to create a virus capable of killing billions of people.
Several major countries are working on automating their militaries. For example, the US defense contractor Palantir has announced a system (“Palantir AIP”) that uses AI to automate much of high-level military decision-making. China is working on similar systems. If the trend continues, AI will penetrate all levels of military command, and more and more decisions will be delegated to it. But even the smartest AIs are not error-free. A trusted AI that seems reliable in most situations could make a deadly mistake in an unusual one. This could lead to all sorts of dangerous scenarios, including an avalanche-like escalation between the American and Chinese “automatic generals”, greatly increasing the risk of nuclear war.
Social networks, targeted ads, and chatbot trolls are already used to sow division, promote radical ideologies, and help dangerous populists win elections. The smarter AIs become, the more effective they are at manipulating public opinion. This makes it easier for radical movements to gain traction, and for unhinged people to become presidents. The next bin Laden and the next Hitler will gain power thanks to AI. And this time it will be much easier for them to develop weapons of mass destruction (e.g. AI-designed bioweapons).
What are some other such scenarios?
It’s worth realizing that a lot of the “this doesn’t seem like a problem” reaction from politicians and “the public” is actually cover for “I don’t want this to be a problem, and I see lots of visible and immediate harm from proposed solutions”. That second part is the True Objection: without a relatively painless and/or near-guaranteed solution, there is no incentive to acknowledge the problem.
I think https://scottaaronson.blog/?p=7266 might be a good overview. It is kind of pointless to focus on specific ways everyone dies, since it is easy to argue with each specific one. The whole point is that Doom is disjunctive. It is like trying to dam or plug a single channel of a river’s delta: there will always be another stream flowing somewhat differently to flood the basin, and the process is adversarial and anti-inductive.
I don’t think it is pointless to focus on specific ways everyone dies, unless there is a single strategy that addresses every possible way everyone dies.
If FOOM isn’t likely but something like this is likely, it seems really unlikely to me that the approach of “continue to focus on strategies that rely on a single agent having a high level of control over the world” is still optimal (or, more accurately, it’s probably still a good idea to have some people working on that but not all the people).
I just started a writing contest for detailed scenarios of how we get from our current situation to AI ending the world. I want to compile the results on a website so we have an easily shareable link with more scenarios than can be dismissed ad hoc: individual scenarios taken from a huge list are easy to argue against, which discredits the list, but a critical mass of them presented at once defeats this effect. If anyone has good examples, I’ll add them to the website.