In a previous post, I mused that we might be focusing too much on general intelligences, and that the route to powerful and dangerous intelligences might go through much more specialised intelligences instead. Since it’s easier to reason with an example, here is a potentially deadly narrow AI (partially due to Toby Ord). Feel free to comment and improve on it, or suggest you own example.
It’s the standard “pathological goal AI” but only a narrow intelligence. Imagine a medicine designing super-AI with the goal of reducing human mortality in 50 years—i.e. massively reducing human population in the next 49 years. It’s a narrow intelligence, so it has access only to a huge amount of human biological and epidemiological research. It must gets its drugs past FDA approval; this requirement is encoded as certain physical reactions (no death, some health improvements) to people taking the drugs over the course of a few years.
Then it seems trivial for it to design a drug that would have no negative impact for the first few years, and then causes sterility or death. Since it wants to spread this to as many humans as possible, it would probably design something that interacted with common human pathogens—colds, flues—in order to spread the impact, rather than affecting only those that took the disease.
Now, this narrow intelligence is less threatening than if it had general intelligence—where it could also plan for possible human countermeasures and such—but it seems sufficiently dangerous on its own that we can’t afford to worry only about general intelligences. Some of the “AI superpowers” that Nick mentions in his book (intelligence amplification, strategizing, social manipulation, hacking, technology research, economic productivity) could be enough to cause devastation on their own, even if the AI never developed other abilities.
We still could be destroyed by a machine that we outmatch in almost every area.
An example of deadly non-general AI
In a previous post, I mused that we might be focusing too much on general intelligences, and that the route to powerful and dangerous intelligences might go through much more specialised intelligences instead. Since it’s easier to reason with an example, here is a potentially deadly narrow AI (partially due to Toby Ord). Feel free to comment and improve on it, or suggest you own example.
It’s the standard “pathological goal AI” but only a narrow intelligence. Imagine a medicine designing super-AI with the goal of reducing human mortality in 50 years—i.e. massively reducing human population in the next 49 years. It’s a narrow intelligence, so it has access only to a huge amount of human biological and epidemiological research. It must gets its drugs past FDA approval; this requirement is encoded as certain physical reactions (no death, some health improvements) to people taking the drugs over the course of a few years.
Then it seems trivial for it to design a drug that would have no negative impact for the first few years, and then causes sterility or death. Since it wants to spread this to as many humans as possible, it would probably design something that interacted with common human pathogens—colds, flues—in order to spread the impact, rather than affecting only those that took the disease.
Now, this narrow intelligence is less threatening than if it had general intelligence—where it could also plan for possible human countermeasures and such—but it seems sufficiently dangerous on its own that we can’t afford to worry only about general intelligences. Some of the “AI superpowers” that Nick mentions in his book (intelligence amplification, strategizing, social manipulation, hacking, technology research, economic productivity) could be enough to cause devastation on their own, even if the AI never developed other abilities.
We still could be destroyed by a machine that we outmatch in almost every area.