Why will AI be dangerous?
LessWrong should offer a short pitch on why AI will be dangerous, and this aims to be that.1
Many people think, “Why would humanity make dangerous AI? Seems like a stupid idea. Can’t we just make the safe kind?” No. Humanity will make dangerous AI for the same reason we made every other technology dangerous: it’s more useful.
A knife sharp enough to cut fruit can cut your finger. Electrical outlets with enough power to run your refrigerator can stop your heart. A car with enough horsepower to carry your family up a hill can easily kill a pedestrian. Useful systems must be dangerous, because being useful means being able to have large effects on the environment, and “large effects” can be good or bad depending on your perspective.
We’ll make AI powerful too: knowledgeable enough to cure diseases, tactical enough to outsmart terrorists, and capable enough to run an economy by itself, all so we can relax! That means our AI will also be knowledgeable enough to invent new diseases, tactical enough to outsmart freedom fighters, and responsible enough to run a military-industrial complex all by itself.
We won’t do the latter on purpose, any more than we crash cars or cut our fingers on purpose. The problem is that there are more ways for complex systems to go “wrong” than “right”, even when nobody is being malicious: this universe is psychopathic. The asteroid that killed the dinosaurs wasn’t malicious; it just didn’t care. Nowhere in the behavior of the asteroid was encoded any consideration of life, or even of itself. It just followed mathematical laws (of gravity), which don’t care about “morality”.
Like the asteroid, AI systems just follow mathematical laws (of intelligence), which don’t care about “morality”. We’ll add safety components that can consider moral questions, but mistakes will be made, as they have been with every previous technology.
The odds of a mistake go up a lot when you consider that the whole selling point of AI is to be smarter than us, and that entities smarter than you are unpredictable. They will do things you didn’t think of, sometimes things you didn’t even know were possible. AI presents a unique and unfamiliar danger to humanity because, unlike other technologies, an AI disaster might not wait around for you to come and clean it up. That wouldn’t be very intelligent, would it?
Footnotes
1. Most posts here are long, detailed, and written in esoteric language, which is not good for first impressions. Entry-level posts should also be easier to find.