I think you raise a crucial point. I find it challenging to explain to people that AI is likely very dangerous. It’s much easier to explain why pandemics, nuclear war, or environmental crises are dangerous. I think this is mainly due to the abstractness of AI and the concreteness of those other dangers, which leads to availability bias.
The most common counterarguments I’ve heard for why AI isn’t a serious risk are:
1. AI is impossible; machines are just “mechanical” and lack some magical property that only humans have.
2. When we build AIs, we will not embed them with negative human traits such as hate, anger, and vengeance.
3. Technology has been the most significant driver of improvements in human well-being, and there’s no solid evidence that this will change.
I have found that comparing the relationship between humans and chimpanzees to the relationship between hypothetical AIs and humans is an explanation people find compelling. There’s plenty of evidence that chimpanzees are quite intelligent, but they are simply not intelligent enough to influence human decision-making. As a result, chimps spend their lives in captivity across the globe.
Another good explanation is based on insurance. The probability that your house will be destroyed is small, but it’s still prudent to buy home insurance. Suppose you believe the likelihood that AI will be dangerous is small. Is it not wise to insure ourselves by dedicating resources to the development of safe AI?
As another short argument:
We don’t need an argument for why AI is dangerous, because dangerous is the default state of powerful things. There needs to be a reason AI would be safe.