Google, OpenAI, and other groups are working to create AI smarter than any human at every mental task. But there’s a problem: they’re building on their current “AI” software, which is designed for narrow tasks: recognizing faces, completing sentences, playing games. Researchers test what’s easy to measure, not what best serves complicated human wants and needs. So the first-ever superhuman AI will probably be devoted to a “dumb” goal. If it wants to maximize that goal, it will use its intelligence to steamroll the things humans value, and we likely couldn’t stop it, since it’s smarter and faster than we are. Even an AI that only wants a “good enough” outcome could still take extreme actions to prevent anyone from getting in its way, or to increase its own certainty of reaching the goal. Recall Stalin killing millions of people just to increase his certainty that his enemies had been purged. (Policymakers)