The idea that AI could threaten the human race by being smarter than us is an old one. The reason for the panic now is that we are seeing new breakthroughs in AI every month or so, while the theory and practice of safely developing superhuman AI barely exist. Apparently the people leading the charge towards superhuman AI trust that they will figure out how to avoid danger along the way, or think that they can’t afford to let the competition get ahead, or… who knows what they’re thinking.
For some time I have insisted that the appropriate response to this situation (for people who see the danger and have the ability to contribute to AI theory) is to try to solve the problem directly, i.e. design human-friendly superhuman AI. You can’t count on convincing everyone to go slowly, and you certainly can’t count on the world’s superpowers to force everyone to go slowly. Someone has to actually solve the problem.
I have also been insisting that June Ku’s MetaEthical.AI is the most advanced blueprint we have. I am planning to make a discussion post about it, since it has received surprisingly little attention.
I agree with your second paragraph (and most of your first paragraph). Also, “going slowly” doesn’t solve the problem on its own; you still need to solve alignment sooner or later.