One of the main recommendations in the paper is this:
“As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.”
One might reasonably hope that market forces would have a broadly similar effect: people simply won’t buy unsafe machines (except, perhaps, the military!).
However, unilateral selective relinquishment of technologies that facilitate machine intelligence may disadvantage our own efforts in that direction, rendering us impotent and ceding the initiative to other parties. That strategy could easily have more costs than benefits. The possibility needs serious consideration before a path of selective relinquishment is taken on anything but a global scale.
As for global selective relinquishment, that involves a considerable coordination problem. We may see global coordination at some stage, but perhaps not long before we see sophisticated machine intelligence. A global plan may simply not be viable.
Would a strategy of biasing development and relinquishment have worked for car safety? There, a big part of the problem is that society is prepared to trade speed for lives, and different societies make that tradeoff at different points. Technological approaches to improving safety might help, but they don’t really address one of the main causes of the problem. This perspective leads me to suspect that there are other strategies besides biasing development and relinquishment worth considering here.
Technological fixes are neat, but we probably shouldn’t just be thinking about them.