It is worth mentioning that, unlike in the case of nuclear war, where the nature of the threat was visible to politicians and the public alike, alignment seems to be a problem which not even all AI researchers understand. That in itself probably excludes the possibility of a direct political solution. But even politics in the narrow sense can be utilized with a bit of creativity (e.g. by providing politicians with a motivation more direct than saving the world, grounded in things they can understand without having to believe the weird-sounding claims of cultish-looking folks).
(Very late to this thread)
The failure to recognise/understand/appreciate the problem does seem to be an important factor. And if it were utterly unchangeable, maybe that would mean all efforts should just go towards technical solutions. But it’s not utterly unchangeable; in fact, it’s a key variable which “political” (or just “not purely technical”) efforts could intervene on to reduce AI x-risk.
E.g., a lot of EA movement building, outreach by AI safety researchers, Stuart Russell’s book Human Compatible, etc., is partly aimed at getting more AI researchers (and/or the broader public, or certain elites like politicians) to recognise/understand/appreciate the problem. And by doing so, such efforts could have other benefits, like increasing the amount of technical work on AI safety, or influencing policies that reduce the risk of different AI groups rushing to the finish line and compromising on safety.
I think this was actually somewhat similar in the case of nuclear war. Of course, the basic fact that nuclear weapons could be very harmful is a lot more obvious than the fact that AGI/superintelligence could be very harmful. But the main x-risk from nuclear war is nuclear winter, and that’s not immediately obvious: it requires some fairly involved modelling, and is unlike anything people have seen in their lifetimes. And according to Toby Ord in The Precipice (page 65):
The discovery that atomic weapons may trigger a nuclear winter influenced both Ronald Reagan and Mikhail Gorbachev to reduce their country’s arms to avoid war.
So in that case, technical work on understanding the problem was communicated to politicians (the communication itself being a non-technical intervention), which made the potential harms clearer to them and helped lead to a partial political solution.
Basically, I think that technical and non-technical interventions are often intertwined and support each other, and that we should see current levels of recognition that AI risk is a big deal as something we can and should intervene to change, not as something fixed.