I certainly agree we should not content ourselves with an AI ban in lieu of technical progress.
Why not? An AI ban isn’t politically possible, but if it were enacted and enforced, I’d expect it to be effective at preventing risks from unaligned AI.
One argument I’ve heard against banning AI research (even if you could do such a thing) is that hardware will continue to improve. This is bad because it lets less technically able parties wield supercomputer-level AI capabilities. Better that a single company stay ahead in the race than that anyone gets even a remote chance of creating a seed AI in their basement.
I argue a ban is not enforceable in any meaningful sense. Returning to the nuclear weapons example: building them requires large industrial facilities and a logistical footprint, which can be tracked and targeted for enforcement. By contrast, computers and mathematics are cheap and ubiquitous, and you cannot have a modern civilization without them. As secret projects go, AI would be trivial to conceal. The best we could do is enforce a publishing ban, but stopping the flow of any kind of information is very expensive, and we could not confidently say the risk was mitigated, only delayed. Further, voluntary compliance would only mean ceding the first-mover advantage to institutions already less concerned with issues like ethics.
I would expect attempts to ban AI research to make it marginally less likely to appear, and much less likely to be aligned if it does. Not a net gain.