A point that K. Eric Drexler makes about nanotechnology research also applies to AI research: if a capability can be gained, eventually it will be gained, and we therefore cannot base humanity's survival on AI never happening. Doing so is denying the inevitable. Instead, we can only hope to manage it as well as possible. Suppose we took the view that ethical people would not create AI. By definition, the only people creating it would be unethical people, who would then control what happened next; so by opting out, all the ethical people would be doing is handing power over to unethical people. I think this makes the position of ethical withdrawal ethically dubious.
-Paul Almond