I think more money spent right now, even with the best of intentions, is likely to increase capabilities much faster than it reduces risk. I think OpenAI and consequent capability races are turning out to be an example of this.
There are hypothetical worlds where spending an extra ten billion (or a trillion) dollars on AI research with good intentions doesn’t do this, but I don’t think they’re likely to be our world. I don’t think that directing who gets the money is likely to prevent it, without pretty major non-monetary controls in addition.
I do agree that OpenAI is an example of good intentions going wrong; however, I think we could learn from that, and top researchers would be wary of such risks.
Nevertheless, I do think your concerns are valid and important not to dismiss.