Sadly, I could only create questions between 1 and 99 for some reason, so I guess we should interpret 1% to mean 1% or less (including negative).
What makes you think more money would be net negative?
Do you think it would also be negative if you had 100% control over how the money was spent, or would that only apply if other AI Alignment researchers were responsible for the donation strategy?
I think more money spent right now, even with the best of intentions, is likely to increase capabilities much faster than it reduces risk. I think OpenAI and the ensuing capability races are turning out to be an example of this.
There are hypothetical worlds where spending an extra ten billion (or a trillion) dollars on AI research with good intentions doesn’t do this, but I don’t think they’re likely to be our world. I don’t think that directing who gets the money is likely to prevent it, without pretty major non-monetary controls in addition.
I do agree that OpenAI is an example of good intentions going wrong; however, I think we could learn from that, and top researchers would be wary of such risks.
Nevertheless, I do think your concerns are valid, and it's important not to dismiss them.