1. Unfortunately, I think it’s harder to convert government funding into novel research than one might expect. There are only so many competent thinkers sufficiently up to speed on the problem to contribute within the short remaining time before catastrophe. I do agree that more government funding would help a lot (and I’d personally love to be given some!), and that it would help a huge amount in the long term (> 20 years). In the short term (< 3 years), however, I don’t think even a trillion dollars of government funding would produce AI safety R&D progress sufficient to exceed all previous human AI safety R&D. I also think there are diminishing returns to funding: once the available researchers are motivated enough to switch to the topic and their labs are well supplied with compute and assistants, extra money buys little more. In the current world, you probably don’t get much for your 11th trillion (the toy sketch after this list illustrates the shape I have in mind). So yeah, I’d definitely endorse spending $10 trillion on AI safety R&D, although I do think there are ways of implementing this that would be unhelpful or even counterproductive.
2. I think that exceeding previous AI safety R&D is very different from obsoleting it. Building on a foundation and reaching greater heights doesn’t make the foundation worthless. If you do think that the foundation is worthless, I’d be curious to hear your arguments, but that seems like a different train of thought entirely.
3. I think there will be a critical period in which AI is strong enough that augmented/automated AI safety R&D can rapidly eclipse the existing body of work. I don’t think we’re there yet, and I wouldn’t choose to accelerate AI capabilities timelines to get there sooner. Keeping AI safety labs well supplied with funding and compute is important, but no amount of money or compute currently buys the not-yet-existing research-assistant-level AI that would kick off that critical automated-research period. I worry that the same level of capability will also enable recursive self-improvement in AI capabilities research, so that is going to be a very dangerous time indeed.
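To make the diminishing-returns point in (1) concrete, here’s a toy sketch. The logarithmic shape and the `saturation` parameter are purely illustrative assumptions on my part, not estimates of anything real:

```python
import math

# Toy diminishing-returns model (my own illustrative assumption, not an
# estimate): total safety progress ~ log(1 + funding / saturation), where
# `saturation` is roughly the funding level at which the existing pool of
# up-to-speed researchers is fully supplied with compute and assistants.
def progress(funding_trillions: float, saturation: float = 2.0) -> float:
    return math.log1p(funding_trillions / saturation)

# Marginal value of each additional trillion under this toy curve:
for t in range(1, 12):
    marginal = progress(t) - progress(t - 1)
    print(f"trillion #{t}: marginal progress ~ {marginal:.3f}")
```

Under this curve the 11th trillion buys roughly a fifth of what the 1st did, which is the qualitative shape I’m gesturing at: still positive, but nowhere near linear in money.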