It’s taking a massive massive failure and trying to find exactly the right abstract gloss to put on it that makes it sound like exactly the right perfect thing will be done next time.
I feel like Ngo didn’t really respond to this?
Like, later he says:
Right, I’m not endorsing this as my mainline prediction about what happens. Mainly what I’m doing here is highlighting that your view seems like one which cherrypicks pessimistic interpretations.
But… Richard, are you endorsing it as ‘at all in line with the evidence’? Like, when I imagine living in that world, it doesn’t have gain-of-function research, which our world clearly does. [And somehow this seems connected to Eliezer’s earlier complaints, where it’s not obvious to me that when you wrote the explanation, your next step was to figure out what that would actually imply and check whether it was true.]
I think we live in a world where there are very strong forces opposed to technological progress, which actively impede a lot of impactful work, including technologies which have the potential to be very economically and strategically important (e.g. nuclear power, vaccines, genetic engineering, geoengineering).
This observation doesn’t lead me to a strong prediction that all such technologies will be banned; nor even that the most costly technologies will be banned—if the forces opposed to technological progress were even approximately rational, then banning gain-of-function research would be one of their main priorities (although I note that they did manage to ban it; the ban just didn’t stick).
But when Eliezer points to covid as an example of generalised government failure, and I point to covid as also being an example of the specific phenomenon of people being very wary of new technology, I don’t think that my gloss is clearly absurd. I’m open to arguments that say that serious opposition to AI progress won’t be an important factor in how the future plays out; and I’m also open to arguments that covid doesn’t provide much evidence that there will be serious opposition to AI progress. But I do think that those arguments need to be made.