No, I think the same mechanism of action is still pretty plausible, even in 2021 (attracting more researchers and encouraging more effort to go into blindly-scaling-type research), so I think additional research here could have similar effects. As Gwern has written about extensively, for some reason the vast majority of AI companies are still not taking the scaling hypothesis seriously, so there is lots of room for more AI companies to go all-in on it.
I also think there is a broader reference class of “having important ideas about how to build AGI” (of which the scaling hypothesis is one), that due to our proximity to top AI labs, does seem like it could have a decently sized effect.
As in my comment, I think saying “Timelines are short because the path to AGI is (blah blah)” is potentially problematic in a way that saying “Timelines are short” is not. In particular, it’s especially problematic (1) if “(blah blah)” is an obscure line of research, or (2) if “(blah blah)” is a well-known but not widely-accepted line of research (e.g. the scaling hypothesis) AND the post includes new concrete evidence or new good arguments in favor of it.
If neither of those is applicable, then I want to say there’s really no problem. Like, if some AI Company Leader is not betting on the scaling hypothesis, not after GPT-2, not after GPT-3, not after everything that Gwern and OpenAI etc. have said about the topic … well, I have a hard time imagining that yet another LW post endorsing the scaling hypothesis would be what tips the balance for them.
I have updated over the years on how many important people in AI read and follow LessWrong and the associated meme-space. I agree that marginal discussion does not make a big difference. I also think that, overall, the discussion so far probably didn’t make enough of a difference to be net-negative, but its effect was substantial enough to make me think for quite a while about whether it was worth it overall.
I agree with you that the future costs seem marginally lower, but not low enough that I don’t want to think hard, and encourage others to think hard, about the tradeoff. My estimate of the tradeoff came out on the net-positive side, but I wouldn’t think it crazy for someone else’s estimate to come out net-negative.