I disagree, but I’m not sure how relevant my opinion is, since I’m far less worried about “AGI ruin” to begin with than the median LWer. That said, here’s my thinking:
First, there’s no universally agreed-upon line between “discussing whether the analysis has merits” and “giving the capabilities people free ideas.” Where a person draws this line depends on how obvious they think the ideas are, or how obvious they think they will be to the capabilities people.
Second, there are costs to not talking about things. It’s useful for alignment research to have a correct sense of where capabilities research is headed, and where it isn’t headed. If alignment researchers talk more to one another than to “capabilities people” (true IME), and they practice self-censorship like this, they’ll end up with some importantly wrong beliefs.
Also, and perhaps worse—if alignment researchers never voice their own secret capabilities ideas in fora where “capabilities people” can hear, then they’ll never receive feedback about these ideas from the people who know what it would be like to apply them in the real world. Alignment researchers may end up with private stockpiles of “secret tricks” in their heads which are actually either misguided or obvious, and this disconnect will be a further source of false beliefs.
So, to motivate your concern, we need to imagine a world where
commenters on LW are proficient enough at capabilities research that they can make non-obvious advances in blog comments, in a way that “moves the needle” of capabilities research, and
this is worth the false-belief downsides of self-censorship (say, because commenters on LW are sufficiently informed about capabilities research that they will not form false beliefs anyway)
This seems far from the real situation, IMO. Based on what I see, “alignment researchers don’t understand capabilities research well enough” seems like far more of a live threat to alignment than “alignment researchers are too good at capabilities research, and keep accidentally pushing the field forward in blog comments.” (At least using alignment-interested folks on LW as a proxy for “alignment researchers,” and that’s who we’re deciding norms for anyway.)
Like, take this post as an example. I was motivated to write this post because I felt like the Chinchilla paper wasn’t understood well on LW.
It seems like people have heard of Chinchilla, but mentally categorized it as a simple "sudden jump" in overall capabilities that otherwise left everything the same, rather than as a result that demands reconsideration of basic background assumptions. I still saw people treating LM param counts as if they were interchangeable with LM quality/scariness (and with LM training compute). People would ask things like "what would it cost (in compute spending) to train a 10T parameter Chinchilla?", which is a bizarre way to frame things if you grok what Chinchilla is.
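For concreteness, here's a rough back-of-the-envelope sketch of what that question actually implies, assuming the usual C ≈ 6·N·D estimate of training FLOPs and the roughly 20-tokens-per-parameter rule of thumb the Chinchilla results suggest (the exact multiplier depends on which of the paper's fitting approaches you use, so treat every number as order-of-magnitude only):

```python
# Rough back-of-the-envelope, not a precise costing.
# Assumptions: training compute C ~= 6 * N * D, and Chinchilla-optimal
# data is roughly 20 tokens per parameter. Both are approximations.

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal token count for a given parameter count."""
    return tokens_per_param * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard 6*N*D estimate of training FLOPs."""
    return 6.0 * n_params * n_tokens

n = 10e12                              # the hypothetical 10T-parameter model
d = chinchilla_optimal_tokens(n)       # ~2e14 tokens, i.e. ~200T tokens
c = training_flops(n, d)               # ~1.2e28 FLOPs

# For scale: PaLM was ~540B params trained on ~780B tokens.
palm_c = training_flops(540e9, 780e9)  # ~2.5e24 FLOPs

print(f"tokens needed:  {d:.1e}")
print(f"training FLOPs: {c:.1e}")
print(f"PaLM multiples: {c / palm_c:.0f}x")  # roughly 4,700x PaLM's training compute
```

The point isn't the exact figure; it's that under Chinchilla a parameter count alone doesn't pin down cost or capability at all. A "10T parameter Chinchilla" only makes sense if you also have a couple hundred trillion tokens and on the order of 10^28 FLOPs to spend, which is why leading with the parameter count is the wrong place to start.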
I don’t think I’m presenting some novel insight in this post. Mostly, I’m just reiterating what the papers say. I expect any serious capabilities researcher in this area to have read these papers and internalized them at the same depth I have (or more). But people on LW hadn’t done that, and more generally people “interested in AI” who don’t closely read all these papers hadn’t done that. So I wrote an explainer.
The LW reaction to new ML results typically looks this way to me. Obviously "LW" is not a monolith and there are plenty of people here who do seriously internalize papers like this. But the "general trend of the conversation," insofar as there is such a thing, repeatedly strikes me as over-focused on concrete, impressive-sounding results (especially those that sound impressive out of context), and under-focused on more theoretical advances that sound boring on paper but change the rules of the whole game. The conversation "keeps up" with ML in the sense that it takes note of the decontextualized top-line results in new papers, but it often lacks a mechanistic appreciation of how it all fits together.
Anyway, at the current frontier, this strikes me as a quantitatively much bigger deal for alignment than the risk of accidentally handing over free advances to the capabilities people.
People would ask things like "what would it cost (in compute spending) to train a 10T parameter Chinchilla?", which is a bizarre way to frame things if you grok what Chinchilla is.

That wasn't an alignment researcher, though (was it? I thought Tomás was just an interested commenter), and it's a reasonable question to ask when no one has run the numbers; when you get an answer like "well, it'd take something like >5000x more compute than PaLM", that's a lesson learned.
At least among the people I've talked to, it seems reasonably well understood that Chinchilla had major implications: it meant an immediate capabilities jump and cheaper deployment, and, even more importantly, that parameter scaling was dead and that data and then compute were the bottleneck (which is also what I've said bluntly in my earlier comments). That is why Chinchilla was more important than splashier stuff like PaLM*. (One capabilities researcher, incidentally, wasn't revising plans, but that's because he wasn't convinced Chinchilla was right in the first place! AFAIK, there has been no dramatic followup to Chinchilla on par with GPT-3 following up Kaplan et al., and in fact no one has replicated Chinchilla at all, much less run a full scaling-law sweep and inferred similar scaling laws, so there is still some doubt about how real Chinchilla is and how accurate or generalizable its scaling laws are, quite aside from the usual issues like hilariously vague descriptions of datasets.)
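To spell out what "parameter scaling was dead" means in terms of the fitted laws: Kaplan et al. had compute-optimal model size growing much faster than data (roughly N ∝ C^0.73, D ∝ C^0.27), whereas the Chinchilla fits put both at roughly C^0.5. Here's a minimal sketch of how differently the two allocations play out; the exponents are the commonly cited approximate values and the reference point is a Gopher-sized model, so the outputs are illustrative only:

```python
# Illustrative contrast of compute-optimal allocation under approximate
# Kaplan-style vs Chinchilla-style scaling exponents. Normalized to a
# Gopher-sized reference model so we only need the exponents, not the
# absolute fitted coefficients.

REF_PARAMS = 280e9   # ~Gopher: 280B parameters
REF_TOKENS = 300e9   # trained on ~300B tokens

def scale_allocation(compute_multiple: float, param_exp: float):
    """Scale params and tokens from the reference point for a given multiple
    of the reference compute budget. Since C ~ 6*N*D, the token exponent is
    whatever is left over once params take param_exp."""
    token_exp = 1.0 - param_exp
    return (REF_PARAMS * compute_multiple ** param_exp,
            REF_TOKENS * compute_multiple ** token_exp)

for k in (10, 100, 1000):
    kap_n, kap_d = scale_allocation(k, param_exp=0.73)   # Kaplan-style
    chi_n, chi_d = scale_allocation(k, param_exp=0.50)   # Chinchilla-style
    print(f"{k:>5}x compute | Kaplan: {kap_n:.1e} params / {kap_d:.1e} tokens"
          f" | Chinchilla: {chi_n:.1e} params / {chi_d:.1e} tokens")
```

Under the Kaplan-style allocation the parameter count runs away from the data; under the Chinchilla-style allocation the data requirement grows just as fast as the model, which is exactly why data (and the compute to chew through it) becomes the binding constraint.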
I also agree with Tom that if one had thoughts about Chinchilla, data sampling, and brand-new scaling dynamics catapulting immediately into arms races, it is increasingly close to the point where a reasonable person might decide to move those discussions to more private channels, and for that reason the public discussions of Chinchilla might stay very basic, of the "could we train a 10T parameter Chinchilla" sort.
* PaLM and DALL-E 2 etc. helping drown out Chinchilla is an example of what I've referred to about how the boom-bust clustering of DL research publications can be quite harmful to discussions.
Yep. Just an interested layman.
What are the publicly accessible places on the internet where one can learn more about capabilities, or see discussions of capabilities frameworks? Here's what I'm aware of:
LessWrong
Twitter (though I don't know specifically whom to follow; I avoid Twitter for mental health reasons)
arXiv (computer science papers)
YouTube: MLST, Two Minute Papers, Yannic Kilcher, some conference talks
A little bit on reddit (r/mlscaling, u/gwern)
All in all, there's not that much in-depth discussion online. I've been told that these discussions really happen in person, in the Bay Area and in DeepMind's London offices. LessWrong actually ends up having the best discussion (in the capabilities space).
(Since someone is likely to complain about seeking out more capabilities information: yes, it's risky, but I'm more in agreement with nostalgebraist that this level of discussion is probably harmless, and that it's better we keep an accurate and up-to-date understanding of the situation and the technology.)
Mostly Discord servers, in my experience: EleutherAI's is a big, well-known one, but there are others with high concentrations of top ML researchers.