I’d like to propose not talking publicly about ways to “fix” this issue. Insofar as these results spell trouble for scaling up LLMs, this is a good thing!
Infohazard (meta-)discussions are thorny by their very nature and I don’t want to discourage discussions around these results in general, e.g. how to interpret them or whether the analysis has merits.
I disagree, but I’m not sure how relevant my opinion is, since I’m far less worried about “AGI ruin” to begin with than the median LWer. That said, here’s my thinking:
First, there’s no universally agreed-upon line between “discussing whether the analysis has merits” and “giving the capabilities people free ideas.” Where a person draws this line depends on how obvious they think the ideas are, or how obvious they think they will be to the capabilities people.
Second, there are costs to not talking about things. It’s useful for alignment research to have a correct sense of where capabilities research is headed, and where it isn’t headed. If alignment researchers talk more to one another than to “capabilities people” (true IME), and they practice self-censorship like this, they’ll end up with some importantly wrong beliefs.
Also, and perhaps worse—if alignment researchers never voice their own secret capabilities ideas in fora where “capabilities people” can hear, then they’ll never receive feedback about these ideas from the people who know what it would be like to apply them in the real world. Alignment researchers may end up with private stockpiles of “secret tricks” in their heads which are actually either misguided or obvious, and this disconnect will be a further source of false beliefs.
So, to motivate your concern, we need to imagine a world where
commenters on LW are proficient enough at capabilities research that they can make non-obvious advances in blog comments, in a way that “moves the needle” of capabilities research, and
this is worth the false-belief downsides of self-censorship (say, because commenters on LW are sufficiently informed about capabilities research that they will not form false beliefs anyway)
This seems far from the real situation, IMO. Based on what I see, “alignment researchers don’t understand capabilities research well enough” seems like far more of a live threat to alignment than “alignment researchers are too good at capabilities research, and keep accidentally pushing the field forward in blog comments.” (At least using alignment-interested folks on LW as a proxy for “alignment researchers,” and that’s who we’re deciding norms for anyway.)
Like, take this post as an example. I was motivated to write this post because I felt like the Chinchilla paper wasn’t understood well on LW.
It seems like people have heard of Chinchilla, but mentally categorized it as a simple “sudden jump” in overall capabilities that otherwise left everything the same, rather than as a result that demands reconsideration of basic background assumptions. I still saw people treating LM param counts as if they were interchangeable with LM quality/scariness (and with LM training compute). People would ask things like “what would it cost (in compute spending) to train a 10T parameter Chinchilla?”, which is a bizarre way to frame things if you grok what Chinchilla is.
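To make the distinction concrete, here is a minimal sketch of the post-Chinchilla framing, using the standard rough approximations (training compute ≈ 6·N·D FLOPs, and compute-optimal training at roughly 20 tokens per parameter; these are rules of thumb, not the paper’s exact fitted scaling law, and the function names are just illustrative). The natural question becomes “what does a given compute budget buy?”, with parameters and data growing together, each roughly as the square root of compute; fixing the parameter count at 10T and asking for the cost just runs that logic backwards into an astronomical number.

```python
# Rough Chinchilla-style arithmetic. The constants are the usual rules of
# thumb (training compute ~ 6*N*D FLOPs, compute-optimal D ~ 20*N), not the
# paper's exact fitted values; function names are just for illustration.

def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Approximate compute-optimal (params, tokens) for a compute budget,
    using C = 6*N*D with D = tokens_per_param * N."""
    n = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    return n, tokens_per_param * n

def train_compute(n_params, n_tokens):
    """Approximate training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

# Post-Chinchilla, the natural question: what does a given budget buy?
# Note that params and tokens both grow, each roughly as sqrt(compute).
for c in (1e23, 1e24, 1e25, 1e26):
    n, d = chinchilla_optimal(c)
    print(f"C={c:.0e} FLOPs -> ~{n:.1e} params trained on ~{d:.1e} tokens")

# The "10T-parameter Chinchilla" question run forwards: fixing N = 10T
# forces ~200T training tokens and ~1.2e28 FLOPs.
n = 10e12
d = 20 * n
print(f"10T-param Chinchilla: ~{d:.1e} tokens, ~{train_compute(n, d):.1e} FLOPs")
```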
I don’t think I’m presenting some novel insight in this post. Mostly, I’m just reiterating what the papers say. I expect any serious capabilities researcher in this area to have read these papers and internalized them at the same depth I have (or more). But people on LW hadn’t done that, and more generally people “interested in AI” who don’t closely read all these papers hadn’t done that. So I wrote an explainer.
The LW reaction to new ML results typically looks this way to me. Obviously “LW” is not a monolith and there are plenty of people here who do seriously internalize papers like this. But the “general trend of the conversation,” insofar as there is such a thing, repeatedly strikes me as over-focused on concrete impressive-sounding results (esp. those that sound impressive out of context), and under-focused on more theoretical advances that sound boring on paper but change the rules of the game. The conversation “keeps up” with ML in the sense that it takes note of the decontextualized top-line results in new papers, but it often lacks a mechanistic appreciation of how it all fits together.
Anyway, at the current frontier, this strikes me as a quantitatively much bigger deal for alignment than the risk of accidentally handing over free advances to the capabilities people.
People would ask things like “what would it cost (in compute spending) to train a 10T parameter Chinchilla?”, which is a bizarre way to frame things if you grok what Chinchilla is.
That wasn’t an alignment researcher, though (was it? I thought Tomás was just an interested commenter), and it’s a reasonable question to ask when no one’s run the numbers; when you get an answer like “well, it’d take something like >5000x more compute than PaLM”, that’s a lesson learned.
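For what it’s worth, a number in that ballpark falls out of the same rough approximations used above (compute ≈ 6·N·D, ~20 tokens per parameter for a compute-optimal model, plus PaLM’s published 540B parameters trained on 780B tokens). With these exact constants the ratio comes out closer to ~4,700x; slightly different assumptions (e.g. a somewhat higher tokens-per-parameter ratio at that scale) push it past 5,000x, so treat the figure as an order-of-magnitude estimate, a minimal sketch rather than a precise number:

```python
# Back-of-the-envelope version of the ">5000x more compute than PaLM" figure.
# Assumes C ~ 6*N*D FLOPs and ~20 tokens per parameter for a compute-optimal
# ("Chinchilla-style") model; PaLM numbers are the published 540B params / 780B tokens.

palm_compute = 6 * 540e9 * 780e9            # ~2.5e24 FLOPs
chinchilla_10t = 6 * 10e12 * (20 * 10e12)   # 10T params, ~200T tokens -> ~1.2e28 FLOPs

print(f"PaLM:           ~{palm_compute:.1e} FLOPs")
print(f"10T Chinchilla: ~{chinchilla_10t:.1e} FLOPs")
print(f"ratio:          ~{chinchilla_10t / palm_compute:.0f}x")
```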
At least among the people I’ve talked to, it seems reasonably well understood that Chinchilla had major implications, meant an immediate capabilities jump and cheaper deployment, and even more importantly meant parameter scaling was dead and data and then compute were the bottleneck (which is also what I’ve said bluntly in my earlier comments), and this was why Chinchilla was more important than more splashy stuff like PaLM*. (One capabilities researcher, incidentally, wasn’t revising plans, but that’s because he wasn’t convinced Chinchilla was right in the first place! AFAIK, there has been no dramatic followup to Chinchilla on par with GPT-3 following up Kaplan et al, and in fact, no one has replicated Chinchilla at all, much less run a full scaling-law sweep and inferred similar scaling laws, so there is still some doubt about how real Chinchilla is or how accurate or generalizable its scaling laws are, quite aside from the usual issues like hilariously vague descriptions of datasets.)
I also agree with Tom that if one had thoughts about Chinchilla and data sampling and brand-new scaling dynamics catapulting immediately into arms races, we are increasingly approaching the point where a reasonable person might decide to move such discussions to more private channels, and for that reason the public discussions of Chinchilla might be very basic and of the “could we train a 10T parameter Chinchilla” sort.
* PaLM and DALL-E 2 etc. helping drown out Chinchilla is an example of what I’ve referred to about how the boom-bust clustering of DL research publications can be quite harmful to discussions.
Yep. Just an interested layman.
What are the publicly accessible places on the internet where one can learn more about capabilities, or see discussions of capabilities frameworks? Here’s what I’m aware of:
LessWrong
Twitter (though I don’t know who specifically to follow; I avoid Twitter for mental health reasons)
arXiv (comp-sci)
YouTube: MLST, Two Minute Papers, Yannic Kilcher, some conference talks
A little bit on Reddit (r/mlscaling, u/gwern)
All in all, there’s not that much heavy discussion online. I’ve been told that these discussions really happen in person, in the Bay Area and in DeepMind’s London offices. LessWrong actually ends up having the best discussion (in the capabilities space).
(Since someone is likely to complain about seeking out more capabilities information: yes, it’s risky, but I’m more in agreement with nostalgebraist that this level of discussion is probably harmless, and that it’s better we keep an accurate and up-to-date understanding of the situation and the technology.)
Mostly Discord servers, in my experience: EleutherAI is a big, well-known one, but there are others with high concentrations of top ML researchers.
I upvoted since I think discussing what should or should not be discussed is important, but I tentatively disagree:
It seems unlikely that comments on LessWrong speed up capabilities research, since the thoughts are probably just a subset of what the scaling teams already know, and LessWrong is likely not their highest-signal information source anyway.
Even from a safety perspective, it seems important to know which problems in capabilities research can be alleviated, since this will give a clearer picture of timelines.
I think we should have strong reasons before discouraging topics of discussion, since LessWrong is a place not only for instrumental rationality but also for epistemic rationality, maybe even more so.
That said, LessWrong is de facto one of the best places to discuss AI safety, since the Alignment Forum is invite-only. Thus, it seems that there should be some discussion around which tradeoffs to make on LW between “figuring out what’s true” and “not spreading info hazards”.
I disagree with the reasoning in this reply to Tom (and in nostalgebraist’s reply). If stuff like this is net-positive to post about on LW, the chain of reasoning to arrive at that conclusion seems to me like it has to look different from the reasoning in these comments. E.g.:
“It seems unlikely that comments on LessWrong speed up capabilities research”: if “unlikely” here meant “only 40% likely”, then it would obviously be a bad idea to post a capabilities insight. The degree of unlikeliness obviously matters, and it has to be weighed against the expected benefit of sharing the insight.
At the policy level, “How does this weigh against the expected benefits?” has to take into account that the quality and rarity of LWers’ insights is likely to vary a lot by individual and across time; and it has to take into account that the risk level of LW posts is very correlated with the benefit level. In the worlds where there’s not much future risk of a LWer someday posting a dangerous capabilities insight, there’s also less future benefit to LW posts, since we’re probably not generating many useful ideas in general (especially about AGI and AGI alignment).
“it seems important to know which problems in capabilities research can be alleviated”: what specific safety progress does this enable? (Maybe there’s something, but ‘it seems safety-relevant because it’s a fact about ML’ seems to prove too much. What’s the actual implicit path by which humanity ends up safely navigating the AGI transition?)
‘We should require a high bar before we’re willing to not-post potentially-world-destroying information to LW, because LW has a strong commitment to epistemic rationality’ seems like an obviously terrible argument to me. People should not post stuff to the public Internet that destroys the world just because the place they’re posting is a website that cares about Bayesianism and belief accuracy.
(Note that this is a high-level point I’m making about the kind of arguments being made here, not about the object-level question.)
Thanks for your answer!
In the worlds where there’s not much future risk of a LWer someday posting a dangerous capabilities insight, there’s also less future benefit to LW posts, since we’re probably not generating many useful ideas in general (especially about AGI and AGI alignment).
This seems correct, though it’s still worth fleshing out that it seems possible to have LW posts that are helpful for alignment but not for capabilities: namely, posts that summarize insights from capabilities research that are known to ~all capabilities people but known to few alignment people.
The main reason I’ve now shifted more toward your viewpoint is that capabilities insights might influence people who do not yet know much about capabilities to work on that in the future, instead of working on alignment. Therefore, I’m also not sure whether Marius’ heuristic “Has company-X-who-cares-mostly-about-capabilities likely thought about this already?” for deciding whether something is infohazardy is safe.
“We should require a high bar before we’re willing to not-post potentially-world-destroying information to LW, because LW has a strong commitment to epistemic rationality” seems like an obviously terrible argument to me. People should not post stuff to the public Internet that destroys the world just because the place they’re posting is a website that cares about Bayesianism and belief accuracy.
Yes, that seems correct (though I’m a bit unhappy about you bluntly straw-manning my position). I think after reflection I would phrase my point as follows:
“There is a conflict between LessWrong’s commitment to epistemic rationality on the one hand, and the commitment to restrict info hazards on the other hand. LessWrong’s commitment to epistemic rationality exists for good reasons, and should not be given up lightly. Therefore, whenever we restrict discussion and information about certain topics, we should have thought about this with great care.”
I don’t yet have a fleshed-out view on this, but I did move a bit in Tom’s direction.
“We should require a high bar before we’re willing to not-post potentially-world-destroying information to LW, because LW has a strong commitment to epistemic rationality” seems like an obviously terrible argument to me.
I think that argument is good if you expand out its reasoning. The reason we have a strong commitment to epistemic rationality is that learning and teaching true things is almost always very good. You need to establish a fair chunk of probable bad to outweigh it.
Thanks for your reply! I think I basically agree with all of your points. I feel a lot of frustration around the fact that we don’t seem to have adequate infohazard policies to address this. It seems like a fundamental trade-off between security and openness/earnestness of discussion does exist though.
It could be the case that this community is not the correct place to enforce these rules, as there does still exist a substantial gap between “this thing could work” and “we have a working system”. This is doubly true in DL, where implementation details matter a great deal.
My tentative heuristic for whether you should publish a post that is potentially infohazardy is “Has company-X-who-cares-mostly-about-capabilities likely thought about this already?”. It’s obviously non-trivial to answer that question, but I’m pretty sure most companies that build LLMs have looked at Chinchilla and come to conclusions similar to this post’s. In case you’re unsure, write up the post in a Google Doc and ask someone who has thought more about infohazards whether they would publish it or not.
Also, I think Leon underestimates how fast a post can spread even if it is just intended for an alignment audience on LW.