Obviously I think it’s worth being careful, but in general it’s actually relatively hard to accidentally advance capabilities too much by working specifically on alignment. Some reasons:
Researchers in all fields tend to have really strong conviction in their own direction and think everyone should work on their thing. Convincing them that some other direction is better is actually pretty hard, even when you’re actively trying to shove your ideas down their throats.
Often the bottleneck is not that nobody realizes something is a bottleneck, but that nobody knows how to fix it. In those cases, calling attention to the bottleneck doesn’t really speed things up, whereas when thinking about alignment we can still reason about what things would look like if it were solved.
It’s generally harder to make progress on something by accident than it is to make progress on it on purpose when you’re trying really hard. I think this is true even when there is a lot of overlap. There’s also an EMH argument one could make here, but I won’t spell it out.
I think the alignment community thinking correctly is essential for solving alignment. Especially because we will have very limited empirical evidence before AGI, and that evidence will not be directly applicable without some associated abstract argument, any trustworthy alignment solution has to route through the community reasoning sanely.
Also, to be clear, I think the “advancing capabilities is actually good because it gives us more information on what AGI will look like” take is very bad, and I am not defending it. The arguments I made above don’t apply to it, because they basically hinge on alignment work not actually advancing capabilities.
Hasn’t the alignment community historically done a lot to fuel capabilities?
For example, here’s an excerpt from a post I read recently:
I don’t think RLHF in particular had a very large counterfactual impact on commercialization or the arms race. The idea of non-RL instruction tuning for taking base models and making them more useful is very obvious for commercialization (there were multiple works concurrent with InstructGPT). PPO is better than SFT alone or simpler approaches on top of SFT, but not groundbreakingly so. You can compare text-davinci-002 (FeedME) and text-davinci-003 (PPO) to see the difference.
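If you want to eyeball that comparison yourself, here’s a minimal sketch using the legacy OpenAI Completions API (both models have since been deprecated, so treat this as illustrative only; the prompt is an arbitrary placeholder):

```python
import openai  # legacy (<1.0) OpenAI SDK; assumes OPENAI_API_KEY is set in the environment

# Arbitrary illustrative prompt; any instruction-style prompt works for the comparison.
prompt = "Explain in one paragraph why the sky is blue."

# As noted above: text-davinci-002 was tuned with FeedME, text-davinci-003 with PPO-based RLHF.
for model in ["text-davinci-002", "text-davinci-003"]:
    resp = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=150,
        temperature=0,  # near-deterministic sampling makes side-by-side reading easier
    )
    print(f"--- {model} ---")
    print(resp["choices"][0]["text"].strip())
```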
The arms race was directly caused by ChatGPT, which took off quite unexpectedly, not because of model quality due to RLHF but because the UI was much more intuitive to users than the Playground (instruction-following GPT-3.5 was already available in the API and didn’t take off in the same way). The tech tree from having a powerful base model to having a chatbot is not gated on RLHF existing at all, either.
To be clear, I also happen not to be very optimistic about the alignment relevance of RLHF work beyond the first few papers; certainly, if someone were to publish a paper today making RLHF twice as data-efficient or whatever, I would consider it basically just a capabilities paper.
I think empirically EA has done a bunch to speed up capabilities accidentally. And I think theoretically we’re at a point in history where simply sharing an idea can get it into the water supply faster than ever before.
A list of unsolved problems, if one of them is both true and underappreciated, can have a big impact.