> I think this argument, if true, mostly says that your work on RLHF must have been net-neutral, because people would have done RLHF even if nobody did it for the purposes of alignment.
Doing things sooner and in a different way matters. This argument is like saying that scaling up language models is net-neutral for AGI because people would have done it anyway for non-AGI purposes. Doing things sooner matters a lot; I think in most of science and engineering that's the main kind of effect anything has.
> If false, then RLHF was net-negative because of its capabilities externalities.
No: if false, then it has a negative effect, which must be weighed quantitatively against its positive effects. Most things have some negative effects (e.g. LW itself).
> It is also far easier to make progress on capabilities than alignment
This doesn’t seem relevant. We were asking how large an accelerating effect alignment researchers have relative to capabilities researchers (since that determines how many days of speed-up they cause); if capabilities progress is easier, that seems to increase both the numerator and the denominator.
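A minimal back-of-envelope version of that ratio argument (the notation and the uniform-scaling assumption are mine, added for illustration): write the speed-up caused by alignment researchers over a period $T$ as

$$\text{speed-up} \approx \frac{c_a}{c_a + c_r} \, T,$$

where $c_a$ is the capabilities progress contributed incidentally by alignment researchers and $c_r$ is the progress contributed by dedicated capabilities researchers. If capabilities progress gets uniformly easier, both contributions scale by the same factor $k$, and $\frac{k c_a}{k c_a + k c_r} = \frac{c_a}{c_a + c_r}$, so the implied speed-up is unchanged.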
> especially when you’re not trying to make progress on alignment’s core problems, and instead trying to get very pretty lines on graphs so you can justify your existence to your employer.
To the extent this is a claim about my motivations, I think it’s false. (I don’t think it should look especially plausible from the outside, given the overall history of my life.) As a claim about what matters to alignment and what is “core,” it’s simply unjustified.
> It also, empirically, just seems weird that GPT and RLHF were both developed as alignment strategies, yet have so many uses in capabilities.
This is false, so it makes sense it would seem weird!