I would agree that it would be good and reasonable to have a term for the family of scientific and philosophical problems spanned by this space. At the same time, as the post says, the trouble starts when the result is semantic dilution, people talking past each other, and coordination-inhibiting ambiguity.
P3 seems helpful but insufficient for good long-term outcomes.
Now consider something I could verify with a simple search: an ICML workshop that uses the term "alignment" mostly to mean P3 (task-reliability): https://arlet-workshop.github.io/
One might want to use "alignment" one way or the other, and be careful about its limited overlap with P3 in our own registers, but once the larger AI community has picked up the use-semantics of "RLHF is an alignment technique" and come to associate alignment primarily with task-reliability, it would take deliberate linguistic intervention to clear the air.
Not to pick on you specifically, but as a general comment, I'm getting a bit worried about rationalist decontextualized content policing. It usually seems to go like this: someone cultivates an epistemological practice (say, extracting conceptual insights from diverse practices) → they cross-post their thoughts to a community blog interested in epistemology → somebody unfamiliar with their body of work comes across the post → maps it onto a pattern they may rightfully have identified as critique-worthy → dumps the criticism there. So maybe it would be better if comments came from people who have clicked through the author's profile and can interpret the post in the right context.
[Epistemic status of this comment: Performative, but not without substance.]