The first MIRI paper to use the term is “Aligning Superintelligence with Human Interests: A Technical Research Agenda” from 2014, the original version of which no longer appears to exist anywhere on the internet, having been replaced by the 2017 rewrite “Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda”. Previous papers sometimes offhandedly talked about AI being aligned with human values as one choice of wording among many.
Edit: the earliest citation I can find for Russell talking about alignment is also 2014.
Regarding v1 of the “Agent Foundations...” paper (then called “Aligning Superintelligence with Human Interests: A Technical Research Agenda”), the original file is here.
To make it easier to find older versions of MIRI papers and see whether there are substantive changes (e.g., for purposes of citing a claim), I’ve made a https://intelligence.org/revisions/ page listing obsolete versions of a bunch of papers.
Regarding the term “alignment” as a name for the field/problem: my recollection is that Stuart Russell suggested the term to MIRI in 2014, before anyone started using it publicly. We ran with “(AI) alignment” instead of “value alignment” because we didn’t want people to equate the value learning problem with the whole alignment problem.
(I also think “value alignment” is confusing because it can be read as saying humans and AI systems both have values, and we’re trying to bring the two parties’ values into alignment. This conflicts with the colloquial use of “values,” which treats it as more of a human thing, compared to more neutral terms like “goals” or “preferences.” And Eliezer has historically used “values” to specifically refer to humanity’s true preferences.)
Footnote: Looks like MIRI was using “Friendly AI” in our research agenda drafts as of Oct. 23, and we switched to “aligned AI” by Nov. 20 (though we were using phrasings like “reliably aligned with the intentions of its programmers” earlier than that).
I recall Eliezer saying that Stuart Russell named the ‘value alignment problem’, and that the term ‘alignment’ was derived from that. (Perhaps Eliezer derived it?)
I recall Eliezer asking on Facebook for a good word for the field of AI safety research before it was called alignment.
Would be interested in a link if anyone is willing to go look for it.