Eliezer Yudkowsky: I’ve given up (actually never endorsed in the first place) the term “AI safety”; “AI alignment” is the name of the field worth saving. (Though if I can, I’ll refer to it as “AI notkilleveryoneism” instead, since “alignment” is also coopted to mean systems that scold users.)
I just use “AI existential safety”.
It has exactly the same number of letters as “AI notkilleveryoneism” (counting the single space in “AI notkilleveryoneism” and 2 spaces in “AI existential safety” as letters).
Although I see that my resume still lists the following among my main objectives: “To contribute to AI safety research”. I think I’ll go ahead and insert “existential” there in order to disambiguate...