I recall Eliezer saying that Stuart Russell named the ‘value alignment problem’, and that the term ‘alignment’ was derived from that. (Perhaps Eliezer did the deriving?)
I recall Eliezer asking on Facebook for a good word for the field of AI safety research before it was called alignment.
Would be interested in a link if anyone is willing to go look for it.