I think it makes sense to keep the term “Friendly AI” for “stably self-modifying AGI which optimizes for humane values.”
Narrow AI safety work — for extant systems and near-term future systems — goes by many different names. We (at MIRI) have touched on that work in several posts and interviews, e.g.:
Transparency in safety-critical systems
Groundwork for AGI safety engineering
Michael Fisher interview
Paulo Tabuada interview
Diana Spears interview
Anil Nerode interview
Armando Tacchella interview
Andre Platzer interview
Kathleen Fisher interview