AGI alignment overviews: “Eliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I’ll be writing about MIRI strategy and forecasting questions.”
In short, explaining why people still don't get AI safety is an important task, and Eliezer is particularly well suited to it.
See MIRI’s recent post under “Going Forward”: https://intelligence.org/2017/03/28/2016-in-review/