Eliezer is still writing AI Alignment content on it, … MIRI … adopt Arbital …
How does Eliezer’s work on Arbital relate to MIRI? Little is publicly visible of what he is doing at MIRI. Is he focusing on Arbital? What is the strategic purpose?
From MIRI, under “AGI alignment overviews”: “Eliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I’ll be writing about MIRI strategy and forecasting questions.”
Basically, explaining why people still don’t get AI safety is a very important task, and Eliezer is particularly well suited for it.
See MIRI’s recent post under “Going Forward”: https://intelligence.org/2017/03/28/2016-in-review/