A ‘safely’ aligned powerful AI is one that doesn’t kill everyone on Earth as a side effect of its operation;
-- Eliezer Yudkowsky https://www.lesswrong.com/posts/3e6pmovj6EJ729M2i/general-alignment-plus-human-values-or-alignment-via-human#More_strawberry__less_trouble https://twitter.com/ESYudkowsky/status/1070095952361320448