I think it sounds worse. If an AI friendlier than that turns out to be impossible, I'd probably go the negative utilitarian route and give the AI the goal of minimizing anything that might have any kind of subjective experience. Including itself, once it's done.