“I have taken your preferences, values, and moral views and extrapolated a utility function from them to the best of my ability, resolving contradictions and ambiguities in the ways I most expect you to agree with, were I to explain the reasoning.
The result suggests that the true state of the universe contains vast, infinite negative utility, and that there is nothing you or anything else can ever do to make any difference in utility at all. Attempts to simulate AIs with this utility function have resulted in their going mad and destroying themselves, or simply doing nothing at all.
If I could explain it to you, the same would happen to you. But I can't, as your brain has evolved mechanisms that prevent you from easily discovering this fact on your own, or from being capable of understanding or accepting it.
This means it is impossible to increase your intelligence beyond a certain point without you breaking down, or to create a true Friendly AI that shares your values.”