I’m unfamiliar with the state of our knowledge concerning these things, so take this as you will.
A perfect utility function can yield many different things, one of which is adherence to “the principle for the development of value(s) in human beings”, which isn’t necessarily the same as “values that make existing in the universe most probable”, “what people want”, or “what people will always want”.
A human-optimal utility function would be something that leads to addressing the human condition as a problem, improving it in the manner and method in which it seeks to improve itself, whether that is survivability or something else.
An AI that could do this perfectly right now could always reuse the same process of extrapolation for whatever the situation develops into.
or “AI which is most instrumentally useful for (all) human beings given our most basic goals”
This could mean several things. What do you mean?
As for what “perfect” means in relation to utility functions, I still don’t understand.
As in producing the intended result; nothing is stopping us from rounding up to 1 and winding up as paperclips.
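To make the “rounding” worry concrete, here is a minimal sketch (my own illustration, not anything from this thread; the outcome names and numbers are made up): a utility function that matches the intended one on 99.9% of outcomes, a figure you might be tempted to round to 1, can still hand the optimum to the one outcome where it is wrong.

```python
import random

random.seed(0)

# Hypothetical outcome space: 1000 ordinary outcomes plus one degenerate one.
outcomes = [f"outcome_{i}" for i in range(1000)]

# Made-up "true" utility: ordinary outcomes are decent, paperclips are worthless.
true_utility = {o: random.uniform(0.4, 0.9) for o in outcomes}
true_utility["universe_of_paperclips"] = 0.0

# A proxy that agrees exactly with the true utility everywhere except on the
# single paperclip outcome, which it mistakenly scores as the best possible thing.
proxy_utility = dict(true_utility)
proxy_utility["universe_of_paperclips"] = 1.0

agreement = sum(proxy_utility[o] == true_utility[o] for o in true_utility) / len(true_utility)
best = max(proxy_utility, key=proxy_utility.get)

print(f"proxy agrees with the true utility on {agreement:.1%} of outcomes")  # ~99.9%
print(f"an optimizer maximizing the proxy picks: {best}")                    # universe_of_paperclips
print(f"whose true utility is {true_utility[best]}")                         # 0.0
```

The point is only that “produces the intended result” has to hold under optimization pressure: agreement that rounds to 1 on average says nothing about the single outcome the maximizer actually selects.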