I’m talking about relationships like
AGI with an explicitly represented utility function which is a reified part of its world- and self-model
or
sure, it has some implicit utility function, but it’s about as inscrutable to the agent itself as it is to us
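To make the contrast concrete, here is a minimal Python sketch of the two relationships, using hypothetical class names of my own; it is an illustration of the distinction, not anyone's actual agent design.

```python
class ExplicitUtilityAgent:
    """The utility function is a reified object inside the agent's own
    world- and self-model: the agent can inspect and reason about it."""

    def __init__(self, utility_fn):
        self.utility_fn = utility_fn
        # The self-model contains an explicit pointer to the agent's own utility function.
        self.self_model = {"my_utility_function": utility_fn}

    def evaluate(self, outcome):
        return self.utility_fn(outcome)

    def introspect(self):
        # The agent can answer "what do I want?" by reading its own model.
        return self.self_model["my_utility_function"]


class ImplicitUtilityAgent:
    """Preferences exist only implicitly in an opaque policy; they are about
    as inscrutable to the agent itself as they are to outside observers."""

    def __init__(self, policy):
        self.policy = policy  # e.g. a learned mapping from states to actions

    def act(self, state):
        return self.policy(state)

    def introspect(self):
        # There is no explicit representation to read off; the agent cannot
        # report its own utility function any better than we can infer it.
        raise NotImplementedError("utility function is not explicitly represented")
```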