It is probably worth noting here that an AI’s ability to evaluate how well an outcome matches your wish, and the consequences you need, is in turn limited by its own ability to evaluate the consequences of its actions (if we apply the constraint you are talking about to the AI itself). That can easily turn into a requirement to build a Maxwell’s demon, or into the AI admitting (huh..) that it is doing something without knowing whether it will match your wish or not.