The difficult part seems to be how to create indifference without taking away the ability to optimize.
ADDED: If I apply the analogy of an operational amplifier directly, it appears that indifference can only be achieved by removing the feedback, and with it any ability to control. But with AI we could model this as a box within a box (possibly recursively), where only the feedback into the inner boxes is compensated. Does this analogy make sense?
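Here is a minimal toy sketch of how I picture the compensation working (a utility-indifference-style compensating term on the outer box; the whole setup and all names are my own hypothetical illustration, not established machinery):

```python
def inner_step(state, target, gain=0.5):
    """Inner box: ordinary negative-feedback control toward `target`.
    The feedback path is kept intact, so the ability to optimize remains."""
    error = target - state
    return state + gain * error

def outer_score(trajectory, compensation):
    """Outer box: raw performance plus a compensating term chosen so the
    outer objective cannot tell whether the inner feedback was active."""
    raw = -sum(abs(x) for x in trajectory)  # toy utility: penalize deviation
    return raw + compensation

def run(gain):
    state, traj = 10.0, []
    for _ in range(20):
        state = inner_step(state, 0.0, gain)
        traj.append(state)
    return traj

traj_fb = run(gain=0.5)    # inner feedback present
traj_nofb = run(gain=0.0)  # counterfactual: inner feedback removed

# The compensation exactly cancels the utility gap between the two
# branches, so the outer score is indifferent to the inner feedback,
# while the inner loop still optimizes whenever its feedback is present.
raw_fb = -sum(abs(x) for x in traj_fb)
raw_nofb = -sum(abs(x) for x in traj_nofb)
compensation = raw_nofb - raw_fb

print(outer_score(traj_fb, compensation))  # equal to the line below
print(outer_score(traj_nofb, 0.0))
```

The point of the sketch: the outer box cannot distinguish the feedback-on branch from the feedback-off branch (indifference), yet the inner box keeps its feedback loop and keeps converging on the target.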