Sure, but there are vastly more constraints involved in maximizing E(U3). It’s easy to maximize E(U(A)) in a way that lets E(U(B)) go down the tubes. But if C is positively correlated with either A or B, it’s harder to max-min A and B while letting E(U(C)) plummet. The more accounted-for sources of utility there are, the likelier a given unaccounted-for source X is to be entangled with one of them, and the harder it is for the AI to max-min in a way that neglects X. Perhaps it gets exponentially harder! And humans care about hundreds or thousands of things. It’s not clear that a satisficer concerned with a significant fraction of those could devise a strategy that fails utterly to bring the others up to snuff.
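To make the correlation point concrete, here’s a minimal toy simulation (mine, not anything from the thread; the action count, the correlation values, and the Gaussian model are all made-up assumptions): sample candidate actions with utilities (u_A, u_B, u_C) where u_C is correlated with u_A, have the agent pick the max-min action over A and B alone, and look at what happens to the neglected u_C.

```python
# Toy model (hypothetical parameters throughout): does correlation with an
# accounted-for utility drag a neglected utility along for the ride?
import numpy as np

rng = np.random.default_rng(0)

def simulate(rho, n_actions=10_000):
    """Sample actions with utilities (u_A, u_B, u_C), with corr(u_A, u_C)=rho.
    The agent max-mins over A and B only; return the neglected u_C it gets."""
    u_A = rng.standard_normal(n_actions)
    u_B = rng.standard_normal(n_actions)
    # u_C shares variance with u_A according to rho.
    u_C = rho * u_A + np.sqrt(1 - rho**2) * rng.standard_normal(n_actions)
    best = np.argmax(np.minimum(u_A, u_B))  # max-min over A and B only
    return u_C[best]

for rho in (0.0, 0.5, 0.9):
    draws = [simulate(rho) for _ in range(200)]
    print(f"rho={rho:.1f}: mean neglected u_C = {np.mean(draws):+.2f}")
```

With rho = 0 the neglected component averages about zero; as rho grows, the max-min choice pulls u_C up even though the agent never looks at it. (This toy doesn’t model an optimizer actively searching for strategies that decorrelate the components, which is where the real worry lies.)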
I’m simply pointing out that a multi-satisficer is the same as a single-satisficer. Putting all the utilities together is sensible, but making a satisficer out of that combination isn’t.
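For what it’s worth, the equivalence being asserted here is easy to spell out; a hedged sketch follows, with illustrative component values and thresholds that are not from the original exchange:

```python
# Sketch of the claimed reduction: a "multi-satisficer" that requires
# E[U_i] >= t_i for every component i is just a "single-satisficer"
# over one combined utility (the worst shortfall) with threshold 0.
# All values and thresholds below are illustrative only.

def multi_satisfices(expected_utils, thresholds):
    return all(u >= t for u, t in zip(expected_utils, thresholds))

def combined_utility(expected_utils, thresholds):
    # Worst shortfall across components: >= 0 iff every component passes.
    return min(u - t for u, t in zip(expected_utils, thresholds))

def single_satisfices(expected_utils, thresholds):
    return combined_utility(expected_utils, thresholds) >= 0

thresholds = [0.5, 0.5, 0.5]
for utils in ([0.9, 0.7, 0.6], [0.9, 0.7, 0.4]):  # one pass, one fail
    assert multi_satisfices(utils, thresholds) == single_satisfices(utils, thresholds)
```

So combining the components changes nothing about the satisficing problem itself, which is the point: whatever failure mode a single-satisficer has, the combined version inherits it.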