This assumption is central to establishing the mathematical structure of expected utility maximization: you value each possible world separately using the utility function, then take the probability-weighted average. If your preferences were such that A&C > B&C but A&D < B&D, this decomposition would be impossible, because how you rank A against B would depend on which other world obtained.
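A minimal sketch of that structure (the utilities are hypothetical numbers of my own choosing; separability is modelled as the utility of a conjunction decomposing additively over its components):

```python
# Hypothetical component utilities; separability is modelled as the utility
# of a conjunction being the sum of its components' utilities.
u = {"A": 3.0, "B": 2.0, "C": 5.0, "D": -1.0}

def utility(*components):
    # Separable valuation: each component is valued on its own, then summed.
    return sum(u[c] for c in components)

def expected_utility(worlds, probs):
    # Value each possible world separately, then take the weighted average.
    return sum(p * utility(*w) for w, p in zip(worlds, probs))

# Under this structure the ranking of A against B cannot depend on whether
# C or D co-occurs: u["A"] > u["B"] settles both comparisons at once.
assert utility("A", "C") > utility("B", "C")
assert utility("A", "D") > utility("B", "D")  # A&C > B&C with A&D < B&D is ruled out

print(expected_utility([("A", "C"), ("B", "D")], [0.5, 0.5]))  # 0.5*8 + 0.5*1 = 4.5
```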
I can imagine having preferences that don’t value each possible world separately. I can also imagine doing other things with my utility function than maximising its expectation. For example, if I maximised over only the top quartile of outcomes rather than the overall expectation, I might choose to engage in practices analogous to quantum suicide. That I prefer, in principle, to maximise expected utility is itself a value. It is a value that I expect to see in most successful agents, for fundamental reasons.
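A hedged sketch of how those two rules can come apart (the numbers are made up, and `top_quartile` is my own construction, not a standard decision rule): an agent that scores a gamble by the mean of its best quarter of probability mass can prefer a gamble that is badly negative in expectation.

```python
def expected(utilities, probs):
    # Standard rule: probability-weighted average over all possible worlds.
    return sum(p * u for u, p in zip(utilities, probs))

def top_quartile(utilities, probs):
    # Alternative rule (my construction): keep only the best 25% of
    # probability mass, then average over what remains.
    ranked = sorted(zip(utilities, probs), reverse=True)
    mass, total = 0.0, 0.0
    for u, p in ranked:
        take = min(p, 0.25 - mass)
        if take <= 0:
            break
        total += take * u
        mass += take
    return total / mass

# Tiny chance of a huge payoff, otherwise catastrophe, versus a safe option.
gamble = ([1000.0, -500.0], [0.1, 0.9])
safe = ([10.0], [1.0])

print(expected(*gamble), expected(*safe))          # -350.0 vs 10.0
print(top_quartile(*gamble), top_quartile(*safe))  # 100.0 vs 10.0
```

The expectation maximiser declines the gamble; the top-quartile agent takes it, effectively ignoring the branches in which things go badly, much as a quantum suicide practitioner discounts the branches in which they die.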