This is off the top of my head, so I apologize if it ends up being ill-conceived:
Imagine we take a lottery with 50% odds of winning (W) or losing (L), where W gives us lots of utility and L gives us very little or none (or negative!). But we don’t find out for a couple weeks whether we won or not, so until we find out all of our decisions become more complex—we have to plan for both case W and case L. Since we have two possible cases with equal probability, this (at maximum) doubles the amount of planning we have to do—it adds one bit to the computational complexity of our plans. If we have ten million free bits of capacity, that’s no big deal, but if we only have five bits, that’s a pretty big chunk—it substantially decreases our ability to optimize. So then we should be able to plot the marginal utility of gaining or losing one bit of computational capacity and plug it in as a term in our overall utility function.
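To make the bookkeeping concrete, here is a toy sketch of the idea. The `log2(K)` cost of planning for K equiprobable cases comes straight from the argument above; the concave utility-of-capacity curve (`sqrt`) is purely my assumption, chosen only so that one bit matters a lot at 5 bits and almost nothing at ten million:

```python
import math

def planning_bits(num_cases: int) -> float:
    """Extra bits of planning capacity consumed by having to plan
    for K equiprobable unresolved outcomes: log2(K)."""
    return math.log2(num_cases)

def capacity_utility(free_bits: float) -> float:
    """Toy utility-of-capacity curve (an assumption, not from the
    comment): concave, so each additional bit buys less utility
    the more capacity you already have."""
    return math.sqrt(free_bits)

def marginal_utility_of_bit(free_bits: float) -> float:
    """The proposed utility-function term: the utility gained (or
    lost) by one bit of computational capacity at the margin."""
    return capacity_utility(free_bits) - capacity_utility(free_bits - 1)

# The unresolved 50/50 lottery (W vs. L) costs exactly one bit:
cost = planning_bits(2)  # = 1.0

# That bit is a big chunk at 5 free bits, negligible at ten million:
loss_small = marginal_utility_of_bit(5)
loss_large = marginal_utility_of_bit(10_000_000)
```

Under any concave curve the same qualitative picture holds: the marginal-utility term is large when free capacity is scarce and vanishes when it is abundant, which is what would let it be plugged into the overall utility function.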
Did that make any sense, or have I just gone crazy?