Does “crisper” mean “more correct”, or just “more detailed, but with the same accuracy”? And is it correct to be more conservative (counterintuitive or no)? Assuming it’s a problem, why is this a problem with minmaxing, as opposed to a problem with world-modeling in general?
I suspect, for most decisions, there is a “good choice for your situation” range, and whether you call it conservative or aggressive depends on framing to an extent that it shouldn’t be a primary metric.
I meant “more detailed”. I don’t know what you mean by “correct” here (what criterion do you apply for correctness of decision methods?). Minmaxing as a decision procedure is different from world modeling, which precedes the decision. I think that e.g. expected-value maximisation becomes less conservative given more detailed models of the world.
For example in game theory, if you learn about other possible actions the other player could take, and the associated payoffs, the value you are optimizing on via minmax can only go down. (The expected value of the action you take in the end might go up, though).
I’m not sure what you mean by the last paragraph. I was using conservative in the sense of “the value you are optimising on via minmax becomes smaller”. I’d concretize “aggressive” as “fragile to perturbations from adversaries” or something along those lines.
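As a minimal sketch of the game-theory point above (the payoff numbers are made up for illustration, and this only covers pure-strategy maximin in a zero-sum normal-form game): learning about an extra opponent action adds a column to the payoff matrix, so the inner minimum is taken over a superset and the maximin value cannot increase.

```python
def maximin(payoffs):
    """Pure-strategy maximin value for the row player of a zero-sum game."""
    return max(min(row) for row in payoffs)

# Row player's payoffs; columns are the opponent's known actions.
known = [
    [3, 1],
    [2, 2],
]
print(maximin(known))  # 2 (play the second row, guaranteeing at least 2)

# Learning about an extra opponent action adds a column; the inner min
# runs over a larger set, so the maximin value can only stay the same or drop.
extended = [
    [3, 1, 0],
    [2, 2, 1],
]
print(maximin(extended))  # 1 (the guarantee drops)
```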
Ah, these are actually not true.