I read it as a trade-off between an average (expected) cost and a guaranteed bound on that cost in the worst possible case. Such a trade-off sounds “normal” to me and occurs often in practice.
Intuitively, if your data has some structure you are not certain about, you can try to exploit it, which leads to better average/expected results, but, if Mr. Murphy is in a particularly bad mood today, it can also set you up for a major failure. Sometimes you care about the average cost and accept the risk of being “unlucky” with your data. But sometimes you care less about the average cost and more about the worst-case scenario, and then you will be interested in reducing the upper bound on your cost, e.g. through randomization.
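To make this concrete, here is a small illustration of my own (not from the question): quicksort with a fixed first-element pivot is fast on “typical” inputs but quadratic on an already-sorted one, while a randomized pivot gives an O(n log n) bound in expectation on every input, including that adversarial one. This is just a sketch under those assumptions, not anyone’s specific algorithm from the discussion.

```python
import random
import sys

def quicksort(arr, choose_pivot):
    """Sort a copy of arr, counting element-vs-pivot comparisons.
    The pivot index for each subarray is picked by choose_pivot."""
    comparisons = 0

    def sort(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot = a[choose_pivot(a)]
        comparisons += len(a) - 1          # each other element is compared to the pivot once
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return sort(less) + equal + sort(greater)

    return sort(list(arr)), comparisons

n = 2000
worst_case_input = list(range(n))          # already sorted: worst case for the fixed pivot rule
sys.setrecursionlimit(10_000)              # the deterministic run recurses ~n deep on this input

_, det = quicksort(worst_case_input, choose_pivot=lambda a: 0)
_, rnd = quicksort(worst_case_input, choose_pivot=lambda a: random.randrange(len(a)))

print(f"deterministic first-element pivot: {det} comparisons (~n^2/2)")
print(f"randomized pivot:                  {rnd} comparisons (~n log n in expectation)")
```

On this input the deterministic rule does about n²/2 ≈ 2,000,000 comparisons, while the randomized rule does on the order of n log n ≈ 22,000 in expectation. That is exactly the trade-off above: randomization gives up a little on “friendly” inputs in exchange for a guarantee that no single input can force the bad case on you.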