Can I check that I follow how you recover quantilization?
Are you evaluating distributions over actions, and caring about the worst-case expectation of that distribution?
If so, proposing a particular action is evaluated badly? (Since there’s a utility function in your set that spikes downward at that action.)
But proposing a range of actions to randomize amongst can be assessed to have decent worst-case expected utility, since particular downward spikes get smoothed over, and you can rely on your knowledge of “in-distribution” behaviour?
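To check my picture, here’s a toy numerical sketch of the mechanism as I understand it (the spike model, the penalty size, and all the numbers are my own invention, not from your setup):

```python
# Toy model: the adversary's utility set lets it subtract a large penalty
# ("downward spike") from any ONE action; we evaluate distributions over
# actions by their worst-case expected utility against that adversary.
ACTIONS = list(range(100))

def base_utility(a):
    return a / 100.0  # later actions look better in-distribution

def worst_case_value(dist):
    """Worst-case expected utility of a distribution over actions,
    when the adversary may spike the utility of one action downward."""
    SPIKE = 10.0
    expected = sum(p * base_utility(a) for a, p in dist.items())
    # The adversary spikes wherever the most probability mass sits.
    worst_spike = max(dist.values()) * SPIKE
    return expected - worst_spike

# A point action: the spike hits with probability 1.
point = {99: 1.0}

# A quantilizer-style proposal: uniform over the top 10% of actions,
# so any single spike only hits 1/10 of the mass.
top_decile = {a: 1 / 10 for a in range(90, 100)}

print(worst_case_value(point))       # spike lands fully: very bad
print(worst_case_value(top_decile))  # spike smoothed over: nearly fine
```

On this toy model the point proposal is ruined by one spike while the spread proposal barely notices it, which is the smoothing intuition I was describing.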
Edited to add: fwiw it seems awesome to see quantilization formalized as popping out of an adversarial robustness setup! I haven’t seen something like this before, and hadn’t noticed the infrabayes tools building toward these kinds of results. I’m very much wanting to understand why this works in my own native-ontology-pieces.
If that’s correct, here are some places this conflicts with my intuition about how things should be done:
I feel awkward about the randomness being treated as essential. I’d rather be able to do something other than randomize in order to get my mild optimization, and something feels unstable/non-compositional about needing randomness in place for your evaluations… (Not that I have an alternative that springs to mind!)
I also feel like “worst case” is perhaps problematic, since it brings maximization back in, and you then need to rely on your convex set being smooth in some sense in order to get good outcomes. If I have a distribution over potential utility functions, and quantilize over the worst 10% of possibilities, does that do the same sort of work that “worst case” is doing for mild optimization?
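To make that last question concrete, here’s a toy version of the “worst 10% of possibilities” alternative (averaging over the worst decile of sampled utility functions, CVaR-style; the noise model and numbers are again my own invention):

```python
import random

random.seed(0)

ACTIONS = list(range(100))

# Hypothetical ensemble standing in for "a distribution over potential
# utility functions": a shared base term plus independent per-action noise.
def sample_utility_fn():
    noise = {a: random.gauss(0.0, 1.0) for a in ACTIONS}
    return lambda a: a / 100.0 + noise[a]

UTILITY_FNS = [sample_utility_fn() for _ in range(200)]

def worst_decile_value(dist, alpha=0.10):
    """Average expected utility over the worst `alpha` fraction of the
    sampled utility functions, rather than the single worst case."""
    values = sorted(sum(p * u(a) for a, p in dist.items())
                    for u in UTILITY_FNS)
    k = max(1, int(alpha * len(values)))
    return sum(values[:k]) / k

point = {99: 1.0}
spread = {a: 1 / 10 for a in range(90, 100)}

# The spread proposal averages out the per-utility-function noise, so its
# worst-decile value stays near its mean; the point proposal inherits the
# full downside of its unluckiest utility functions.
print(worst_decile_value(point), worst_decile_value(spread))
```

At least in this toy, worst-10%-of-possibilities seems to penalize point actions and reward spread ones in roughly the way the hard worst case does, which is why I wonder whether the full maximization is needed.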