This doesn’t (I think) really have much to do with randomness as such. The relevant thing about R is that it’s shared information that a hypothetical adversary doesn’t get to see.
If uy isn’t chosen adversarially, then our players don’t care about pessimizing over uy but about something like an average over uy, and then R isn’t needed. Or, if they are ultra-cautious people who universally care about worst cases, then they don’t care about the expectation w.r.t. R but about the worst case as R varies, and then R doesn’t help. R only helps them when uy is chosen adversarially and R isn’t, or when they care about pessimizing w.r.t. uy but not w.r.t. R.
So the real conclusion here is not “sometimes shared randomness helps”, it’s “if some people are trying to coordinate in the presence of an adversary, anything they share and the adversary has no access to helps”.
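As a toy illustration of that conclusion (a made-up coordination game, not from the post): suppose two players each output a bit, and they score 1 only if they match each other while mismatching an adversary's guess. Any deterministic joint strategy is visible to the adversary and scores 0 in the worst case; a shared random bit R, hidden from the adversary, secures expected score 1/2 against every guess.

```python
import itertools

# Hypothetical coordination game: players output bits p1, p2; they
# score 1 iff the bits match each other and differ from the
# adversary's guess.
def payoff(p1, p2, guess):
    return 1 if (p1 == p2 and p1 != guess) else 0

# Deterministic joint strategies: the adversary sees the strategy
# and picks the worst guess, so the best worst-case payoff is 0.
det_worst = max(
    min(payoff(p1, p2, g) for g in (0, 1))
    for p1, p2 in itertools.product((0, 1), repeat=2)
)
assert det_worst == 0

# Shared hidden randomness: both players output the shared bit R.
# Against ANY fixed guess, the expected payoff is 1/2.
for g in (0, 1):
    exp = sum(payoff(r, r, g) for r in (0, 1)) / 2
    assert exp == 0.5
```

Note that any shared secret bit would do the same job here; the randomness only matters insofar as it keeps the adversary from predicting R.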
Sometimes you want to prove a theorem like “The algorithm works well.” You generally need randomness if you want algorithms that work without strong assumptions on the environment, whether or not there is literally an adversary (who knows what kinds of correlations exist in the environment, whether or not you call them an “adversary”).
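The standard textbook example of this (my addition, not from the comment): quicksort with a fixed pivot rule does about n²/2 comparisons on one particular input family (already-sorted input), while a random pivot achieves expected O(n log n) comparisons on every input, with no assumptions about what produced the input.

```python
import random

random.seed(0)  # for reproducibility of the random-pivot run

# Count comparisons done by quicksort under a given pivot rule
# (elements assumed distinct).
def comparisons(xs, pick):
    if len(xs) <= 1:
        return 0
    p = pick(xs)
    rest = [x for x in xs if x != p]
    lo = [x for x in rest if x < p]
    hi = [x for x in rest if x > p]
    return len(rest) + comparisons(lo, pick) + comparisons(hi, pick)

n = 200
sorted_input = list(range(n))

# Deterministic "first element" pivot hits its worst case here:
# exactly n*(n-1)/2 comparisons.
det = comparisons(sorted_input, lambda xs: xs[0])
assert det == n * (n - 1) // 2  # 19900

# A random pivot does far better on the same input (and on any input,
# in expectation), without knowing anything about the environment.
rnd = comparisons(sorted_input, random.choice)
assert rnd < det
```

The point is that no "adversary" need exist: sorted input arises naturally, and the randomized algorithm's guarantee holds regardless of which correlations the environment happens to contain.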
A Bayesian might not like this, because they’d prefer to prove theorems like “The algorithm works well on average for a random environment drawn from the prior the agents use,” for which randomness is never useful.
But specifying the true prior is generally hideously intractable. So a slightly wiser Bayesian might want to prove statements like “The algorithm works well on average for a random environment drawn from the real prior,” where the “real prior” is some object we can talk about but have no explicit access to. And now the wiser Bayesian is back to needing randomness.
It seems like a Bayesian can conclude that randomness is useful if their prior puts significant weight on “the environment happens to contain something that iterates over my decision algorithm and returns its worst-case input, or something that’s equivalent to or approximates this” (which it should, especially after updating on their own existence). I guess right now we don’t know how to handle this in a naturalistic way (e.g., let both intentional and accidental adversaries fall out of some simplicity prior), and so are forced to explicitly assume the existence of adversaries (as in game theory and this post).
This seems basically right. As discussed in the conclusion, there are reasons to care about worst-case performance other than literal adversaries.
Sure, but in non-adversarial cases (really, any cases where u is determined independently of the strategies chosen) the players can just fix R in advance as part of the strategy, rather than treat it as a shared random component determined later.
That’s right, but getting the worst-case guarantee requires this initial choice to be random.
Nope. Random choice gives a specific value for R each game. The outcome for that iteration is IDENTICAL to the outcome if that R was chosen intentionally. Randomness only has game value as a mechanism to keep information from an adversarial actor.
To be clear, by “worst-case guarantee” I mean “the expected utility is guaranteed to be pretty good regardless of uy”, which is unattainable without shared randomness (claim 1).
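A minimal numeric sketch of this distinction (toy utilities, not from the post): per realization, a randomly drawn R = r does give exactly the same outcome as intentionally choosing r, as the parent comment says. But the worst-case guarantee attaches to the distribution over R, not to any single realization: every fixed r can be exploited by an adversary who sees the strategy, while the uniform distribution over R secures expected utility 1/2 against every uy.

```python
# Hypothetical utility: the players do well iff the shared bit R
# differs from the adversarial parameter uy.
def util(r, uy):
    return 1 if r != uy else 0

# Fix R in advance (intentional choice): an adversary who sees the
# strategy picks uy = r, so the guaranteed utility is 0.
fixed_guarantee = max(min(util(r, uy) for uy in (0, 1)) for r in (0, 1))
assert fixed_guarantee == 0

# Draw R uniformly, hidden from the adversary: whatever uy is chosen,
# the EXPECTED utility is 1/2.
random_guarantee = min(
    sum(util(r, uy) for r in (0, 1)) / 2 for uy in (0, 1)
)
assert random_guarantee == 0.5
```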
I think you are either misunderstanding or disagreeing with a lot of the terminology on randomized algorithms and worst-case guarantees that are commonly used in CS and statistics. This article is a decent introduction to this topic.