A Bayesian might not like this, because they’d prefer to prove theorems like “The algorithm works well on average for a random environment drawn from the prior the agents use,” for which randomness is never useful.
It seems like a Bayesian can conclude that randomness is useful if their prior puts significant weight on “the environment happens to contain something that iterates over my decision algorithm and returns its worst-case input, or something that’s equivalent to or approximates this” (which it should, especially after updating on their own existence). I guess right now we don’t know how to handle this in a naturalistic way (e.g., letting both intentional and accidental adversaries fall out of some simplicity prior), and so are forced to explicitly assume the existence of adversaries (as in game theory and this post).
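A minimal sketch of that argument, assuming a toy matching-pennies-style payoff (1 for evading the adversary’s prediction, 0 for being predicted); the agent and environment functions here are illustrative, not anything from the post. The adversary simulates the agent’s decision algorithm on the same input, so a deterministic agent is held to its worst case while a randomized agent keeps an expected payoff of about 0.5:

```python
import random

def deterministic_agent(history):
    # A fixed decision rule (always play 0). Any deterministic rule fares the
    # same here, since the adversary can simulate it exactly.
    return 0

def randomized_agent(history):
    # Mixes uniformly over the two actions, so simulating the code does not
    # reveal the actual move.
    return random.randrange(2)

def adversarial_environment(agent, history):
    # The "something that iterates over my decision algorithm": it runs the
    # agent's code on the same history and plays the matching (worst-case)
    # move. Against a randomized agent it can only match an independent
    # sample of the mixture, not the move actually played.
    return agent(history)

def average_payoff(agent, rounds=10000):
    total, history = 0, []
    for _ in range(rounds):
        move = agent(history)
        env_move = adversarial_environment(agent, history)
        # Payoff 1 for mismatching the adversary's prediction, 0 otherwise.
        total += 1 if move != env_move else 0
        history.append((move, env_move))
    return total / rounds

print("deterministic:", average_payoff(deterministic_agent))  # ~0.0: always predicted
print("randomized:  ", average_payoff(randomized_agent))      # ~0.5: unpredictable
```

The only point the simulation makes is that the adversary can copy the agent’s code but not its random bits, which is exactly the kind of environment the quoted prior puts weight on.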