Reference Classes for Randomness
(Follow-up to Randomness vs. Ignorance)
I’ve claimed that if you roll a die, your uncertainty about the result of the roll is random, because in 1/6th of all situations in which you have just rolled a die, it will come up a three. Conversely, if you wonder about the existence of a timeless God, whatever uncertainty you have is ignorance. In this post, I make the case that this distinction isn’t just an analog of probability inside vs. outside a model, but is actually fundamental (once a few more ideas are added).
The randomness in the above example doesn’t come from some inherent “true randomness” of the die; in fact, this notion of randomness is compatible with determinism. (You could argue that it is then not real randomness but just ignorance in disguise, but please accept the term randomness, whenever I bold it, as a working definition.) This randomness is simply the result of taking all situations that are identical to the current one from your perspective and observing that, among those, one in six will have the die come up a three. This is a general principle that can be applied to any situation: a fair die, a biased die, a delay in traffic, whatever.
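To make the counting picture concrete, here is a minimal sketch in Python (my own illustration; the weights for the biased die are invented for the example): a “situation” is just a fully specified roll, and the probability of a three is nothing more than the fraction of situations in which a three comes up.

```python
import random

# Toy model of the counting principle above: each "situation" is a fully
# specified roll, but the agent cannot tell the situations apart, so the
# probability of a three is just the fraction of situations in which a
# three comes up.

def fraction_of_threes(situations):
    """Fraction of situations in which the die shows a three."""
    return sum(1 for outcome in situations if outcome == 3) / len(situations)

# Fair die: every outcome is equally represented among the situations.
fair = [random.randint(1, 6) for _ in range(100_000)]
print(fraction_of_threes(fair))    # ~1/6 ≈ 0.167

# Biased die (weights invented for illustration): same principle,
# only the counts change.
biased = random.choices([1, 2, 3, 4, 5, 6], weights=[1, 1, 3, 1, 1, 1], k=100_000)
print(fraction_of_threes(biased))  # ~3/8 = 0.375
```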
The “identical” in the principle above needs unpacking. If you roll a die and we consider only the situations that are exactly identical from your perspective, then the die will come up a three in either far more or far less than 1/6th of them. Regardless of whether the universe is fully deterministic, the current state of the die is sure to at least correlate with the chance of a three ending up on top.
However, you are not actually able to distinguish between the situation where you just rolled the die in such a way that it will come up a three and the situation where you rolled it in such a way that it will come up a five, so you need to group both situations together. More precisely, you need to group all situations that, to you, look indistinguishable with respect to the result of the die into one class. Then, if the die comes up a three in 1/6th of all situations belonging to this class, your uncertainty with respect to the die roll is random with probability 1/6 for a three. This grouping is based both on computational limitations (you see the die but can’t compute how it will land) and on missing information (you don’t see the die at all). If you were replaced by a superintelligent agent, its reference class would be smaller, but some grouping based on hidden information would remain. Formally, think of an equivalence relation on the set of all brain states.
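For concreteness, here is a minimal sketch of that equivalence-class picture (my own toy model in Python; the hidden “spin” variable, the observe function, and all names are invented for illustration, and perception and computation are lumped into a single observation): the world is deterministic, the ordinary agent’s class contains every tumbling-die situation and gives frequency 1/6, while a sharper agent’s smaller class pins the outcome down almost completely.

```python
import random
from dataclasses import dataclass

# Toy formalisation of the grouping step (an illustrative model, not anything
# from the post itself): each situation has a hidden "spin" that fixes the
# outcome, and an agent only gets a coarse observation of it. Situations
# with the same observation are indistinguishable to that agent and form one
# equivalence class; the probability of a three is then the frequency of
# threes within the agent's own class.

@dataclass(frozen=True)
class Situation:
    spin: float   # hidden state, uniform in [0, 1); determines the outcome
    outcome: int  # = int(spin * 6) + 1, so the world is fully deterministic

def make_situation():
    spin = random.random()
    return Situation(spin=spin, outcome=int(spin * 6) + 1)

def observe(s, superintelligent=False):
    """Everything the agent can perceive and compute about situation s.
    (Perception and computation are lumped into one function here.)"""
    # A sharper agent recovers more of the hidden state, so its classes
    # are smaller; the ordinary agent sees nothing outcome-relevant.
    return round(s.spin, 2) if superintelligent else "die is tumbling"

def class_frequency(situations, me, value=3, superintelligent=False):
    """Frequency of `value` among situations indistinguishable from `me`."""
    my_obs = observe(me, superintelligent)
    cls = [s for s in situations if observe(s, superintelligent) == my_obs]
    return sum(s.outcome == value for s in cls) / len(cls)

situations = [make_situation() for _ in range(200_000)]
me = situations[0]
print(class_frequency(situations, me))                         # ~1/6
print(class_frequency(situations, me, superintelligent=True))  # usually ~0.0 or ~1.0
```

Nothing in this sketch is indeterministic; the 1/6 falls out of how coarse the agent’s observation is, which is the sense in which this randomness is compatible with determinism.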
So at this point, I’ve based the definition of randomness both on a frequentist principle (counting the number of situations in which the die comes up a three vs. not a three) and on a more Bayesian-like principle of subjective uncertainty (taking your abilities as the basis for the choice of reference class). Maybe this doesn’t yet look like a particularly smart way to do it. But in this post, I am only arguing that this model is consistent: all uncertainty can be viewed as made up of randomness and/or ignorance, and no contradictions arise. In the next post, I’ll argue that it’s also quite useful, in that several controversial problems are answered immediately by adopting this view.
Related: https://plato.stanford.edu/entries/chance-randomness/