Yeah, this seems like we’re using “probability” to mean different things.
Probabilities are unavoidable in any rational decision theory. There is no alternative to assigning probabilities to expected experiences conditional on potential actions. https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences .
Going from the probability of an anticipated experience to more aggregated, hard-to-resolve probabilities about modeled groupings of experiences (or non-experiences) is not clearly required for anything; it's more a compression of models, because you can't actually predict things at the detailed level the universe runs on.
So the map/territory distinction seems VITAL here. Probability is in the map. Models are maps. There are no similarity molecules or probability fields that tie all die rolls together. It's just that our models are easier (and still work fairly well) if we treat them similarly because, at the level we're considering, they share some abstract properties in our models.
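To make the "probability is in the map" point concrete, here's a minimal sketch of my own (not from the comment): two agents watch the same die rolls, the same territory, but hold different models, so they assign different probabilities to the next roll. The specific models (a fixed fair-die assumption versus Laplace's rule of succession) are just illustrative choices.

```python
from fractions import Fraction

def fair_die_model(rolls):
    # Agent A's map: the die is fair by assumption; the evidence is ignored.
    return Fraction(1, 6)

def laplace_model(rolls):
    # Agent B's map: Laplace's rule of succession over six outcomes,
    # i.e. (count of sixes + 1) / (number of rolls + 6).
    sixes = sum(1 for r in rolls if r == 6)
    return Fraction(sixes + 1, len(rolls) + 6)

rolls = [6, 6, 3, 6, 1, 6]   # one shared territory: the actual rolls
p_a = fair_die_model(rolls)  # 1/6
p_b = laplace_model(rolls)   # 5/12
```

Nothing about the die itself differs between the agents; the differing probabilities live entirely in their models.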
Ah, these two comments, and that of G Gordon Worley III, have made me realise that I didn’t at all make explicit that I was taking the Bayesian interpretation of probability as a starting assumption. See my reply to G Gordon Worley III for more on that, and the basic intention of this post (which I’ve now edited to make it clearer).