I don’t see a reason why you can’t do it even if it’s wrong.
If it’s wrong, then it’s not clear what exactly we are doing. If you run out of sample space, there is no way to correct this mistake, because that is what the sample space means: the options still available.
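To make the "no way to correct this mistake" point concrete, here is a minimal Python sketch (the coin biases are made up for illustration) of Bayesian updating over a sample space that omits the true hypothesis. No amount of evidence can shift probability onto a possibility the space never contained:

```python
import random

# Hypothetical example: the coin actually lands heads 90% of the time,
# but our "sample space" only contains a fair coin (0.5) and a
# tails-biased coin (0.1). The true hypothesis is simply missing.
priors = {0.5: 0.5, 0.1: 0.5}  # hypothesis P(heads) -> prior probability

def update(posterior, outcome):
    """One Bayesian update on a single coin flip ('H' or 'T')."""
    unnormalized = {}
    for p_heads, prob in posterior.items():
        likelihood = p_heads if outcome == 'H' else 1 - p_heads
        unnormalized[p_heads] = prob * likelihood
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

random.seed(0)
posterior = dict(priors)
for _ in range(100):
    outcome = 'H' if random.random() < 0.9 else 'T'  # true bias: 0.9
    posterior = update(posterior, outcome)

# All the mass piles onto the least-wrong hypothesis (0.5); the true
# value 0.9 can never be recovered, because updating reweights the
# sample space but never adds new points to it.
print(posterior)
```

The updating machinery itself works perfectly here; the unfixable error was made earlier, in the choice of which hypotheses to include.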
The problem of choosing a sample space for Bayesian updating is the same as the problem of finding a formalism for encoding a solution to the ontology problem (a preference that is no longer in danger of being defined in terms of misconceptions).
(And if there is no way to define a correct “sample space”, the framework itself is bogus, though the solution seems to be that we shouldn’t seek a set of all possible worlds, but rather a set of all possible thoughts...)
As I see it, if it’s wrong, then you’re calculating probabilities for a different “metaverse”. As I said, it seems to me that if you’re wrong, you should be able to notice: if you do something many times, each time calculating a 70% probability of result X, but you never get X, it’s obvious you’re doing something wrong. And since, as far as I can tell, the structure of the calculation really is correct, it must be your premise that’s wrong, i.e. which world-set (and measure) you picked to calculate in.
The case where you run out of sample space is just the more obvious one: it’s the world-set you used that’s wrong (because you ran out of worlds before you even used the measure).
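The "notice you're wrong" argument above can be quantified: if your 70% prediction never comes true, the probability that a correct model would produce that track record shrinks geometrically. A small sketch (the 70% figure is the one from the example above; the trial counts are arbitrary):

```python
def p_never_seen(p_event: float, n_trials: int) -> float:
    """Probability of never observing the event in n independent trials,
    assuming your stated probability of it is actually correct."""
    return (1 - p_event) ** n_trials

# How plausible is a correct 70% model after n straight misses?
for n in (5, 10, 20):
    print(n, p_never_seen(0.7, n))
# After 20 misses in a row, a correct 70% model would have roughly a
# 3.5e-11 chance of producing that record -- it is far likelier that
# the world-set (and measure) you picked is wrong.
```

So the failure mode isn't silent: miscalibration of this kind becomes overwhelmingly detectable after a modest number of repetitions.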
Again, my interest in this is not that it seems like a good way of calculating probabilities. It’s that it seems like the only possible way, hard or impossible as it may be.
Part of the reason for publishing this was to (hopefully) find through the comments another “formalism” that (a) seems right, (b) seems general, but (c) isn’t reducible to that schema.