The set of all possible worlds is a confusing subject,
It certainly is. That is why the schema above fascinated me when I made it explicit: although the concepts involved are not rigorously defined, the relationships between them (as expressed in the schema) feel rigorously correct. (ETA: A bit like noticing that elementary particles parallel a precise mathematical structure, but not yet knowing what particles are and why they do that.)
In a manner of speaking, the schema explicitly moves the confusion about “what probability is” to “which possible worlds to consider”. This is done implicitly in many problems of probability (which reference class to pick for the outside view, or how to count possible outcomes), but the schema makes it much clearer. (To me, at least; I published this in case it has a similar effect on others.)
it doesn’t seem possible to rigorously define the collection of all possible worlds, for formal preference to talk about
I’m not sure I agree. The schema doesn’t say what collection of universes to use, but I don’t see why you couldn’t just rigorously define one of your choosing and use it. (If I’m missing something here, please give me a link.) Note that the one you picked can be “wrong” in some sense, and thus the probabilities you obtain may not be helpful, but I don’t see a reason why you can’t do it even if it’s wrong.
Interestingly, the schema does theoretically provide a way of noticing that you picked a bad collection of worlds: if you end up with an empty E (the subset of worlds agreeing with your experiences), then you certainly picked a bad one.
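(To make this concrete, here’s a minimal sketch in Python. Everything in it is an illustrative placeholder of my own: a “world” is just a dict of facts, the measure is plain counting, and the observed facts are made up.)

```python
# Toy sketch, all placeholders: a "world" is a dict of facts,
# the measure is uniform counting, and the facts are invented.

worlds = [
    {"coin": "heads", "sky": "blue"},
    {"coin": "tails", "sky": "blue"},
    {"coin": "heads", "sky": "green"},
]

experiences = {"sky": "blue"}  # what I have actually observed so far

# E: the subset of worlds agreeing with my experiences
E = [w for w in worlds if all(w.get(k) == v for k, v in experiences.items())]

if not E:
    # Empty E: the collection of worlds I chose cannot even contain me,
    # so I certainly picked a bad one.
    raise ValueError("the chosen collection of worlds contradicts my experiences")

def probability(hypothesis):
    """Measure of worlds in E satisfying the hypothesis, relative to all of E."""
    return sum(1 for w in E if hypothesis(w)) / len(E)

print(probability(lambda w: w["coin"] == "heads"))  # 0.5 under this toy choice
```

(If I had instead observed, say, a red sky, E would come out empty and the check would fire; that’s the “you certainly picked a bad one” case.)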
I’m a bit fuzzy about what it means when your experiences “consistently” happen to be improbable (but never impossible) according to your calculation. In Bayesian terms, you correctly update on every experience, but your predictions keep being wrong. The schema seems correct, so either you picked a bad collection of possible worlds, or you just happen to be in a world that’s unpredictable (in the sense that even if you pick the best decision procedure possible, you still lose; the world just doesn’t allow you to win). In the latter case it’s unsurprising: you can’t win, so it’s “normal” that you can’t find a winning strategy. The former case, though, should allow you to “escape” by finding the correct set of worlds.
is related to the so-called “ontology problem” in FAI, where we need to define preference for Friendly AI based on human preference, while human preference may be defined in terms of incomplete and/or incorrect understanding of the world
Note that I intentionally didn’t mention preferences anywhere in the post. (Actually, I meant to make it explicit that they’re orthogonal to the problem; I just forgot.)
The question of preferences seems to me perfectly orthogonal to the schema. That is, if you pick a set of possible worlds but you still can’t define preferences well, then you’re confused about preferences. If, say, you have a class of “possible world sets”, and you can rigorously define your preferences within each such set, but you can’t pick which set to use, then you’re confused about your ontology.
In other words, the schema allows the confusion to be divided into two separate sources of confusion. It only helps in the sense that it transforms one larger problem into two smaller problems; it’s not its fault that they’re still hard.
There’s a related but more subtle point that fascinated me: even if you don’t like the schema because it’s not helpful enough, it still seems correct. No matter how else you specify your problem, if you do it correctly it will still be a special case of the schema, and you’re still going to have to face it. In a sense, how well you can solve the schema above is a bound on how rational you are.
For me it was a bit like finding out that a certain problem is NP-complete: once you know that, you can find special cases that are easier to solve but still useful, and make do with them; but until your problem-solving is NP-strong, you know that you can’t solve the general case. (This doesn’t prevent CS researchers from investigating some properties of those and even harder problems.)
ETA: And the reason it was fascinating was that seeing the schema gave me a much clearer notion of how hard it is.
I don’t see a reason why you can’t do it even if it’s wrong.
If it’s wrong, then it’s not clear what exactly we are doing. If you run out of sample space, there is no way to correct this mistake, because that’s what the sample space means: the options still available.
The problem of choosing a sample space for Bayesian updating is the same problem as finding a formalism for encoding a solution to the ontology problem (a preference that is no longer in danger of being defined in terms of misconceptions).
(And if there is no way to define a correct “sample space”, the framework itself is bogus, though the solution seems to be that we shouldn’t seek a set of all possible worlds, but rather a set of all possible thoughts...)
I don’t see a reason why you can’t do it even if it’s wrong.
If it’s wrong, then it’s not clear what exactly we are doing. If you run out of sample space, there is no way to correct this mistake, because that’s what the sample space means: the options still available.
As I see it, if it’s wrong, then you’re calculating probabilities for a different “metaverse”. As I said, it seems to me that if you’re wrong then you should be able to notice: if you do something many times, and each time you calculate a 70% probability of result X, but you never get X, it’s obvious you’re doing things wrong. And since, as far as I can tell, the structure of the calculation is really correct, it must be your premise that’s wrong, i.e. which world-set (and measure) you picked to calculate in.
The case where you run out of sample space is just the more obvious one: it’s the world-set you used that’s wrong (because you ran out of worlds before you even applied the measure).
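(To make the “you should be able to notice” point concrete, here’s a toy check, again in Python, with made-up numbers. Under your own model, a long run of 70% predictions that never come true is astronomically unlikely, so past some point you suspect the world-set you picked rather than bad luck.)

```python
# Toy check, illustrative numbers only: I keep assigning 70% to X, and X never happens.
p_predicted = 0.7   # probability I assign to X on each trial
n_trials = 20       # how many times I have tried
n_hits = 0          # how many times X actually happened

# Probability, under my own model, of seeing exactly this streak of misses.
prob_of_streak = (1 - p_predicted) ** (n_trials - n_hits) * p_predicted ** n_hits
print(prob_of_streak)  # ~3.5e-11

if prob_of_streak < 1e-6:  # arbitrary "this can't just be bad luck" threshold
    # The structure of the calculation is fine, so the premise is suspect:
    # the world-set (and measure) I picked to calculate in.
    print("suspect the chosen world-set, not the arithmetic")
```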
Again, my interest in this is not that it seems like a good way of calculating probabilities. It’s that it seems like the only possible way, hard or impossible as it may be.
Part of the reason for publishing this was to (hopefully) find through the comments another “formalism” that (a) seems right, (b) seems general, but (c) isn’t reducible to that schema.