You are talking about prior probability. P(Dark Lord is Death | no specific background information) is roughly equal to P(Eliezer changes things from canon), which isn’t very large; so after updating both hypotheses on an equally favorable piece of evidence, “Death is Dark Lord” still lags behind “Voldemort is Dark Lord”.
You can assign prior probabilities in various ways, and one of them is giving every hypothesis an appropriate complexity penalty (or you can just judge everything as equally likely, or give everything a simplicity penalty, or penalize every hypothesis according to how many people it affects, or...). Some ways are better than others, but:
1) Why should a “complexity penalty” work in fiction, even in rationalist fiction?
2) Why is the hypothesis “Voldemort is Dark Lord” simpler than “Death is Dark Lord” in the sense of program length? One can argue that the former hypothesis picks out a specific human from a pool of 6 billion people (or 100 billion, if you want to count every human who ever lived), while the latter refers to an entity likely to be very basic from the viewpoint of Magic.
Hope that clears up some of the confusion!
Because there will still be a countably infinite number of finite hypotheses which could be considered and only a finite amount of probability to divide among them, which necessarily implies that in the limit more complicated hypotheses will have individual probability approaching zero. This will be true in the limit even if you define ‘complexity’ differently than the person who constructed the distribution.
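A minimal sketch of this argument (my own illustration, not from the thread): weight each hypothesis by a penalty shrinking in its description length. The exponent here is an assumption chosen so the total mass converges; any penalty that decays fast enough gives the same qualitative result — the mass left over for long hypotheses vanishes geometrically.

```python
# Toy complexity-penalized prior: a hypothesis encoded as a bit string of
# length n gets weight 2**(-2*n).  There are 2**n distinct strings of
# length n, so the extra factor of 2**(-n) makes the total mass converge.

def prior(n):
    # Prior weight of ONE specific hypothesis of description length n.
    return 2.0 ** (-2 * n)

def mass_at_length(n):
    # Total prior mass shared by ALL 2**n hypotheses of length n: 2**(-n).
    return (2 ** n) * prior(n)

total = sum(mass_at_length(n) for n in range(1, 60))
tail = sum(mass_at_length(n) for n in range(30, 60))
print(total)  # geometric series 1/2 + 1/4 + ..., approaching 1
print(tail)   # hypotheses needing 30+ bits share almost no mass
```

The point is independent of the exact penalty: any normalizable weighting over a countable hypothesis space must send individual probabilities to zero as descriptions grow.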
Is “A or B” more “complex” than “A”? It seems to me that it generally takes more bits to say “A or B”, but the prior for “A” should be smaller than for “A or B”. Is there something in the “assign prior according to complexity” heuristic that accounts for that?
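One standard answer (my gloss, not from the thread): the complexity prior weights individual hypotheses, while “A or B” names an *event* — a set of hypotheses — whose probability is the sum over its members. A longer description can still denote a larger set, so no contradiction arises. A toy sketch with made-up weights:

```python
# "A or B" is a union of hypotheses, not a single hypothesis.  A
# complexity prior weights individual hypotheses; the probability of an
# event is the sum over the hypotheses it contains, so a longer
# description can still name a more probable (because larger) set.

weights = {"A": 0.20, "B": 0.15, "C": 0.65}   # assumed toy prior

p_A = weights["A"]
p_A_or_B = weights["A"] + weights["B"]        # disjoint hypotheses

# The union can never be less probable than any single member.
assert p_A_or_B >= p_A
print(p_A, p_A_or_B)
```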
Hmm, I suppose you could judge the “complexity” of the plot of a fan fic by how much it deviated from Canon.
It’s not a very useful measure.
So, consider Lesath Lestrange, an original character. Which is more likely: “Lesath thinks that Harry is his Lord”, or “Lesath is a level-3 player (or any specific number instead of “3”) who wants to deceive Harry, and is also H&C, which is possible because he knows how to fool anti-Obliviation wards”?
Your approach will just say, “I don’t know what to make of it. We have already departed from canon, and I can’t work here,” with a sad look on its face.
EDIT: I re-read my comment, and it seems arrogant and condescending. I didn’t intend it to be, and I’m not sure how I should change it, so I figured I should just apologize beforehand. Your approach to assigning priors is a reasonable one; it’s just lacking some vital parts.
I agree that it’s an incomplete measure. As you point out, we would need some measure of the complexity of divergences from Canon, which requires a more general measure.
Another way to put it would be, I don’t think it’s unreasonable in a fanfic to assign all the details prescribed in Canon a complexity of zero.
This seems reasonable indeed.
(If you are interested, the thing you are pointing at is conditional Kolmogorov complexity.)
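Conditional Kolmogorov complexity K(x|y) — the length of the shortest program producing x given y as input — is uncomputable, but a common rough proxy substitutes a real compressor: the extra compressed bytes needed for x once y is already known. A sketch of that proxy applied to the fanfic analogy (the strings are invented examples, and zlib is a crude stand-in for a universal machine):

```python
import zlib

def approx_cond_complexity(x: bytes, y: bytes) -> int:
    """Crude compression-based proxy for K(x | y): the extra compressed
    bytes needed to describe x once y is already available."""
    return len(zlib.compress(y + x, 9)) - len(zlib.compress(y, 9))

canon = b"Voldemort is the Dark Lord. " * 50
restatement = b"Voldemort is the Dark Lord. " * 10   # pure Canon repeat
divergence = b"Death itself is the Dark Lord and walks among the living."

# Restating Canon is nearly free once Canon is given; a divergence from
# Canon still costs real description length -- which is the sense in
# which Canon details get "complexity zero" in a fanfic prior.
print(approx_cond_complexity(restatement, canon))
print(approx_cond_complexity(divergence, canon))
```

This matches the suggestion above: conditioning on Canon makes Canon-conforming details cheap, while the prior penalty falls only on the divergences.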