Okay, if you have some reason to believe that the question was chosen to have a specific answer, instead of being chosen directly from questionspace, then you can revise up. I didn’t see a reason to think this was going on when the aliens were asking the question, though.
Hmm. As you point out, questionspace is biased towards “No” when represented in human formalisms (if weighting by length, it’s biased by nearly the length of the “not” symbol), and it would seem weird if it weren’t so in an alien representation. Perhaps that’s a reason to revise down rather than up when taking information off the table. But it doesn’t seem like it should be more than (say) a decibel’s worth of evidence for “No”.
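For readers unfamiliar with the unit: evidence in decibels is 10·log10 of the likelihood ratio, so one decibel corresponds to an odds factor of about 1.26. A quick sanity check of how little that moves a 50/50 prior (the numbers are purely illustrative, not anything asserted in the thread):

```python
def update_by_decibels(prior_prob, decibels):
    """Shift a probability by a given number of decibels of evidence.

    Decibels of evidence = 10 * log10(likelihood ratio), so the posterior
    odds are the prior odds times 10 ** (decibels / 10).
    """
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * 10 ** (decibels / 10)
    return posterior_odds / (1 + posterior_odds)

# One decibel of evidence for "No" against an even prior:
print(update_by_decibels(0.5, 1))  # ~0.557 -- a very small shift
```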
ETA: I think we each just acknowledged that the other has a point. On the Internet, no less!
Isn’t it awesome when that happens? :D
I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events:
1. Strigli wins the game.
2. It rains immediately after the match is over.
3. Strigli wins the game AND it rains immediately after the match is over.
I am not sure if your scheme ensures that this does not happen; a quick check of the constraint is sketched below.
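A minimal check, with made-up numbers (nothing here is from the thread): whatever prior you pick, the conjunction can never be more probable than either conjunct.

```python
def respects_axioms(p_win, p_rain, p_win_and_rain):
    """A necessary condition from the probability axioms:
    the conjunction may not be more probable than either conjunct."""
    return p_win_and_rain <= min(p_win, p_rain)

print(respects_axioms(0.5, 0.5, 0.25))  # True: e.g. independent events
print(respects_axioms(0.5, 0.5, 0.5))   # passes the check, but equal values are
                                        # only coherent if the events coincide
print(respects_axioms(0.5, 0.5, 0.6))   # False: violates the axioms
```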
Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don’t understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses?
One way I can imagine is to assign each of them a probability that decreases with its Kolmogorov complexity (say, proportional to 2^{-K}). The good thing about a Kolmogorov-complexity prior is that it can be made to satisfy the axioms of probability. But I have only seen Kolmogorov complexity defined for strings and such; I don’t know how to define it for complicated things like hypotheses. Also, even if there is a way to define it, I can’t completely convince myself that it gives a correct prior probability.
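For reference, the textbook construction behind this idea (stated generally, not as a claim about what either commenter had in mind) weights each hypothesis by its description length:

$$P(h) \propto 2^{-K(h)},$$

where $K(h)$ is the length in bits of the shortest program, on some fixed prefix universal machine, that outputs $h$. Because prefix-free programs satisfy Kraft’s inequality, $\sum_h 2^{-K(h)} \le 1$, so these weights can be normalized into a genuine probability distribution; that is what makes a complexity-based prior consistent with the axioms.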
I just wanted to note that it is actually possible to answer 50% to all three, provided that the questions are asked in order (not simultaneously). That is, I might logically think that each of (1) and (2) is true with 50% probability at the time I’m asked about it. Then, when I’m asked (3), I might logically deduce that (3) is true with 50% probability. However, this only works because the very fact that I was asked (3) caused me to raise my confidence that (1) and (2) are true. It’s a fine point that seems easy to miss.
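Here is a toy numeric version of that point (all numbers are invented for illustration; the thread gives none). Suppose (1) and (2) start out independent at 50% each, and suppose the asker is three times as likely to pose question (3) in worlds where the conjunction is true. Conditioning on having been asked (3) then puts the conjunction at exactly 50% while pushing each conjunct up to about 67%, so nothing incoherent has happened:

```python
from itertools import product

# Hypothetical prior: (1) "Strigli wins" and (2) "it rains" independent at 0.5 each.
prior = {(win, rain): 0.25 for win, rain in product([True, False], repeat=2)}

# Invented assumption: question (3) gets asked with probability 0.6 when the
# conjunction holds, and 0.2 otherwise.
def p_asked(win, rain):
    return 0.6 if (win and rain) else 0.2

# Joint probability of each world AND being asked question (3), then condition.
joint = {w: p * p_asked(*w) for w, p in prior.items()}
total = sum(joint.values())
posterior = {w: p / total for w, p in joint.items()}

p_conjunction = posterior[(True, True)]
p_win = sum(p for (win, _), p in posterior.items() if win)
p_rain = sum(p for (_, rain), p in posterior.items() if rain)

print(p_conjunction)  # 0.5   -- answer 50% to question (3)
print(p_win, p_rain)  # ~0.667 each -- being asked (3) raised these above 50%
```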
On a somewhat related point, I’ve looked at the entire discussion and it seems to me the original question is ill-posed, in the sense that the question, with high probability, doesn’t mean what the asker thinks it means.
Take the original example: let’s say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli. The question is intended to prevent you from having any prior information about its subject.
However, what it actually means is just that before you are asked the question, you don’t have any information about it. (And I’m not even very sure about that.) But once you are asked the question, you have received a huge amount of information: the very fact that you received that question is extremely improbable (within the class of “what could have happened instead”). Note also that it is vastly more improbable than, say, being asked by somebody on the street whether you think his son will get an A today.
“Something extremely improbable happens” means “you just received information”; the more improbable it was, the more information you received (the relationship is logarithmic: the information content of an event is the negative log of its probability).
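A quick illustration of that logarithmic relationship (the probabilities here are invented just to show the scale):

```python
import math

def information_bits(probability):
    """Self-information of an event, in bits: -log2 of its probability."""
    return -math.log2(probability)

print(information_bits(0.25))  # 2 bits   -- a mildly surprising event
print(information_bits(1e-9))  # ~30 bits -- an extremely surprising event
```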
So, the fact that you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli brings a lot of information: space travel is possible within one’s lifetime, aliens exist, aliens have that travel technology, aliens bring people to their planets, aliens can pose a question to somebody just brought to their planet, they live on at least one planet, they have something they translate as “game” in English, they have names for planets, individuals, games and teams, and they translate those names into some particular English-pronounceable (or -writable, depending on how the question was asked) form.
More subtly, you think that a Sillpruk came to you and asked you a question; this implies you have good reason to interpret the events as such (rather than, say, a block of matter arriving in front of you and making some sounds). The class of events “aliens take you to their planet and ask you a question” is vastly larger than “the same, but you realize it”.
tl;dr: I guess what I mean is that “what priors you use for a question you have no idea about” is ill-formed, because it’s pretty much logically impossible that you have no relevant information.
Definitely agree on the first point (although, to be careful, the probabilities I assign to the three events could be epsilons apart if I were convinced of a bidirectional implication between 1 and 2).
On the second part: Yep, you need to start with some prior probabilities, and if you don’t have any already, the ignorance prior of 2^{-n} for each hypothesis that can be written (in some fixed binary language) as a program of length n is the way to go. (This is basically what you described, and carrying forward from that point is called Solomonoff induction.)
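To make the 2^{-n} ignorance prior concrete, here is a toy sketch (my own illustration; real Solomonoff induction runs over all programs for a universal machine and is uncomputable, so this only mimics the shape of the update). Hypotheses are stand-ins given by a description length in bits and a flag saying whether they are consistent with the evidence seen so far; the 2^{-n} weights are then renormalized over the survivors:

```python
# Toy illustration of a length-weighted prior plus a Bayesian update.
# The "hypotheses" are (name, description_length_in_bits, consistent_with_evidence)
# triples invented for the example -- not real programs.
hypotheses = [
    ("simple hypothesis",      10, True),
    ("complicated hypothesis", 30, True),
    ("refuted hypothesis",     12, False),  # contradicted by the evidence
]

# Ignorance prior: weight 2^-n for a hypothesis of description length n bits.
prior = {name: 2.0 ** -n for name, n, _ in hypotheses}

# Update: hypotheses inconsistent with the evidence get likelihood 0,
# consistent ones keep their weight; then renormalize.
unnormalized = {name: prior[name] * (1.0 if ok else 0.0)
                for name, _, ok in hypotheses}
total = sum(unnormalized.values())
posterior = {name: w / total for name, w in unnormalized.items()}

print(posterior)
# The simple surviving hypothesis dominates the complicated one by ~2^20.
```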
In practice, it’s not possible to estimate hypothesis complexity with much precision, but it doesn’t take all that much precision to judge in cases like Thor vs. Maxwell’s Equations; and anyway, as long as your priors aren’t too ridiculously off, actually updating on evidence will correct them soon enough for most practical purposes.
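A small illustration of that last claim (invented numbers again): two agents who start with very different priors about a hypothesis, but see the same stream of evidence with a 2:1 likelihood ratio per observation, end up close together fairly quickly:

```python
def posterior(prior, likelihood_ratio, n_observations):
    """Posterior probability after n independent observations,
    each with the given likelihood ratio favouring the hypothesis."""
    odds = prior / (1 - prior) * likelihood_ratio ** n_observations
    return odds / (1 + odds)

for n in (0, 5, 10, 20):
    print(n, posterior(0.01, 2.0, n), posterior(0.90, 2.0, n))
# After 20 observations both agents are above 0.99: the evidence has
# washed out the difference in starting priors.
```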
ETA: Good to keep in mind: When (Not) To Use Probabilities