I have two basic questions that I am confused about. This is probably a good place to ask them.
What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let’s say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.
Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probability to ‘yes’ and a probability to ‘no’. What’s the smallest sequence of questions you can ask him to decide for sure that a) he is not a rationalist, b) he is not a Bayesian?
This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a −3 rating though… so apparently it wasn’t too useful.
The consensus of the comments was that the correct answer is .5.
Also of note is Bead Jar Guesses and its sequel.
If you truly have no clue, .5 yes and .5 no.
Ah, but here you have some clues, which you should update on, and knowing how to update is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of “No”. How much? Well, that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards “yes”.
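For concreteness, here is a minimal sketch of that update, using an entirely made-up distribution over the number of competing teams and the simplifying assumptions that exactly one team wins and each team is equally likely to do so:

```python
# Minimal sketch: how a prior over the unknown number of competing teams pushes
# P("Strigli wins") below 0.5, assuming exactly one team wins and every team is
# equally likely to win. The distribution below is made up for illustration.

team_count_prior = {2: 0.4, 3: 0.3, 4: 0.2, 8: 0.1}  # P(the game has n teams)

p_strigli_wins = sum(p_n / n for n, p_n in team_count_prior.items())

print(f"P(Strigli wins) = {p_strigli_wins:.3f}")  # about 0.36 with these numbers
```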
But the game of Doldun could also have the possibility of cooperative wins. Or it could be unwinnable. Or Strigli might not be playing. Or Strigli might be the only team playing—it’s the team against the environment! Or Doldun could be called on account of a rain of frogs. Or Strigli’s left running foobar could break a chitinous armor plate and be replaced by a member of team Baz, which means that Baz gets half credit for a Strigli win.
All of which means that you shouldn’t be too confident in your probability distribution in such a foreign situation, but you still have to come up with a probability if it’s relevant at all for action. Bad priors can hurt, but refusal to treat your uncertainty in a Bayes-like fashion hurts more (with high probability).
Yes, but in this situation you have so little information that .5 doesn’t seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year—does it look obvious that they shouldn’t say 50% in that case? .5 isn’t the right prior—some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.
Unless there’s some reason that they’d suspect it’s more likely for us to ask them a trick question whose answer is “No” than one whose answer is “Yes” (although it is probably easier to create trick questions whose answer is “No”, and the Striglian could take that into account), 50% isn’t a bad probability to assign if asked a completely foreign Yes-No question.
Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega’s decision algorithm, etc) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.
It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don’t have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch—and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I’m in the Sea of Tranquility, false that I’m equidistant between the Sun and the star Polaris, false that… Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is “no”.
Basically, your prior should be that everything is almost certainly false!
The odds of a random sentence being true are low, but the odds of the alien choosing to give you a true sentence are higher.
A random alien?
No, just a random alien that (1) I encountered and (2) asked me a question.
The two conditions above restrict enormously the general class of “possible” random aliens. Every condition that restricts possibilities brings information, though I can’t see a way of properly encoding this information as a prior about the answer to said question.
[ETA:] Note that I don’t necessarily accept cousin_it’s assertion, I just state my interpretation of it.
Well, let’s say I ask you whether all “fnynznaqre”s are “nzcuvovna”s. Prior to using rot13 on this question (and hypothesizing that we hadn’t had this particular conversation beforehand), would your prior really be as low as your previous comment implies?
(Of course, it should probably still be under 50% for the reference class we’re discussing, but not nearly that far under.)
Given that you chose this question to ask, and that I know you are a human, then screening off this conversation I find myself hovering at around 25% that all “fnynznaqre”s are “nzcuvovna”s. We’re talking about aliens. Come on, now that it’s occurred to you, wouldn’t you ask an E.T. if it thinks the Red Sox have a shot at the spelling bee?
Yes, but I might as easily choose a question whose answer was “Yes” if I thought that a trick question might be too predictable of a strategy.
1⁄4 seems reasonable to me, given human psychology. If you expand the reference class to all alien species, though, I can’t see why the likelihood of “Yes” should go down— that would generally require more information, not less, about what sort of questions the other is liable to ask.
Okay, if you have some reason to believe that the question was chosen to have a specific answer, instead of being chosen directly from questionspace, then you can revise up. I didn’t see a reason to think this was going on when the aliens were asking the question, though.
Hmm. As you point out, questionspace is biased towards “No” when represented in human formalisms (if weighting by length, it’s biased by nearly the length of the “not” symbol), and it would seem weird if it weren’t so in an alien representation. Perhaps that’s a reason to revise down and not up when taking information off the table. But it doesn’t seem like it should be more than (say) a decibel’s worth of evidence for “No”.
ETA: I think we each just acknowledged that the other has a point. On the Internet, no less!
Isn’t it awesome when that happens? :D
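Regarding the decibel figure mentioned a couple of comments up: a decibel is a very small nudge. A minimal sketch, assuming the usual convention that evidence in decibels is 10·log10 of the likelihood ratio:

```python
# Minimal sketch: the effect of one decibel of evidence for "No" on a 50/50 prior.
# Evidence in decibels is 10 * log10(likelihood ratio), so 1 dB corresponds to a
# likelihood ratio of 10 ** 0.1, roughly 1.26, in favor of "No".

prior_odds_no = 1.0                    # 1:1 odds, i.e. P(Yes) = P(No) = 0.5
likelihood_ratio_no = 10 ** (1 / 10)   # one decibel of evidence for "No"
posterior_odds_no = prior_odds_no * likelihood_ratio_no

p_no = posterior_odds_no / (1 + posterior_odds_no)
print(f"P(No) = {p_no:.3f}, P(Yes) = {1 - p_no:.3f}")  # about 0.557 and 0.443
```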
I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events:
1. Strigli wins the game.
2. It rains immediately after the match is over.
3. Strigli wins the game AND it rains immediately after the match is over.
I am not sure if your scheme ensures that this does not happen.
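As a concrete illustration of the constraint in question (with made-up numbers): whatever probabilities you assign, the conjunction can never exceed either conjunct.

```python
# Minimal sketch of the conjunction constraint, with made-up numbers:
# P(Strigli wins AND it rains) can never exceed P(Strigli wins) or P(it rains).

p_win = 0.5   # P(Strigli wins the game)
p_rain = 0.5  # P(it rains immediately after the match is over)

upper_bound = min(p_win, p_rain)            # the conjunction is at most 0.5
lower_bound = max(0.0, p_win + p_rain - 1)  # and at least 0.0 in this case
independent_case = p_win * p_rain           # 0.25 if the events were independent

assert lower_bound <= independent_case <= upper_bound
print(f"P(win AND rain) must lie in [{lower_bound}, {upper_bound}]; "
      f"under independence it would be {independent_case}.")
```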
Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don’t understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses?
One way I can imagine is to assign all of them a probability inversely proportional to their Kolmogorov complexities. The good thing about Kolmogorov complexity is that a prior based on it can be made to satisfy the axioms of probability. But I have only seen it defined for strings and such. I don’t know how to define Kolmogorov complexity of complicated things like hypotheses. Also, even if there is a way to define it, I can’t completely convince myself that it gives a correct prior probability.
I just wanted to note that it is actually possible to do that, provided that the questions are asked in order (not simultaneously). That is, I might logically think that the answer to (1) and (2) is true with 50% probability after I’m asked each question. Then, when I’m asked (3), I might logically deduce that (3) is true with 50% probability — however, this only means that after I’m asked (3), the very fact that I was asked (3) caused me to raise my confidence that (1) and (2) are true. It’s a fine point that seems easy to miss.
On a somewhat related point, I’ve looked at the entire discussion and it seems to me the original question is ill-posed, in the sense that the question, with high probability, doesn’t mean what the asker thinks it means.
Take “For example, let’s say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.” The question is intended to prevent you from having any prior information about its subject.
However, what it means is just that before you are asked the question, you don’t have any information about it. (And I’m not even very sure about that.) But once you are asked the question, you received a huge amount of information: The very fact that you received that question is extremely improbable (in the class of “what could have happened instead”). Also note that it is vastly more improbable than, say, being asked by somebody on the street whether you think his son will get an A today.
“Something extremely improbable happens” means “you just received information”; the more improbable it was the more information you received (though I think there are some logs in that relationship).
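Those “logs” are just the usual self-information formula; a minimal sketch, with made-up probabilities:

```python
import math

# Minimal sketch of the "logs in that relationship": observing an event of
# probability p carries -log2(p) bits of information, so the more improbable
# the event, the more information you receive. The probabilities are made up.

for p in (0.5, 0.01, 1e-6):
    bits = -math.log2(p)
    print(f"P(event) = {p:g} -> {bits:.1f} bits of information")
```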
So, the fact that you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli brings a lot of information: space travel is possible within one’s lifetime, aliens exist, aliens have that travel technology, aliens bring people to their planets, aliens can pose a question to somebody just brought to their planet, they live on at least one planet, they have something they translate as “game” in English, they have names for planets, individuals, games and teams, and they translate those names in some particular English-pronounceable (or -writable, depending on how the question was asked) form.
More subtly, you think that a Sillpruk came to you and asked you a question; this implies you have good reason to think that the events should be interpreted as such (rather than just, say, a block of matter arriving in front of you and making some sounds). The class of events “aliens take you to their planets and ask you a question” is vastly larger than “the same, but you realize it”.
tl;dr: I guess what I mean is that “what priors you use for a question you have no idea about” is ill formed, because it’s pretty much logically impossible that you have no relevant information.
Definitely agree on the first point (although, to be careful, the probabilities I assign to the three events could be epsilons apart if I were convinced of a bidirectional implication between 1 and 2).
On the second part: Yep, you need to start with some prior probabilities, and if you don’t have any already, the ignorance prior of 2^{-n} for each hypothesis that can be written (in some fixed binary language) as a program of length n is the way to go. (This is basically what you described, and carrying forward from that point is called Solomonoff induction.)
In practice, it’s not possible to estimate hypothesis complexity with much precision, but it doesn’t take all that much precision to judge in cases like Thor vs. Maxwell’s Equations; and anyway, as long as your priors aren’t too ridiculously off, actually updating on evidence will correct them soon enough for most practical purposes.
ETA: Good to keep in mind: When (Not) To Use Probabilities
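A toy sketch of that 2^{-n} ignorance prior, using the length of a made-up description string as a crude stand-in for Kolmogorov complexity (which is uncomputable), in the spirit of the Thor vs. Maxwell’s Equations comparison:

```python
# Toy sketch of a 2^(-n) ignorance prior: weight each hypothesis by
# 2^(-description length in bits), then normalize. Eight bits per character of
# a made-up description string is a crude stand-in for true Kolmogorov
# complexity, which is uncomputable.

hypotheses = {
    "maxwell": "dF=0; d*F=J",                                            # short
    "thor": "an anthropomorphic agent who throws lightning when angry",  # long
}

def toy_weight(description: str) -> float:
    n_bits = 8 * len(description)
    return 2.0 ** (-n_bits)

weights = {name: toy_weight(desc) for name, desc in hypotheses.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"prior({name}) = {w / total:.3g}")
```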
But it is true that you are not on a red couch.
Negation is a one-to-one map between true and false propositions.
Since you can understand the alien’s question except for the nouns, presumably you’d be able to tell if there was a “not” in there?
Yes, you have made a convincing argument, I think, that a proposition which does not involve negation, as in the alien’s question, is more likely to be false than true. (At least, if you have a prior for being presented with questions that penalize complexity. The sizes of the spaces of true and false propositions, however, are the same countable infinity.) (Sometimes I see claims in isolation, and so miss that a slightly modified claim is more correct and still supports the same larger claim.)
ETA: We should also note the absence of any disjunctions. It is also true that Alicorn is sitting on a blue couch or a red couch. (Well, maybe not, some time has passed since she reported sitting on a blue couch. But that’s not the point.)
This effect may be screened off if, for example, you have a prior that the aliens first choose whether the answer should be yes or no, and then choose a question to match the answer.
That the aliens chose to translate their word as the English ‘game’ says, I think, a lot.
“Game” is one of the most notorious words in the language for the virtual impossibility of providing a unified definition absent counterexamples.
“A game is a voluntary attempt to overcome unnecessary obstacles.”
This is, perhaps, a necessary condition but not a sufficient one. It is true of almost all hobbies, but I wouldn’t classify hobbies such as computer programming or learning to play the piano as games.
I wouldn’t class most hobbies as attempts to overcome unnecessary obstacles either—certainly not playing a musical instrument, where the difficulties are all necessary ones. I might count bird-watching, of the sort where the twitcher’s goal is to get as many “ticks” (sightings of different species) as possible, as falling within the definition, but for that very reason I’d regard it as being a game.
One could argue that compulsory games at school are a counterexample to the “voluntary” part. On the other hand, Láadan has a word “rashida”: “a non-game, a cruel “playing” that is a game only for the dominant “player” with the power to force others to participate [ra=non- + shida=game]”. In the light of that concept, perhaps these are not really games for the children forced to participate.
But whatever nits one can pick in Bernard Suits’ definition, I still think it makes a pretty good counter to Wittgenstein’s claims about the concept.
Oh, right. Reading “unnecessary” as “artificial”, the definition is indeed as good as they come. My first interpretation was somewhat different and, in retrospect, not very coherent.
A family resemblance is still a resemblance.
“The sense of a sentence—one would like to say—may, of course, leave this or that open, but the sentence must nevertheless have a definite sense. An indefinite sense—that would really not be a sense at all. - This is like: An indefinite boundary is not really a boundary at all. Here one thinks perhaps: if I say ‘I have locked the man up fast in the room—there is only one door left open’ - then I simply haven’t locked him in at all; his being locked in is a sham. One would be inclined to say here: ‘You haven’t done anything at all’. An enclosure with a hole in it is as good as none. - But is that true?”

Could you include a source for this quote, please?
Googling it would’ve told you that it’s from Wittgenstein’s Philosophical Investigations.
Simply Googling it would not have signaled any disappointment radical_negative_one may have had that you did not include a citation (preferably with a relevant link), as is normal when making a quote like that.
/me bats the social signal into JGWeissman’s court
Omitting the citation, which wasn’t really needed, sends the message that I don’t wish to stand on Wittgenstein’s authority but think the sentiment stands on its own.
Then use your own words. Wittgenstein’s are barely readable.
My words are barely readable? Did you mean Wittgenstein’s words?
Pardon me I meant Wittgenstein.
If it doesn’t stand on its own, you shouldn’t quote it at all—the purpose of the citation is to allow interested parties to investigate the original source, not to help you convince.
Voted up, but I would say the purpose is to do both, to help convince and help further investigation, and more, such as to give credit to the source. Citations benefit the reader, the quoter, and the source.
I definitely agree that willingness to forgo your own benefit as the quoter does not justify ignoring the benefits to the others involved.
You’re right, of course.
If he couldn’t even ‘investigate’ one Google search, then he’s not going to get a whole lot out of knowing it’s Wittgenstein’s PI.
Arguments from authority are inductively valid, much like ad hominems...
Argument screens off authority. And a Google search is inconvenient.
Please source your quotes. Thank you.
If you can’t see the difference between Wittgenstein making an argument about what our intuitions about meaning and precision say and hard technical scientific arguments—like your ‘Argument screens’ link—nor how knowing the quote is by Wittgenstein could distort one’s own introspection & thinking about the former argument, while it would not do so about the latter, then I will just have to accept my downvotes in silence because further dialogue is useless.
I voted up RobinZ’s comment for the link to Beware Trivial Inconveniences.
Since his polite attempt is not getting through to you, I will be more explicit:
You do not have sufficient status to get away with violating group norms regarding the citations of quotes. Rather than signaling that you are confident in your status, and have greater wisdom about the value of citations, it actually signals that you are less valuable to the group, in part as a result of your lack of loyalty, and that your behavior reflects poorly on the group. On net, this lowers your status.
Knock it off.
I am growing ever more suspicious of this “status” term.
I’d prefer to resort to (linguistic) pragmatics. RNO made a straightforward and polite request. Then gwern granted the request but planted a subtle barb at the same time (roughly “You should have Googled it”). That was rude. We can only speculate on the reasons for being rude (e.g. past exchanges with RNO). Instead of acknowledging the rudeness and apologizing gracefully gwern is defending the initial behaviour. Both the rudeness and the defensiveness run counter to this site’s norms. My prediction is further downvotes if this continues (and apparently gwern agrees!).
“Status”, in this case, seems once again to be a non-load-bearing term.
I don’t think this is fair as a criticism of my analysis, as the details I gave indicate how I cash out “status” at a lower level of abstraction. The explanatory power of the term in this case is that people have an expectation that with enough status, they can get away with violating group norms (and demonstrating this augments status), and Gwern seems to (falsely) think he(?) has sufficient status to get away with violating this norm. (Really, this norm is important to this group and I don’t believe anyone has enough status to get away with violating it here.)
I realize we have had some confused posts about status lately, including the one your linked comment responds to, but that doesn’t make it wrong to use the word to refer to a summary of a person’s value and power within a group, and other group members’ perceptions of these attributes.
Also note, I did not write that comment to explain to others what is going on, but to get Gwern to conform to what I believe is an important group norm.
Mind you, I have no particular interest in this minor dispute about sourcing quotes. By and large I prefer to see quotes with a source.
I am (perhaps unwisely) acting on my frustration at one more use of the term “status” that has increased my confusion, while my requests for clarification have gone without response, and thus opportunistically linking an unrelated thread to those requests.
I do not have privileged access to gwern’s expectations, I can only infer them in very roundabout ways from gwern’s behaviour. I would regard with extreme caution an “explanation” that referred to someone’s mental state, without at least a report by that person of their mental state. The short-hand I use for this mistake is “mind-reading”.
Maybe if gwern had come out and said “I have 1337(+) karma, punk. I can quote without sourcing if I want to”, I’d be more sympathetic to your use of the term “status”. But gwern didn’t, and in fact gave a reason for not sourcing, so he would be justified in saying something like “Argument screens off status” in response to your claims.
You could just as well have told gwern, “This community has a norm of sourcing quotes. I note your argument that this norm would detract from the value of the quotes by appearing to appeal to authority. I reject the argument, and additionally I think you’re being a jerk.”
(+) Not numerically correct, but close enough that I couldn’t resist the pun.
I think gwern just might be more subtle than a paperclip maximizer.
I did reject the argument, or at least agreed with RobinZ in rejecting the argument. I made the point about “This community has a norm of sourcing quotes.” I won’t just bluntly say “I think you’re being a jerk.” as “jerk” is an inflammatory uninformative term.
It seems to me like you are objecting to my practical use of a theory because you don’t understand it, and because other people have written low quality posts about it (which I criticized). Maybe you should go read a high quality post about it.
I suspect my analysis differs from yours—for one, I read in RNO’s request a similar barb: roughly, “You should have included a source when you posted a quote.” JGW’s initial comment noted the presence of this—RNO’s—barb, whereupon gwern acknowledged the existence of a disagreement by arguing explicitly for his position. In fact, the first post in his argument is at positive karma—I suspect because it is a valid point, despite being in opposition to the norm.
I would not be so hasty to dismiss JGW’s analysis.
The status part seems to come from an assumption that Eliezer or someone else could have gotten away with it. That assumption may be wrong. I think your interpretation is better.
There are rational reasons to hesitate more before harshing the behaviour of people you trust more—you are more likely to be mistaken.
Hm. For actual aliens I don’t think even that’s justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.
I was conditioning on the probability that the question is in fact meaningful to the aliens (more like “Will the Red Sox win the spelling bee?” than like “Does the present king of France’s beard undertake differential diagnosis of the psychiatric maladies of silk orchids with the help of a burrowing hybrid car?”). If you assume they’re just stringing words together, then there’s not obviously a proposition you can even assign probability to.
Hey, maybe they’re Zen aliens who always greet strangers by asking meaningless questions.
More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.
For #2, I don’t see how you could ever be completely sure the other was rationalist or Bayesian, short of getting their source code; they could always have one irrational belief hiding somewhere far from all the questions you can think up.
In practice, though, I think I could easily decide within 10 questions whether a given (honest) answerer is in the “aspiring rationalist” cluster and/or the “Bayesian” cluster, and get the vast majority of cases right. People cluster themselves pretty well on many questions.
For #2, can I just use an extended preface that describes a population, an infection rate for some disease, and a test with false positive and false negative rates, and see if the person gives me the right answer?
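For instance, a minimal sketch of such a test question, with made-up numbers, where the “right answer” is just Bayes’ theorem applied to the base rate:

```python
# Minimal sketch of the kind of test question described above, with made-up
# numbers: 1% infection rate, a 5% false positive rate and a 10% false negative
# rate. A well-calibrated answerer should report P(infected | positive test).

p_infected = 0.01
p_pos_given_infected = 1 - 0.10  # sensitivity = 1 - false negative rate
p_pos_given_healthy = 0.05       # false positive rate

p_positive = (p_pos_given_infected * p_infected
              + p_pos_given_healthy * (1 - p_infected))
p_infected_given_positive = p_pos_given_infected * p_infected / p_positive

print(f"P(infected | positive) = {p_infected_given_positive:.3f}")  # about 0.154
```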
For number 1 you should weight “no” more highly. For the answer to be “yes” Strigli must be a team, a Doldun team, and it must win. Sure, maybe all teams win, but it is possible that all teams could lose, they could tie, or the game might be cancelled, so a “no” is significantly more likely to be right.
50% seems wrong to me.
1: If you have no information to support either alternative more than the other, you should assign them both equal credence. So, fifty-fifty. Note that yes-no questions are the easiest possible case, as you have exactly two options. Things get much trickier once it’s not obvious which possibilities should be treated as the equally plausible alternatives.
Though I would say that in this situation, the most rational approach would be to tell the Sillpruk, “I’m sorry, I’m not from around here. Before I answer, does this planet have a custom of killing people who give the wrong answer to this question, or is there anything else I should be aware of before replying?”
2: This depends a lot on how we define a rationalist and a Bayesian. A question like “is the Bible literally true” could reveal a lot of irrational people, but I’m not certain of the number of questions that’d need to be asked before we could know for sure that they were irrational. (Well, since 1 and 0 aren’t probabilities, the strict answer to this question is “it can’t be done”, but I’m assuming you mean “before we know with such a certainty that in practice we can say it’s for sure”.)
Yes, I should be more specific about 2.
So let’s say the following are the first three questions you ask and their answers:
Q1. Do you think A is true? A. Yes.
Q2. Do you think A=>B is true? A. Yes.
Q3. Do you think B is true? A. No.
At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question?
Q4. Do you believe in Modus Ponens?
or in other words,
Q4. Do you think that if A and A=>B are both true then B should also be true?
If you think you should ask this question before deciding whether the person is rational or not, then why stop here? You should continue and ask him the following question as well.
Q5. Do you think that if you believe in Modus Ponens and if you also think that A and A=>B are true, then you should also believe that B is true as well?
And I can go on and on...
So the point is, if you think asking all these questions is necessary to decide whether the person is rational or not, then in effect any given person can have any arbitrary set of beliefs and he can still claim to be rational by adding a few extra beliefs to his belief system that say the n^th level of “Modus Ponens is wrong” for some suitably chosen n.
I think that belief in modus ponens is a part of the definition of “rational”, at least practically. So Q1 is enough. However, there are not many tortoises among the general public, so this type of question probably isn’t very helpful.