[comment deleted]
Well, yes. But ought I believe that a yes/no question I have no idea about is as likely as its negation to have been asked? (Especially if it’s being asked implicitly by a situation, rather than explicitly by a human?)
[comment deleted]
Ratio of true statements to false ones: low.
Probability TraderJoe wants to make TheOtherDave look foolish: moderate, slightly on the higher end.
Ratio of the probability that giving an obviously false statement an answer of relatively high probability would make TheOtherDave look foolish to the probability that giving an obviously true statement a relatively low probability would make TheOtherDave look foolish: moderately high.
Probability that the statement is neither true nor false: low.
Conclusion: “أنا من أمريكا” is most likely false.
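For concreteness, considerations like the ones listed above can be combined as a crude odds calculation. This is only an illustrative sketch; every number below is an invented stand-in for a qualitative estimate ("low", "moderately high"), not anyone's actual probability:

```python
# Crude odds combination for "the statement is true".
# All numbers are invented placeholders for qualitative estimates.
prior_odds = 0.5  # "ratio of true statements to false ones: low"

likelihood_ratios = [
    0.6,  # false statements are more useful for making someone look foolish
    0.9,  # small chance the statement is neither true nor false
]

odds = prior_odds
for ratio in likelihood_ratios:
    odds *= ratio

p_true = odds / (1 + odds)  # convert odds back to a probability
```

With these made-up inputs the combined probability comes out well under 50%, matching a "most likely false" conclusion.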
That’s interesting.
I considered a proposition like this, decided the ratio was roughly even, concluded that TraderJoe might therefore attempt to predict my answer (and choose their question so I’d be wrong), decided they’d have no reliable basis on which to do so and would know that, and ultimately discarded the whole line of reasoning.
I figured that it would be more embarrassing to say something like “It is true that I am a sparkly unicorn” than to say “It is false that an apple is a fruit”. Falsehoods are much more malleable, largely as an effect of the fact that there are so many more of them than truths, also because they don’t have to be consistent. Since falsehoods are more malleable it seems that they’d be more likely to be ones used in an attempt to insult someone.
My heuristic in situations with recursive mutual modeling is to assume that everyone else will discard whatever line of reasoning is recursive. I then go one layer deeper into the recursion than whatever the default assumption is. It works well.
Sadly, I appear to lack your dizzying intellect.
I used to play a lot of Rock, Paper, Scissors; I’m pretty much a pro.
It is possible that you may have missed TheOtherDave’s allusion there.
The phrase sounded familiar, but I don’t recognize where it’s from and a Google search for “lack your dizzying intellect” yielded no results.
Wait. Found it. Princess Bride? Is it in the book too, or just the movie?
Read the book years ago, but can’t recall if that phrase is in there. In any case, yes, that’s what I was referring to… it’s my favorite fictional portrayal of recursive mutual modeling.
The one I always think of is Poe’s “The Purloined Letter”:

“But he perpetually errs by being too deep or too shallow, for the matter in hand; and many a schoolboy is a better reasoner than he. I knew one about eight years of age, whose success at guessing in the game of ‘even and odd’ attracted universal admiration. This game is simple, and is played with marbles. One player holds in his hand a number of these toys, and demands of another whether that number is even or odd. If the guess is right, the guesser wins one; if wrong, he loses one. The boy to whom I allude won all the marbles of the school. Of course he had some principle of guessing; and this lay in mere observation and admeasurement of the astuteness of his opponents. For example, an arrant simpleton is his opponent, and, holding up his closed hand, asks, ‘are they even or odd?’ Our schoolboy replies, ‘odd,’ and loses; but upon the second trial he wins, for he then says to himself, ‘the simpleton had them even upon the first trial, and his amount of cunning is just sufficient to make him have them odd upon the second; I will therefore guess odd;’ --he guesses odd, and wins. Now, with a simpleton a degree above the first, he would have reasoned thus: ‘This fellow finds that in the first instance I guessed odd, and, in the second, he will propose to himself, upon the first impulse, a simple variation from even to odd, as did the first simpleton; but then a second thought will suggest that this is too simple a variation, and finally he will decide upon putting it even as before. I will therefore guess even;’ --he guesses even, and wins. Now this mode of reasoning in the schoolboy, whom his fellows termed ‘lucky,’ --what, in its last analysis, is it?”

“It is merely,” I said, “an identification of the reasoner’s intellect with that of his opponent.”
I wonder if there is an older appearance of this trope or if this is the Ur Example? (*checks TvTropes). The only older one listed is from the Romance of the Three Kingdoms, so Poe’s might be the Ur Example in Western culture.
I’m not sure what this phrase means.
It means making an accurate mental simulation of your opponent’s mental process to predict to which level they will iterate.
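That simulation idea can be sketched as level-k reasoning in Poe's even-and-odd game. This is a toy model under the assumption that each level simply best-responds to a simulation of the level below; the function names are mine, not anything from the thread:

```python
def other(parity: str) -> str:
    return "odd" if parity == "even" else "even"

def holder_choice(level: int) -> str:
    """What a level-`level` holder of the marbles picks."""
    if level == 0:
        return "even"  # arbitrary default for a non-strategic holder
    # A level-k holder simulates a level-(k-1) guesser and picks the opposite.
    return other(best_guess(level - 1))

def best_guess(modeled_level: int) -> str:
    """Guess of someone who simulates the holder at `modeled_level`."""
    return holder_choice(modeled_level)

# Modeling your opponent at their true depth wins; stopping a level
# short (being "too shallow", as Poe puts it) loses.
win = best_guess(2) == holder_choice(2)
lose = best_guess(1) == holder_choice(2)
```

The whole game reduces to guessing the opponent's iteration depth correctly, which is why "one layer deeper than the default" is the interesting heuristic.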
Here it is—the classic “battle of wits” scene from The Princess Bride. (This clip cuts off before the explanation of the trick used by the victor.)
Both. [EDITED: oops, no, misread you. Definitely in the movie; haven’t read the book.]
Preempt: None of you have any way of knowing whether this is a lie.
The parent of this comment (yes, this one) is a lie.
The parent of this comment (yes, this one) is a lie.
The parent of this comment is true. On my honor as a rationalist.
I would like people to try to solve the puzzle.
This comment (yes, this one) is true.
I think the solution is that you have no honor as a rationalist.
The solution I had in mind is:
“None of you have any way of knowing whether this is a lie” is false because although you can’t definitively prove what my process is or isn’t you’ll still have access to information that allows you to assess and evaluate whether I was probably telling the truth.
Although “none of you have any way of knowing whether this is a lie” is false, and thus my first instance of “the parent of this comment is a lie” seems justified, in reality the first instance of that statement is not true: although “none of you have any way of knowing whether or not this is true” is false, it does not follow that it was a lie. In actuality, I thought that it was true at the time that I posted it, and only realized afterwards that it was false. There was no intent to deceive.
Therefore the grandparent of this comment is true, the great-grandparent is true, the great-great-grandparent is false, and the great-great-great-grandparent is inaccurate.
This whole line of riddling occurred because:
I wanted to confuse people, so they failed to properly evaluate the way I model people.
I wanted to distract people, so they chose not to bother properly evaluating the way I model people.
I wanted to amuse myself by pretending that I was the kind of person who cared about the above two.
I was wondering whether anyone would call me out on any of those.
I’m severely tempted to just continue making replies to myself and see how far down the rabbit hole I can get.
I laughed. The solution involves the relativity of wrong, if that helps.
PBEERPG.
I assume you mean without looking it up.
My answer is roughly the same as TimS’s… it mostly depends on “Would TraderJoe pick a true statement in this context or a false one?” Which in turn mostly depends on “Would a randomly selected LWer pick a true statement in this context or a false one?” since I don’t know much about you as a distinct individual.
I seem to have a prior probability somewhat above 50% for “true”, though thinking about it I’m not sure why exactly that is.
Looking it up, it amuses me to discover that I’m still not sure if it’s true.
This is a perfect situation for a poll.
How probable is it that TraderJoe’s statement, in the parent comment, is true?
[pollid:116]
I voted with what I thought my previous estimate was before I’d checked via rot13.
[comment deleted]
It seems like my guess should be based on how likely I think it is that you are trying to trick me in some sense. I assume you didn’t pick a sentence at random.
[comment deleted]
[comment deleted]
The transliteration does, but the actual Arabic means “V’z Sebz Nzrevpn”.
So in fact TraderJoe’s prediction of 0.5 was a simple average over the two statements given, and everyone else giving a prediction failed to take into account that the answer could be neither “true” nor “false”.
Not according to Google Translate. Incidentally, that string is particularly easy to decipher by inspection.
[comment deleted]
Yeah, that’s an interesting discrepancy.
All questions that you encounter will be asked by a human. I get what you mean, though: if other humans are asking a human a question, then distortions are probably magnified.
Some questions are implicitly raised by a situation. “Is this coffee cup capable of holding coffee without spilling it?”, for example. When I pour coffee into the cup, I am implicitly expressing more than 50% confidence that the answer is “yes”.
What I’m saying is that what’s implicit is a fact about you, not the situation, and the way the question is formed is partially determined by you. I was vague in saying so, however.
I agree that the way the question is formed is partially determined by me. I agree that there’s a relevant implicit fact about me. I disagree that there’s no relevant implicit fact about the situation.
Nothing can be implicit without interpretation; sometimes the apparent implications of a situation are just misguided notions that we have inside our heads. You’re going to have a natural tendency to form your questions in certain ways, and some of these ways will lead you to ask nonsensical questions, such as questions with contradictory expectations.
I agree that the apparent implications of a situation are notions in our heads, and that sometimes those notions are nonsensical and/or contradictory and/or misguided.
I disagree with this. The reason you shouldn’t assign 50% to the proposition “I will win the lottery” is that you have some understanding of the odds behind the lottery. If a yes/no question which I have no idea about is asked, I am 50% confident that the answer is yes. The reason for this is point 2: provided I think a question and its negation are equally likely to have been asked, there is a 50% chance that the answer to the question you have asked is yes.

That’s only reasonable if some agent is trying to maximize the information content of your answer. The vast majority of possible statements of a given length are false.
Sure, but how often do you see each of the following sentences in some kind of logic discussion:

2+2=3
2+2=4
2+2=5
2+2=6
2+2=7
I have seen the first and third from time to time, the second more frequently than any other, and I virtually never see 2+2 = n for n > 5. Not all statements are shown with equal frequency. My guess is that even when “2+2 = x” is written as a true/false logic proposition rather than as an equation to solve, x = 4 is more common than all other values put together.
That’s surely an artifact of human languages, and even then it would depend on whether the statement is mostly structured using “or” or using “and”.
There’s a 1-to-1 mapping between true and false statements (just add ‘the following is false:’ in front of each statement to get the opposite). In a language where ‘the following is false’ is assumed, the reverse would be actual.
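As a sketch of that mapping: implement the prefix as a toggle and it becomes its own inverse, so every statement is paired with exactly one opposite and the counts on each side match. (Using a toggle rather than naive prefixing is what keeps the map 1-to-1; that detail is my assumption, not the parent's.)

```python
PREFIX = "the following is false: "

def flip(statement: str) -> str:
    """Pair each statement with its negation by toggling a negation prefix."""
    if statement.startswith(PREFIX):
        return statement[len(PREFIX):]
    return PREFIX + statement

# flip is an involution: applying it twice returns the original statement,
# so it is a bijection pairing statements with their negations.
s = "the sky is blue"
flipped = flip(s)
restored = flip(flipped)
```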
I’m not sure your statement is true.
Consider:
The sky is blue.
The sky is red.
The sky is yellow.
The sky is pink.
The sky is not blue. The sky is not red. The sky is not yellow. The sky is not pink.
Anyway, it depends on what you mean by “statement”. The vast majority of all possible strings are ungrammatical, the vast majority of all grammatical sentences are meaningless, and most of the rest refer to different propositions if uttered in different contexts (“the sky is ochre” refers to a true proposition if uttered on Mars, or when talking about a picture taken on Mars).
The typical mode of communication is an attempt to convey information by making true statements. One only brings up false statements in much rarer circumstances, such as when one entity’s information contradicts another’s. Thus, an optimized language is one where true statements are high in information.
Otherwise, to communicate efficiently, you’d have to go around making a bunch of statements with an extraneous “not” beyond the language’s default, which is weird.
This has the potential to be trans-human, I think.
But whether a statement is true or false depends on things other than the language itself. (The sentence “there were no aces or kings in the flop” is the same length whether or not there were any aces or kings in the flop.) The typical mode of communication is an attempt to convey information by making true but non-tautological statements (for certain values of “typical”—actually implicatures are often at least as important as truth conditions). So, how would such a mechanism work?
But, on the other hand:
The sky is not blue. The sky is not red. The sky is not yellow. The sky is not pink.
You need to be more specific about what exactly it is I said that you’re disputing—I am not sure what it is that I must ‘consider’ about these statements.
On further consideration, I take it back. I was trying to make the point that “Sky not blue” != “Sky is pink”. Which is true, but does not counter your point that (P or !P) must be true by definition.
It is the case that the vast majority of grammatical statements of a given length are false. But until we have a formal way of saying that statements like “The Sky is Blue” or “The Sky is Pink” are more fundamental than statements like “The Sky is Not Blue” or “The Sky is Not Pink,” you must be correct that this is an artifact of the language used to express the ideas. For example, a language where negation was the default and additional length was needed to assert truth would have a different proportion of true and false statements for any given sentence length.
Also, lots of downvotes in this comment path (on both sides of the discussion). Any sense of why?
It’s true of any language optimized for conveying information. The information content of a statement is the log of the reciprocal of its prior probability, and therefore more or less proportional to how many other statements of the same form would be false.
In your counterexample, the information content of a statement in the basic form decreases with length.
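The relationship described above is Shannon's surprisal: information content in bits is the log of the reciprocal of a statement's prior probability, so rarer statements carry more bits. A minimal sketch:

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon information content in bits: log2 of 1/p."""
    return math.log2(1.0 / p)

# A 50/50 statement carries 1 bit; a 1-in-100 statement carries more.
common = surprisal_bits(0.5)
rare = surprisal_bits(0.01)
```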
The reason you shouldn’t assign 50% to the proposition “I will win the lottery” is because you have some understanding of the odds behind the lottery.

Yup. Similarly you don’t assign 50% to the proposition “X will change”, where X is a relatively long-lasting feature of the world around you—long-lasting enough to have been noticed as such in the first place and given rise to the hypothesis that it will change. (In the Le Pen prediction, the important word is “cease”, not “Le Pen” or “election”.)
ETA: what I’m getting at is that nobody gives a damn about the class of question “yes/no question which I have no idea about”. The subthread about these questions is a red herring. When a question comes up about “world events”, you have some idea of the odds for change vs. status quo based on the general category of things the question is about. For instance, many GJP questions are of the form “Will the Prime Minister of Country X resign or otherwise vacate that position within the next six months?”. Even if you are not familiar with the politics of Country X, you have some grounds for thinking that the “No” side of the question is more likely than the “Yes” side—that is, for having an overall status quo bias on this type of question.
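The status-quo point amounts to using the reference class's base rate as a prior instead of a flat 50%. A sketch with invented counts (these are not GJP's actual numbers):

```python
# Hypothetical track record for past "Will the PM vacate within 6 months?"
# questions; the counts are invented for illustration.
resolved_yes = 12
resolved_no = 48

# Laplace-smoothed base rate, used as the prior for a new question
# in the same class before any country-specific evidence arrives.
prior_yes = (resolved_yes + 1) / (resolved_yes + resolved_no + 2)
```

Starting from such a base rate and then updating on country-specific evidence is exactly the "status quo bias" the parent recommends, made quantitative.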