I’d guess that getting this question “correct” almost requires having been trained to parse the problem in a certain formal way — namely, purely in terms of propositional logic.
Otherwise, a perfectly reasonable parsing of the problem would be equivalent to the following:
Before you stands a card-dealing robot, which has just been programmed to deal a hand of cards. Exactly one of the following statements is true of the robot’s hand-dealing algorithm:
The algorithm chooses from among only those hands that contain either a king or an ace (or both).
The algorithm chooses from among only those hands that contain either a queen or an ace (or both).
The robot now deals a hand. Which is more probable: the hand contains a King or the hand contains an Ace?
On this reading, Ace is most probable.
Indeed, this “algorithmic” reading seems like the more natural one if you’re used to trying to model the world as running according to some algorithm — that is, if, for you, “learning about the world” means “learning more about the algorithm that runs the world”.
The propositional-logic reading (the one endorsed by the OP) might be more natural if, for you, “learning about the world” means “learning more about the complicated conjunction-and-disjunction of propositions that precisely carves out the actual world from among the possible worlds.”
I’d guess that getting this question “correct” almost requires having been trained to parse the problem in a certain formal way — namely, purely in terms of propositional logic.
Indeed.
The propositional-logic reading (the one endorsed by the OP) might be more natural if, for you, “learning about the world” means “learning more about the complicated conjunction-and-disjunction of propositions that precisely carves out the actual world from among the possible worlds.”
In fact, when I got to this part, I actually skipped the rest of the article, thinking, “What sort of halfway competent cognitive scientist actually proposes that we represent the world using first-order symbolic logic? Where’s the statistical content?”
Hopefully I’m not being too harsh there, but I think we know enough about learning in the abstract and its relation to actually-existing human cognition to ditch purely formal theories in favor of expecting that any good theory of cognition should be able to show statistical behavior.
I’d guess that getting this question “correct” almost requires having been trained to parse the problem in a certain formal way — namely, purely in terms of propositional logic.
To get the question correct, you just need to consider the falsity of the premises. You don’t necessarily have to parse the problem in a formal way, although that would help.
On this reading, Ace is most probable.
Ace is not more probable. It is impossible to have an ace in the dealt hand, due to the requirement that only one of the premises is true. The basic idea is that one of the premises must be false, which means that an ace is impossible: if an ace were in the dealt hand, then both premises would be true, which violates the requirement (“Exactly one of these statements is true”). I have explained this further in this post.
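This impossibility can be checked mechanically. Here is a minimal sketch (a toy model of my own, reducing a hand to which of king, ace, and queen it contains) that enumerates the possibilities consistent with exactly one premise being true:

```python
from itertools import product

# Toy model: represent a hand only by which of {king, ace, queen} it contains.
consistent = []
for king, ace, queen in product([False, True], repeat=3):
    premise1 = king or ace    # "the hand contains a king or an ace (or both)"
    premise2 = queen or ace   # "the hand contains a queen or an ace (or both)"
    if premise1 != premise2:  # exactly one of the two premises is true
        consistent.append((king, ace, queen))

print(any(ace for _, ace, _ in consistent))    # False — an ace is impossible
print(any(king for king, _, _ in consistent))  # True — a king is possible
```

An ace would make both premises true at once, so it appears in no consistent possibility, while a king survives in the branch where premise 1 is true and premise 2 is false.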
I think the problem here is that you’re talking to people who have been trained to think in terms of probabilities and probability trees, and furthermore, asking “what is more likely” automatically primes people to think in terms of a probability tree.
The way I originally thought about this was:
Suppose premise 1 is true. Then two possible combinations out of three might contain a king, so 2⁄3 probability for a king, and since I guess we’re supposed to assume that premise 1 has a 50% probability, then that means a king has a 2⁄6 = 1⁄3 probability overall. By the same logic, ace has a 2⁄3 probability in this branch, for a 1⁄3 probability overall.
Now suppose that premise 2 is true. By the same logic as above, this branch contributes an additional 1⁄3 to the ace’s probability mass. But this branch has no king, so the king acquires no probability mass.
Thus the chance of an ace is 2⁄3 and the chance of a king is 1⁄3.
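Written out with exact fractions, that probability tree looks like this (a sketch of my reading, with the assumed 50% probability for each premise):

```python
from fractions import Fraction

half = Fraction(1, 2)  # assumed probability that each premise is the true one

# Branch 1 (premise 1 true): the hand is one of {king only, ace only, both},
# each taken to be equally likely; two of the three contain a king, two an ace.
p_king_given_1, p_ace_given_1 = Fraction(2, 3), Fraction(2, 3)
# Branch 2 (premise 2 true): the hand is one of {queen only, ace only, both};
# none of these contains a king.
p_king_given_2, p_ace_given_2 = Fraction(0), Fraction(2, 3)

p_king = half * p_king_given_1 + half * p_king_given_2
p_ace = half * p_ace_given_1 + half * p_ace_given_2
print(p_king, p_ace)  # 1/3 2/3
```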
In other words, I interpreted the “only one of the following premises is true” as “each of these two premises has a 50% probability”, to a large extent because the question of likeliness primed me to think in terms of probability trees, not logical possibilities.
Arguably, more careful thought would have suggested that possibly I shouldn’t think of this as a probability tree, since you never specified the relative probabilities of the premises, and giving them some relative probability was necessary for building the probability tree. On the other hand, in informal probability puzzles, it’s often common to assume that if we’re picking one option out of a set of N options, then each option has a probability of 1/N unless otherwise stated. Thus, this wording is ambiguous.
In one sense, my interpreting the problem in these terms could be taken to support the claims of model theory—after all, I was focusing on only one possible model at a time, and failed to properly consider their conjunction. But on the other hand, it’s also known that people tend to interpret things in the frameworks they’ve been taught, and to use the context to guide their choice of the appropriate framework in the case of ambiguous wording. Here the context was the use of the word “likely”, guiding the choice towards the probability-tree framework. So I would claim that this example alone isn’t sufficient to distinguish whether a person reading it gives the incorrect answer because of the predictions of model theory alone, or because they misinterpreted the intent of the wording.
I updated the first example to one that is similar to the one above by Tyrrell_McAllister. Can you please let me know if it solves the issues you had with the original example.
That does look better! Though since I can’t look at it with fresh eyes, I can’t say how I’d interpret it if I were to see it for the first time now.
Ace is more probable in the scenario that I described.
Of course, as you say, Ace is impossible in the scenario that you described (under its intended reading). The scenario that I described is a different one, one in which Ace is most probable. Nonetheless, I expect that someone not trained to do otherwise would likely misinterpret your original scenario as equivalent to mine. Thus, their wrong answer would, in that sense, be the right answer to the wrong question.
I’m sorry, I am not really understanding your point. I have read your scenario multiple times and I see that the ace is impossible in it. Can you do me a favour and read this post, and then let me know if you still believe that the ace is not impossible?
Of course, as you say, Ace is impossible in the scenario that you described (under its intended reading). The scenario that I described is a different one, one in which Ace is most probable.
I don’t see any difference between your scenario and the one I had originally. The ace is impossible in your scenario as well, because it is in both statements and you have the requirement that “Exactly one of the following statements is true”, which means that the other must be false. If an ace were in the hand, then both statements would be true, which cannot be the case, as exactly one of the statements can be true, not both.
Also, I rewrote the first example in the post so that it is similar to yours.
Last I checked, your edits haven’t changed which answer is correct in your scenario. As you’ve explained, the Ace is impossible given your set-up.
(By the way, I thought that the earliest version of your wording was perfectly adequate, provided that the reader was accustomed to puzzles given in a “propositional” form. Otherwise, I expect, the reader will naturally assume something like the “algorithmic” scenario that I’ve been describing.)
In my scenario, the information given is not about which propositions are true about the outcome, but rather about which algorithms are controlling the outcome.
To highlight the difference, let me flesh out my story.
Let K be the set of card-hands that contain at least one King, let A be the set of card-hands that contain at least one Ace, and let Q be the set of card-hands that contain at least one Queen.
I’m programming the card-dealing robot. I’ve prepared two different algorithms, either of which could be used by the robot:
Algorithm 1: Choose a hand uniformly at random from K ∪ A, and then deal that hand.
Algorithm 2: Choose a hand uniformly at random from Q ∪ A, and then deal that hand.
These are two different algorithms. If the robot is programmed with one of them, it cannot be programmed with the other. That is, the algorithms are mutually exclusive. Moreover, I am going to use one or the other of them. These two algorithms exhaust all of the possibilities.
In other words, of the two algorithm-descriptions above, exactly one of them will truthfully describe the robot’s actual algorithm.
I flip a coin to determine which algorithm will control the robot. After the coin flip, I program the robot accordingly, supply it with cards, and bring you to the table with the robot.
You know all of the above.
Now the robot deals you a hand, face down. Based on what you know, which is more probable: that the hand contains a King, or that the hand contains an Ace?
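For what it’s worth, a small enumeration bears this out. This is only a sketch — it reduces a hand to which of King, Ace, and Queen it contains, a simplification of real card-hands, but enough to show the ordering:

```python
from fractions import Fraction
from itertools import product

# Toy hands: (king, ace, queen) presence flags.
hands = list(product([False, True], repeat=3))

def p_feature(support, i):
    """P(feature i is present) for a hand drawn uniformly from `support`."""
    return Fraction(sum(1 for h in support if h[i]), len(support))

k_union_a = [h for h in hands if h[0] or h[1]]  # Algorithm 1's support, K ∪ A
q_union_a = [h for h in hands if h[2] or h[1]]  # Algorithm 2's support, Q ∪ A

half = Fraction(1, 2)  # the fair coin that picks the algorithm
p_king = half * p_feature(k_union_a, 0) + half * p_feature(q_union_a, 0)
p_ace = half * p_feature(k_union_a, 1) + half * p_feature(q_union_a, 1)
print(p_king, p_ace)  # 7/12 2/3 — the Ace is strictly more probable
```

The Ace benefits from being in both supports, while the King gets probability mass only under Algorithm 1.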
Thanks for this. I understand your point now.
I was misreading this:
In my scenario, the information given is not about which propositions are true about the outcome, but rather about which algorithms are controlling the outcome.