Can anyone give some examples of being underconfident, that happened as a result of overcorrecting for overconfidence?
I’ll give it a shot.
In poker you want to put more money in the pot with strong hands, and less money with weaker ones. However, your hand is secret information, and raising too much “polarizes your range,” giving your opponents the opportunity to outplay you. Finally, hands aren’t guaranteed—good hands can lose, and bad hands can win. So you need to bet big, but not too big, with your good hands.
So my buddy and I sit down at the table, and I get dealt a few strong hands in a row, but I raise too big with them—I’m overconfident—so I win a couple of small pots, and lose a big one. My buddy whispers to me, “dude...you’re overplaying your hands...” Ten minutes later I get dealt another good hand, and I consider his advice, but now I bet too small, underconfident, and miss out on value.
Replace the conversation with an internal monologue, and this is something you see all the time at the poker table. Once bitten, twice shy and all that.
My “revision” to my Amanda Knox post is one. I was right the first time.
How did you end up concluding that your original confidence level was correct after all?
I realized that there was a difference between the information I had and the information most commenters had; also that I had underestimated my Bayesian skills relative to the LW average, so that my panicked reaction to what I perceived as harsh criticism in a few of the comments was an overreaction brought about by insecurity.
I’m afraid I can’t accept your example at this point, because based on my priors and the information I have at hand (the probability of guilt that you gave was 10x lower than the next lowest estimate, it doesn’t look like you managed to convince anyone else to adopt your level of confidence during the discussions, absence of other evidence indicating that you have much better Bayesian skills than the LW average), I have to conclude that it’s much more likely that you were originally overconfident, and are now again.
Can you either show me that I’m wrong to make this conclusion based on the information I have, or give me some additional evidence to update on?
Interesting posts.
However, I disagree with your prior by a significant amount. The probability that [person in group] commits a murder within one year is small, but so is the probability that [person in group] is in contact with a victim. I would begin with the event [murder has happened], assign a high probability (like ~90%) to “the murderer knew the victim”, and then distribute that 90% among the people who knew her (and work with ratios afterwards). I am not familiar enough with the case to do that now, but Amanda would probably get something around 10%, before any evidence or (missing) motive is taken into account.
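The prior construction described above can be sketched in a few lines. All numbers here are hypothetical placeholders for illustration (the ~90% is the figure from the comment; the count of acquaintances is made up), not actual case data:

```python
# Sketch of the suggested prior: start from [murder has happened],
# put ~90% on "the murderer knew the victim", and split that mass
# among the people who knew her.
p_knew_victim = 0.90   # the ~90% assumed above
acquaintances = 9      # hypothetical number of people who knew the victim

# A uniform split gives each acquaintance ~10% prior probability,
# before any evidence or (missing) motive is taken into account.
prior_each = p_knew_victim / acquaintances
print(prior_each)  # 0.1
```

A uniform split is of course only a starting point; the comment suggests working with ratios afterwards, i.e. shifting mass between acquaintances as evidence comes in.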
A cursory search suggests 54% is more accurate (source, seventh bullet point). The source also links to a table that could give better priors.
I’m reading that as 54% plus some unknown but probably large proportion of the remainder: that includes a large percentage in which the victim’s relationship to the perpetrator is unknown, presumably due to lack of evidence. Your link gives this as 43.9%, but that doesn’t seem consistent with the table.
If you do look at the table, it says that 1,676 of 13,636 murders were known to be committed by strangers, or about 12%; the unknowns probably don’t break down into exactly the same categories (some relationships would be more difficult to establish than others), but I wouldn’t expect them to be wildly out of line with the rest of the numbers.
I agree with that interpretation. The 13,636 murders break down as:
1,676 by strangers
5,974 with some known relation
5,986 unknown
Based on the known cases only, I get 22% strangers. More than expected, but it might also depend on the region (US <--> Europe). Based on that table, we can do even better: we can exclude relation categories which are known not to apply to the specific case, and persons/relations which are known to be innocent (or non-existent). A bit tricky, as the table is “relation murderer → victim” and not the other direction, but it should be possible.