What change would you make that results in multiple rounds being required?
For example, if each player flips multiple coins and we then share probability estimates for “all coins heads” or “majority of coins heads”, or expectations for the number of heads, then in each case the first time I share my summary, I am sharing info that tells the other player exactly what information I have (and vice versa). So we will agree exactly from the second round onwards.
Each player flips 3 (or 10) coins of their own, giving them various possibilities for what the whole coin-space looks like. They present their 90% and 99% confidence intervals on there being more than 4 (or 9) heads. Repeat for round 2. (Players could also make statements based on what they think the state of play is, and try to get to the answer before the other person. So maybe even statements that are deliberately misleading?)
Not sure how easy it is for a human to tease out that information. Maybe a computer could solve it, but not so much a human...
“I flipped 10 coins. My confidence that there are at least 7 each of heads and tails is 90%; my confidence in a stricter count is 60%.”
Or a confidence for “at least 10 heads and 6 tails”, etc.
Here’s how that goes. I flip 3 coins. Say I get 2 heads. My probability estimate for “there are 4+ heads total” is now 4⁄8 (the probability that 2 or 3 of your coins are heads). For the full set of outcomes I can have, the options are: (0H, 0⁄8) (1H, 1⁄8) (2H, 4⁄8) (3H, 7⁄8). You perform the same reasoning. Then we each share our probability estimates with the other. Say that on the first round, we each share estimates of 50%. Then we can each deduce that the other saw exactly two heads, and on the second round (and forever after) both our estimates become 100%. For all possible outcomes, my first round probability tells you exactly how many heads I flipped, and vice versa; as soon as we share probabilities once, we both know the answer and agree.
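The posteriors above can be checked exactly. A minimal sketch (the function name and bucketing are my own, not from the discussion): given that I saw k heads among my 3 coins, the event “4+ heads total” requires at least 4−k heads among your 3 coins.

```python
from math import comb

def posterior(k, n=3, threshold=4):
    """P(threshold+ heads total | I saw k heads among my n coins)."""
    need = threshold - k  # heads still required from the other player's n coins
    favorable = sum(comb(n, j) for j in range(max(need, 0), n + 1))
    return favorable / 2 ** n

# One estimate per possible private outcome: 0/8, 1/8, 4/8, 7/8.
estimates = {k: posterior(k) for k in range(4)}
print(estimates)  # {0: 0.0, 1: 0.125, 2: 0.5, 3: 0.875}
```

Since all four values are distinct, the first announced probability uniquely identifies how many heads the announcer flipped, which is exactly why the game collapses after one round of sharing.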
(Also, you’re not using “confidence interval” in the correct manner. A confidence interval is defined over an expectation, not a posterior probability.)
I still don’t see any version of this that’s simpler than Finney’s that actually makes use of multiple rounds, and when I fix the math on Finney’s version it’s decidedly not simple.
My version of making this work would be to choose to share only limited information.
E.g. estimates of “33% heads”, or “>10% heads and >80% tails”, where the stated figures don’t sum to 100% and it’s harder to work out the “unknown space” in the middle. That is, limit the prediction set to partial information. Playing with multiple people should also make it more complicated, as would an optional number of coin flips (optional to the person flipping the coins, and unknown to the others within some parameters).
Based on simple coin flip; other games:
Several coins;
scissors paper rock (and then iterated)
I am sure there are more small games that have a similar “known” problem space.