This was interesting right up until “if GPT4 did the math right”. I believe GPT-4 is terrible with story problems like that, and with math like that story problem describes, unless it used a plugin and described its solution in a way you can include for checking. Bite the bullet and do the math. You can’t do Fermi estimates without a little math.
Ah dang, sorry, was not aware of this. Brute-force re-taught myself how to do this quickly: 10^5 / (5-2)! = 100,000 / 6 ≈ 16,666, so about 1 in 16,666. You are right, that was off by more than a factor of ten! Thanks for the tip.
Edit: agghh, I hate combinatorics. This seemed way off to me; I thought the original answer was correct. GPT had originally explained the math, but I didn’t understand the notation. After working on the problem again for a while, I had it explain its method to me in easier-to-understand language, and I’m actually pretty sure it was correct.
If you explained the math in a footnote or something, you’d probably get some math collaboration from readers. I don’t know how to do that one off the top of my head, but it’s interesting.
My sense is that GPT isn’t trustworthy on that, and it could be off by a lot, so it’s necessary to include your (or its) math if you’re not sure about it yourself.
I’m quite sure now: I came to the same conclusion independently of GPT after getting a hint from it, a hint I had already almost guessed myself.
A woman being in the top 10% of any characteristic is almost the same as rolling a 10-sided die and having it come up 1 (this was the actual problem I presented to GPT, and when it answered, it did so in what looked like a hybrid of code and text, so I’m quite sure it was computing this somehow).
What was clearly wrong with the first math was that if I roll just three dice, there would already be a (1/10)^3, or 1/1000, chance of getting all 1’s. And if I roll five dice, there would be a much higher, not lower, chance that I get at least three 1’s.
When rolling five dice, there are 10 different possible combinations of those five dice that have exactly three 1’s out of five. It’s a little bit more complicated than this, but almost all of the probability mass comes from rolling exactly three 1’s, since rolling four or five 1’s is far less likely. So you get very close (much closer than needed for a Fermi estimate) to the answer by simply multiplying the 10 possible combinations by the 1/1000 chance that each of those combinations will be all 1’s, for a total of about 1/100, or ~1%. Pretty basic once you see it; I would be surprised if this is incorrect.
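For anyone who wants to double-check the ballpark, here’s a quick brute-force simulation (just a sanity check in Python, not whatever GPT actually ran):

```python
# Roll five ten-sided dice many times and count how often at least
# three of them come up 1. Should land near the ~1% ballpark estimate.
import random

trials = 1_000_000
hits = sum(
    1
    for _ in range(trials)
    if [random.randint(1, 10) for _ in range(5)].count(1) >= 3
)
print(hits / trials)  # typically prints something around 0.0085 (~0.86%)
```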
I definitely like the estimation method. I’m totally convinced that the answer is higher than 1/1000, as you describe. The bit about multiplying that by the number of ways you can roll three 1’s on five dice sounds sketchy; I can’t tell for sure that it’s sensible, but it does sound intuitively right. There are five ways to roll four 1’s (simplifying it to ten-sided dice is a great move for my intuition), ten ways to roll three 1’s, and one way to roll five 1’s, so that’s 16. That would be 1.6%, which is different from GPT-4’s 0.86%. So I think that does get into the ballpark, like you said, but it’s not exactly right. Anyway, we’re into the details. I think you’re right about the order of magnitude, and that’s good enough for a Fermi estimate.
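Spelling out that counting as a quick script (this is just my own arithmetic; treating each way as a 1/1000 event only gives the rough ballpark, not the exact figure):

```python
# Number of ways to get exactly three, four, or five 1's among five dice,
# and the rough estimate of treating each way as a 1/1000 event.
from math import comb

ways = [comb(5, k) for k in (3, 4, 5)]
print(ways, sum(ways))   # [10, 5, 1] 16
print(sum(ways) / 1000)  # 0.016, i.e. the rough 1.6% ballpark
```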
Yes, that’s the main place I’m still uncertain: the ten combinations of three 1’s have to be mutually exclusive (non-overlapping) for their probabilities to simply add, which I’m having trouble visualizing. If you rolled six dice, the chance that three pre-selected specific dice would all be 1’s, or that the other three dice would all be 1’s, could just be added together.
But since you have five dice, and you are asking whether three of them will be 1’s, or another overlapping set will be 1’s, you have to somehow make those events non-overlapping. That’s actually the part I left out (GPT told me this, so I’m not sure, but it sounds sensible): you include the chance that the two leftover dice will both not be 1’s. There’s a 9/10 chance that each will not be a 1, so a 0.81 chance that both will not be 1’s, and you multiply this 0.81 by the 1/1000 for each set of three 1’s. So that slightly lowers that part of the estimate to 10 × (1/1000) × 0.81 = 0.81%.
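Written out as a quick check (this is the standard exactly-three-successes term, not GPT’s output):

```python
# Exactly three 1's among five d10: 10 non-overlapping patterns,
# each with probability (1/10)^3 * (9/10)^2.
from math import comb

p_exactly_three = comb(5, 3) * (0.1 ** 3) * (0.9 ** 2)
print(p_exactly_three)  # ~0.0081, i.e. 0.81%
```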
So you have excluded the extra 1’s from the sets of three 1’s, but then you have to do the same calculation for the sets of four 1’s and the one set of five 1’s. The set of five 1’s is actually very easy: there’s a 1/10 chance that each die lands on 1, so all of them together is (1/10)^5 = 1/100,000, adding only 0.001% to the final total. The four-1’s case is also much less likely than three 1’s, because you have to roll an extra 1 and there are only five ways to choose which four dice show 1’s. You need four 1’s and one not-1, or 5 × (1/10,000) × 0.9 = 0.045%.
0.81% + 0.001% + 0.045% = 0.856%
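And the whole thing as a binomial tail sum, just to confirm the arithmetic above:

```python
# P(at least three 1's on five ten-sided dice)
# = sum over k = 3..5 of C(5, k) * (1/10)^k * (9/10)^(5 - k)
from math import comb

p = sum(comb(5, k) * (0.1 ** k) * (0.9 ** (5 - k)) for k in range(3, 6))
print(p)  # ~0.00856, i.e. ~0.856%
```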
Still not 100% sure, because I suck at combinatorics, but this seems pretty likely to be correct. I’m mainly going off that 1/1,000 intuition for any set of three 1’s, repeated ~10 times because there are five dice, and the rest sounds sensible.