I’ve cleaned up the data and put it here.
Here’s a “sideways cumulative density function”, showing all guesses from lowest to highest:
There were a lot of guesses of exactly “500”, but that might just be because 500 is a nice round number. More people guessed within 50 of 508M (165) than in the 100-wide regions immediately below or above it (126 within 50 of 408, 88 within 50 of 608), and more people guessed within 50 of 739 (107) than in the regions immediately below or above it (91 within 50 of 639, 85 within 50 of 839). (All figures are in millions.)
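For concreteness, here is how the “within 50 of X” counts above can be computed. The guesses below are made-up placeholders, not the actual survey data:

```python
# Hypothetical sketch: counting guesses near a center value.
# The `guesses` list is invented sample data for illustration only.

def count_within(guesses, center, radius=50):
    """Count guesses in the closed interval [center - radius, center + radius]."""
    return sum(1 for g in guesses if abs(g - center) <= radius)

guesses = [350, 400, 500, 500, 508, 520, 600, 700, 739, 750, 800]
print(count_within(guesses, 508))  # counts guesses in [458, 558]
print(count_within(guesses, 739))  # counts guesses in [689, 789]
```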
Here’s a histogram that shows this. But to actually see a dip between the 508-ish numbers and the 739-ish numbers, the bucketing needs to put them into separate bins with another bin in between, so I don’t trust this very much:
If someone knows how to make an actual probability density chart, that would be better, because it wouldn’t be sensitive to these arbitrary choices of where to place the histogram bin boundaries.
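One standard way to get such a chart is a kernel density estimate, which replaces the histogram’s hard bin edges with a smooth kernel centered on each data point. A minimal pure-Python sketch with a Gaussian kernel and Silverman’s rule-of-thumb bandwidth (function names are my own; a real plot would evaluate `pdf` on a grid and draw it):

```python
import math
import statistics

def silverman_bandwidth(data):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    n = len(data)
    sd = statistics.stdev(data)
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles
    return 0.9 * min(sd, (q3 - q1) / 1.34) * n ** (-0.2)

def gaussian_kde(data, h):
    """Return a function estimating the probability density at a point x."""
    n = len(data)
    norm = 1.0 / (n * h * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    return pdf
```

Unlike a histogram, the resulting curve has no bin boundaries to choose, though the bandwidth `h` plays an analogous (if less arbitrary-feeling) role.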
Here is a kernel density estimate of the “true” distribution, with bootstrapped pointwise 95% confidence bands from 999 resamples:
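For anyone who wants to reproduce such bands without Mathematica, here is a rough pure-Python sketch of the percentile-bootstrap procedure (fixed-bandwidth Gaussian KDE, pointwise bands; all names, and the bandwidth and grid in any usage, are placeholders of mine):

```python
import math
import random

def kde(data, h, x):
    """Fixed-bandwidth Gaussian kernel density estimate at point x."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)

def bootstrap_band(data, h, grid, n_boot=999, alpha=0.05, seed=0):
    """Pointwise percentile-bootstrap confidence band for the KDE on a grid."""
    rng = random.Random(seed)
    n = len(data)
    curves = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in range(n)]  # sample with replacement
        curves.append([kde(resample, h, x) for x in grid])
    lo_i = int((alpha / 2) * (n_boot - 1))
    hi_i = int((1 - alpha / 2) * (n_boot - 1))
    band = []
    for j in range(len(grid)):
        vals = sorted(c[j] for c in curves)
        band.append((vals[lo_i], vals[hi_i]))  # (lower, upper) at grid[j]
    return band
```

These are plain percentile intervals, i.e. the “biased and unaccelerated” kind described in the plot notes, not BCa intervals.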
It looks plausibly bimodal, though one might want to construct a suitable hypothesis test of unimodality versus multimodality. Unfortunately, as you noted, we cannot distinguish the hypothesis that the bimodality is due to rounding (at 500M) from the hypothesis that it is due to ambiguity between Europe and the EU; this holds even if a hypothesis test rejects a unimodal model. If anyone is still interested in testing for unimodality, I suggest Efron and Tibshirani’s bootstrap approach.
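The test Efron and Tibshirani describe is essentially Silverman’s critical-bandwidth test: find the smallest bandwidth at which the Gaussian KDE is still unimodal, then check how often a smoothed bootstrap drawn from that estimate looks multimodal. A rough pure-Python sketch of my own (simplified — for instance, it skips the variance-rescaling correction they discuss, and it counts modes on a fixed grid):

```python
import math
import random

def kde_values(data, h, grid):
    """Gaussian KDE evaluated at each grid point."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
            for x in grid]

def n_modes(data, h, grid):
    """Number of local maxima of the KDE on the grid."""
    y = kde_values(data, h, grid)
    return sum(1 for i in range(1, len(y) - 1) if y[i - 1] < y[i] >= y[i + 1])

def critical_bandwidth(data, grid, lo=1e-3, hi=None, iters=30):
    """Smallest bandwidth at which the KDE is unimodal, found by bisection.
    Relies on the mode count being monotone in h for Gaussian kernels."""
    if hi is None:
        hi = max(data) - min(data)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if n_modes(data, mid, grid) <= 1:
            hi = mid
        else:
            lo = mid
    return hi

def unimodality_pvalue(data, grid, n_boot=200, seed=0):
    """Smoothed-bootstrap p-value for H0: the underlying density is unimodal."""
    rng = random.Random(seed)
    h1 = critical_bandwidth(data, grid)
    hits = 0
    for _ in range(n_boot):
        # Resample and smooth: equivalent to sampling from the KDE at h1.
        smoothed = [rng.choice(data) + rng.gauss(0, h1) for _ in range(len(data))]
        if n_modes(smoothed, h1, grid) > 1:
            hits += 1
    return hits / n_boot
```

A small p-value means the data need an implausibly large bandwidth to look unimodal, i.e. evidence for more than one mode — though, as noted above, that still would not tell us *why* there are two modes.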
Edit: Updated the plot. I switched from adaptive bandwidth to fixed bandwidth (because it seems to achieve higher efficiency), so parts of what I wrote below are no longer relevant—I’ve put these parts in square brackets.
Plot notes: [The adaptive bandwidth was achieved with Mathematica’s built-in “Adaptive” option for SmoothKernelDistribution, which is horribly documented; I think it uses the same algorithm as ‘akj’ in R’s quantreg package.] A Gaussian kernel was used with the bandwidth set according to Silverman’s rule of thumb [and the sensitivity (‘alpha’ in akj’s documentation) set to 0.5]. The bootstrap confidence intervals are “biased and unaccelerated” because I don’t (yet) understand how bias-corrected and accelerated (BCa) bootstrap confidence intervals work. Tick marks on the x-axis represent the actual data, with a slight jitter added to each point.