unless you posit an additional mechanism, e.g. that fields with terrible replication rates have a higher standard deviation than fields without them.
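(As a quick illustration of how that posited mechanism could work, here's a minimal sketch; the σ=25 value is a made-up number for illustration, not something from the thread. A lower-mean field with a wider spread can carry roughly the same right-tail mass as a higher-mean field.)

```python
from scipy.stats import norm

# Hypothetical parameters: a low-mean but high-variance field vs. a
# higher-mean field with ordinary variance, compared at a cutoff of 150.
# (sigma=25 is an illustrative assumption, not a figure from the thread.)
low_mean_wide = norm.sf(150, loc=115, scale=25)  # P(X > 150) ~= 0.081
high_mean = norm.sf(150, loc=130, scale=15)      # P(X > 150) ~= 0.091
print(low_mean_wide, high_mean)  # comparable tail mass despite the lower mean
```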
Why would that be relevant?
If the mean/median is higher, the tails are usually higher as well.
A Norm(μ=115, σ=15) distribution will have a much lower proportion of data points above 150 than a Norm(μ=130, σ=15) distribution. The same argument holds for other realistic distributions. So if all I know about fields A and B is that B has a much lower mean than A, by default I’d also assume B has a much lower 99th percentile than A, and a much lower percentage of people above some “genius” cutoff.
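(A quick check of those proportions, as a minimal sketch using scipy's normal survival function; the cutoff of 150 is taken from the example above:)

```python
from scipy.stats import norm

# Fraction of each distribution above the cutoff of 150.
for mu in (115, 130):
    tail = norm.sf(150, loc=mu, scale=15)  # survival function: P(X > 150)
    print(f"Norm(mu={mu}, sigma=15): P(X > 150) = {tail:.4f}")
# ~0.0098 for mu=115 vs. ~0.0912 for mu=130, roughly a 9x difference.
```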
Oh I see, you mean that the observation is weak evidence for the median model relative to a model in which the most competent researchers mostly determine memeticity, because a higher median usually means higher tails. I think you’re right, good catch.