Thanks for doing what I should have done and actually running some data!
I ran your code in R. I think what is going on in the Cauchy case is that the spread of fac is far greater than that of the normal noise being added (the SD of the noise defaults to 1, whilst the Cauchy draws range over several orders of magnitude). If you plot(fac, out), you get a virtually straight line, which would explain the lack of divergence between the top-ranked fac and out.
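To make the scale point concrete, here is a minimal sketch of the setup as I understand it (my guess at the shape of the original code, not a copy of it): a Cauchy-distributed factor with standard normal noise added.

```r
# Sketch of the assumed setup: Cauchy factor, standard normal noise.
set.seed(1)
n   <- 1000
fac <- rcauchy(n)        # heavy-tailed; extreme draws span orders of magnitude
out <- fac + rnorm(n)    # rnorm defaults to sd = 1, so the noise barely perturbs fac

plot(fac, out)               # looks like a virtually straight line
cor(rank(fac), rank(out))    # rank correlation is essentially 1
```

With the noise that small relative to fac, the top-ranked fac is almost always also the top-ranked out, which would explain why the Cauchy case shows no divergence at the top.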
I don’t have any analytic results to offer, but playing with R suggests that in the normal case the probability of the greatest factor score picking out the greatest outcome goes down as N increases. To see this for yourself, replace rcauchy with runif or rnorm, and increase N to 10000 or 100000. In the normal case it is still unlikely that max(fac) picks out max(out) once random noise is added, but the relative position seems to be sample-size invariant: the outcome for the top-ranked factor stays in roughly the same percentile as you increase the sample size.
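For anyone who wants to replicate this, here is a rough sketch of the check I mean (my own code, assuming the same factor-plus-noise setup as above): over repeated draws it estimates how often the top factor score is also the top outcome, and what percentile of the outcome the top factor score lands in, for a few values of N.

```r
# Rough sketch, not the original code: normal factor plus standard normal noise.
set.seed(2)
sim <- function(n, rfac = rnorm, reps = 1000) {
  hit <- logical(reps)   # does max(fac) coincide with max(out)?
  pct <- numeric(reps)   # percentile of out at the top factor score
  for (i in seq_len(reps)) {
    fac <- rfac(n)
    out <- fac + rnorm(n)
    top <- which.max(fac)
    hit[i] <- top == which.max(out)
    pct[i] <- rank(out)[top] / n
  }
  c(p_top = mean(hit), mean_percentile = mean(pct))
}

sapply(c(100, 1000, 10000), sim)                 # p_top should fall as N grows
sapply(c(100, 1000, 10000), sim, rfac = runif)   # uniform factor (runif defaults to (0,1), so the noise dominates)
```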
I can intuit why this is the case: in the bivariate normal case the distribution should be elliptical, with the density of observations steadily falling as you move outward from the centre of the ellipse in the N → infinity limit. So as N increases, you are more likely to ‘fill in’ the bulges on the ellipse at the right tail that give you the divergence; with a smaller N, this is less likely. (I find the uniform result more confusing: the ‘N → infinity’ case should be a parallelogram, so you should just be picking out the top right corner, so I’d guess the probability of picking out the max factor might be invariant to sample size… not sure.)
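If it helps, here is a quick visual check of that geometric picture (again my own sketch; rescaling the uniform factor to a range comparable to the normal one is an assumption on my part):

```r
# Side-by-side scatterplots: elliptical cloud for a normal factor versus a
# band-like cloud for a (rescaled) uniform factor, each with normal noise added.
set.seed(3)
n  <- 100000
op <- par(mfrow = c(1, 2))
fac_n <- rnorm(n)
plot(fac_n, fac_n + rnorm(n), pch = ".", main = "normal factor")
fac_u <- runif(n, -3, 3)   # rescaled uniform; an assumption, not the original code
plot(fac_u, fac_u + rnorm(n), pch = ".", main = "uniform factor")
par(op)
```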