Not quite. Expected value is linear but doesn’t commute with multiplication. Since the Drake equation is pure multiplication, you could use point estimates of the means in log space and sum those to get the mean in log space of the result, but even then you’d *only* have the mean of the result, whereas what would really be a “paradox” is if P(N=1|N≠0) turned out to be tiny.
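A quick sketch of the log-space point, with made-up lognormal factors standing in for the Drake terms (the distributions here are purely illustrative, not from any real estimate). Summing the means of the logs does give the mean of the log of the product, but exponentiating that recovers the geometric mean, not E[N]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Three hypothetical independent multiplicative factors (illustrative only).
factors = [rng.lognormal(mean=m, sigma=s, size=n)
           for m, s in [(0.0, 1.0), (-1.0, 2.0), (0.5, 0.5)]]
N = np.prod(factors, axis=0)

# Linearity in log space: E[log N] = sum of E[log X_i].
lhs = np.log(N).mean()
rhs = sum(np.log(x).mean() for x in factors)
print(lhs, rhs)               # agree (up to floating-point rounding)

# But exp(E[log N]) is the geometric mean of N, not its mean:
print(np.exp(lhs), N.mean())  # these differ substantially
```

So the log-space shortcut is internally consistent, but the number it hands back is a median-like summary of N, which is exactly why a point estimate says nothing about P(N=1|N≠0).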
The authors grant Drake’s assumption that everything is uncorrelated, though.
You don’t need any correlation between X and Y to have E[XY]≠E[X]E[Y]. Suppose both variables are 1 with probability .5 and 2 with probability .5; then their mean is 1.5, but the mean of their product is 2.25.
Indeed, each has a mean of 1.5; so the product of their means is 2.25, which equals the mean of their product. We do in fact have E[XY]=E[X]E[Y] in this case. More generally, this holds iff X and Y are uncorrelated, because that’s just how “uncorrelated” is defined in the technical sense. (If you really want to get into fundamentals, E[XY]−E[X]E[Y] isn’t the most fundamental definition of covariance, I’d say, but it’s easily seen to be equivalent.) Either way you still have to show that independence implies uncorrelatedness, and then do the analogues for products of more than two variables.
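To spell the arithmetic out, here’s a quick exact check of the example (a sketch added for illustration, not part of the original exchange), plus the perfectly correlated contrast case Y = X:

```python
from itertools import product

outcomes = [1, 2]

# Independent case: X and Y each equal 1 or 2 with probability 1/2.
EX = sum(x * 0.5 for x in outcomes)      # E[X] = 1.5
EY = EX                                  # same distribution
EXY = sum(x * y * 0.25                   # each (x, y) pair has probability 1/4
          for x, y in product(outcomes, outcomes))
print(EX * EY, EXY)   # 2.25 2.25 -- equal, as independence guarantees

# Correlated contrast: take Y = X, so E[XY] = E[X^2].
EX2 = sum(x * x * 0.5 for x in outcomes)
print(EX2, EX * EX)   # 2.5 2.25 -- unequal; covariance is 0.25
```

The gap EX2 − EX·EX in the second case is exactly the covariance, which is zero in the independent case by construction.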
Gah, of course you’re correct. I can’t imagine how I got so confused, but thank you for the correction.