This is related to the difference between normal distributions and log-normal/power-law distributions: in a normal distribution the tails are very thin, so outliers hardly matter at all. In particular, the CLT gives us normal distributions when a large collection of small effects is added up, which is probably the case in genetics, whereas for log-normal/power-law distributions, only a few outlier data points matter at all.
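To make the contrast concrete, here's a minimal numpy sketch (my illustration, with arbitrary parameters): a total built from many thin-tailed draws is never dominated by any single draw, while a lognormal total with a large log-scale can be mostly one outlier.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Thin-tailed case: many small positive effects (CLT territory).
normal_draws = rng.normal(loc=1.0, scale=0.1, size=n)
# Heavy-tailed case: lognormal draws with a large log-scale sigma.
lognormal_draws = rng.lognormal(mean=0.0, sigma=3.0, size=n)

for name, draws in [("normal", normal_draws), ("lognormal", lognormal_draws)]:
    print(f"{name}: largest draw is {draws.max() / draws.sum():.2%} of the total")
```

The normal total spreads responsibility over all 10,000 draws; the lognormal total typically assigns a double-digit percentage to its single largest draw.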
This maps onto the polycausal vs. monocausal distinction reasonably well.
Indeed, intuitions about which of the two patterns/three distributions is most common or most explanatory are likely a huge crux underlying a whole lot of topics.
Per the missing heritability problem, it's not even clear that genetics works like this, and it's hard to come up with any domain where the case for this picture is stronger than it is for genetics.
I think I'll get more into some of this in later posts.
Alright, I want to see your take on how the missing heritability problem blocks massively polycausal/normal distributions from being the dominant factor in human traits.
Agreed that in other areas, things are more monocausal than in genetics/human traits.
First, it should be noted that human traits are usually lognormally distributed, with apparent normal distributions being an artifact. E.g. while IQ is normally distributed, per item response theory it has an exponential relationship to the likelihood of success at difficult tasks. See also Most of What You Read on the Internet is Written by Insane People. Etc. So it's not really about normal vs. lognormal distributions; it's about linear diffusion of lognormals vs. exponential interaction[1] of normals[2].
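Here's a small sketch of the item-response-theory point, using a two-parameter logistic (2PL) item with made-up difficulty and discrimination values:

```python
import numpy as np

# Hypothetical hard, discriminating item (2PL model; parameters invented).
difficulty, discrimination = 3.0, 1.5

def p_success(ability_sd):
    """Probability of solving the item given ability in SD units."""
    return 1 / (1 + np.exp(-discrimination * (ability_sd - difficulty)))

for a in [0, 1, 2, 3, 4]:
    p = p_success(a)
    print(f"ability {a:+d} SD: P(success) = {p:.4f}, odds = {p / (1 - p):.4f}")
```

Each +1 SD of ability multiplies the odds of success by exp(1.5) ≈ 4.5, so even though the underlying ability is normally distributed, performance on difficult tasks ends up long-tailed.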
There are several different proposed solutions to the missing heritability problem. One proposal is rare variants, since they aren't picked up by most sequencing technology. And the rarer the variant, the larger its effect size can be, which makes the rare variants end up as our "sparse lognormals".
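A toy simulation of that architecture (allele frequencies and effect sizes invented for illustration): a dense background of common small-effect variants behaves like the CLT normal, while a sparse set of rare large-effect variants produces the extreme outliers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 10_000

# Polygenic background: 1,000 common variants (freq 0.5), tiny effects each.
common_geno = rng.binomial(2, 0.5, size=(n_people, 1_000))
common_part = common_geno @ rng.normal(0.0, 0.03, size=1_000)

# Sparse part: 50 rare variants (freq 0.001), large effects each.
rare_geno = rng.binomial(2, 0.001, size=(n_people, 50))
rare_part = rare_geno @ rng.normal(0.0, 2.0, size=50)

for name, part in [("common", common_part), ("rare", rare_part)]:
    z = np.abs(part - part.mean()) / part.std()
    print(f"{name} part: most extreme person is {z.max():.1f} SDs from the mean")
```

The common part stays within the roughly ±4 SD range you'd expect from 10,000 normal draws; the rare part puts a handful of carriers far outside it.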
But let's say rare variants have negligible effect sizes, so they don't give us linear diffusion of lognormals, and the long-tailedness of human traits is instead due to some sort of exponential interaction.
Then another thing that could give us missing heritability is if the apparent traits aren't the true genetic traits; rather, the true genetic traits trigger some dynamics, with e.g. the largest dynamics dominating, and (the logarithm of) those dynamics is what we end up measuring as the trait. But that's just linear diffusion of sparse lognormals at a phenotypic level of analysis.
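A loose sketch of that mechanism (my construction, not a specific model from the thread): normally distributed genetic inputs set the growth rates of a few competing exponential processes, and the measured trait is the log of their total.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_processes, t = 10_000, 5, 20.0

# Normally distributed genetic inputs set the growth rate of each process.
rates = rng.normal(0.0, 1.0, size=(n_people, n_processes))
# Processes grow exponentially; the measured trait is the log of the total.
measured = np.log(np.exp(rates * t).sum(axis=1))

additive = rates.sum(axis=1)      # what a purely additive model would track
dominant = t * rates.max(axis=1)  # log-sum-exp ~ t * max rate for large t

print("corr(trait, additive score):", np.corrcoef(measured, additive)[0, 1])
print("corr(trait, dominant rate): ", np.corrcoef(measured, dominant)[0, 1])
```

Because log-sum-exp is approximately t times the largest rate, the measured trait tracks the dominant dynamic almost perfectly, while an additive score over the same inputs captures much less of it, leaving an additive analysis with apparent missing heritability.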
[1] As in $\exp\left(\sum_i \beta_i x_i\right)$.
[2] Or, well, short-tailed variables; e.g. alleles are usually modelled as Bernoulli.