I’m not sure I understand your question, but eliminating the left tail of a bell curve would change the average but not necessarily extend the right tail.
What exactly happens depends on the model, but I think it would be very difficult to build a model with nonzero heritability that produced a bell curve and where truncating the left tail did not affect the right tail.
Usually bell curves arise from the sum of many small discrete variables. That appears to be true for IQ. Under this model, any form of selection has basically the same effect, at least in the long term. If the old equilibrium had random mating and the next generation is also produced by random mating, then a new bell curve will be produced in the very next generation. If the old distribution were due to assortative mating, and that continues, it will take longer to reach equilibrium. But it will affect the right tail eventually.
Added: no, it takes more than one generation to reach equilibrium.
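To make the additive picture concrete, here is a minimal simulation sketch. The locus count, allele frequency, and the Gaussian stand-in for Mendelian segregation noise are illustrative assumptions, not anything established above: summing many small genetic contributions gives a bell curve, the survivors of a left-tail truncation are visibly skewed, and one round of random mating among the survivors already gives a roughly symmetric, right-shifted bell curve again.

```python
# Minimal sketch of the additive model: the trait is just a count of "+" alleles
# over many loci. Numbers (1000 loci, allele frequency 0.5, no environmental term)
# are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_people = 1000, 100_000
p_plus = 0.5

skew = lambda x: float(np.mean(((x - x.mean()) / x.std()) ** 3))

parents = rng.binomial(2 * n_loci, p_plus, size=n_people)   # sum over 2*n_loci allele copies
survivors = parents[parents >= np.median(parents)]          # truncate the left half

# Random mating among survivors; each child is modeled as the midparent value plus
# Gaussian segregation noise with roughly half the original additive variance.
moms = rng.choice(survivors, size=n_people)
dads = rng.choice(survivors, size=n_people)
children = (moms + dads) / 2 + rng.normal(0, np.sqrt(n_loci * p_plus * (1 - p_plus)), n_people)

for label, x in (("parents", parents), ("survivors", survivors), ("children", children)):
    print(f"{label:9s} mean={x.mean():7.1f}  sd={x.std():5.1f}  skewness={skew(x):+.2f}")
```

In this toy run the skewness goes from roughly 1 in the truncated survivors back down to about 0.1 in their children, while the mean moves up, which is the sense in which selection anywhere in the distribution ends up moving the right tail as well.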
Well, since IQ is forced to be a bell curve by definition, the fact that it is a bell curve doesn’t count as evidence for anything.
IQ tests are normalized (so they have a mean of 100 and a standard deviation of 15, but they are not forced to be normally distributed), so I think the distributional properties can be evidence for something.
I think you are mistaken and they simply are forced to be bell curves.
But even if IQ is an affine transformation of the number of questions answered correctly, the simple act of adding up the questions is likely to produce a bell curve, so its appearance is not much evidence.
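That "adding things up gives a bell curve" point is easy to check numerically. The sketch below assumes 50 items with assorted pass rates answered independently; real test items are positively correlated through ability, which is exactly the independence caveat raised further down, so the real-world approximation is rougher than this.

```python
# Toy check: the sum of many independent item scores is close to a normal curve.
# 50 items with assorted pass rates and 100,000 test-takers are made-up numbers.
import numpy as np

rng = np.random.default_rng(1)
pass_rates = rng.uniform(0.2, 0.9, size=50)
answers = rng.random((100_000, 50)) < pass_rates    # independent Bernoulli items
raw_scores = answers.sum(axis=1)

mean, sd = raw_scores.mean(), raw_scores.std()
hist, edges = np.histogram(raw_scores, bins=range(0, 52), density=True)
centers = edges[:-1] + 0.5
normal = np.exp(-0.5 * ((centers - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

print("peak of the matching normal density:", round(normal.max(), 3))
print("largest gap between histogram and normal:", round(np.abs(hist - normal).max(), 3))
```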
I confirm that IQ tests are forced to be bell curves; at least those using the methodology I learned at university.
Calibrating the test (giving it to many people) returns information like: “50% of test subjects solve at most 23 of these 50 problems” and “98% of test subjects solve at most 41 of these 50 problems”.
Then the next step is to map these data onto the bell curve, saying: “therefore 23⁄50 means 0 sigma = 100 IQ” and “therefore 41⁄50 means 2 sigma = 130 IQ”.
But you can’t assume that this is linear. To explain it simply, let’s assume that the more intelligent person always solves a superset of the problems the less intelligent person solved. Therefore, any person with IQ between 100 and 130 would solve all the 23 “easy” problems, some of the 18 “hard” problems, and none of the 9 “impossible” problems. But how many exactly—that depends on how difficult exactly those “hard” problems are. Maybe they are relatively easy, and a person with IQ 115 will solve all of them; and maybe they are relatively hard, and a person with IQ 115 will solve none of them. But that is a fact about the test, not about the intelligence distribution of the population. Therefore this fact should be removed in the normalization.
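For concreteness, here is a sketch of that calibration step in code: raw scores from a norming sample are converted to percentiles and then pushed through the inverse normal CDF. The norming sample is simulated and the helper name raw_to_iq is invented, so the particular outputs will not match the 23 → 100 and 41 → 130 example above; the point is the mechanism, which forces the reported IQs toward a bell curve regardless of how the raw scores were distributed.

```python
# Sketch of quantile-based calibration: percentile in the norming sample -> z-score -> IQ.
# The norming sample is simulated; real calibration would use actual test-takers.
from statistics import NormalDist
import random

random.seed(2)
norming_sample = [sum(random.random() < 0.5 for _ in range(50)) for _ in range(10_000)]

def raw_to_iq(raw_score, sample=norming_sample):
    below = sum(s < raw_score for s in sample)
    ties = sum(s == raw_score for s in sample)
    percentile = (below + 0.5 * ties) / len(sample)
    percentile = min(max(percentile, 1e-4), 1 - 1e-4)     # keep the inverse CDF finite
    # This inverse-normal step is what "puts the data on the bell curve":
    return 100 + 15 * NormalDist().inv_cdf(percentile)

for raw in (23, 30, 41):
    print(raw, "raw ->", round(raw_to_iq(raw)), "IQ")
```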
This is NOT forcing the outcome to be a bell curve. This is just normalizing to a given mean and standard deviation, a linear operation that does not change the shape of the distribution.
Consider a hypothetical case where an IQ test consists of 100 questions and 100 people take it. These hundred people all get a different number of questions correct—from 1 to 100: the distribution of the number of correct answers is flat or uniform over [1 .. 100]. Now you normalize the mean to 100 and one standard deviation to 15, and yet the distribution remains flat and does not magically become a bell curve.
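That thought experiment can be run directly; a short sketch, using the flat raw scores from the comment above:

```python
# Flat raw scores, linearly rescaled to mean 100 and sd 15: the shape stays flat.
import numpy as np

raw = np.arange(1, 101)                                   # one person per score, 1..100
iq = 100 + 15 * (raw - raw.mean()) / raw.std()

gaps = np.diff(iq)
print("mean:", round(iq.mean(), 1), " sd:", round(iq.std(), 1))
print("all gaps between neighbouring people identical:", bool(np.allclose(gaps, gaps[0])))
```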
This is a fact about the test.
Maybe it was wrong for me to use the word “normalization” in this context, but no, the distribution of raw scores is not mapped linearly to the distribution of IQs. It is mapped onto the bell curve.
Otherwise every intelligence test would produce a different intelligence curve, because inventing 100 questions that produce the same distribution of raw scores as some other set of 100 questions would be practically impossible. (Just try to imagine how you would obtain a set of 100 questions for which the distribution of raw scores comes out flat. Keep in mind that every calibration on many real subjects costs a lot of money, and with only a few subjects you won’t get statistical significance.)
Could you provide links showing this to be the case?
There is a helpful theorem: the central limit theorem.
It assumes that all the variables you’re summing are independent.
Weaker forms of CLT hold up even if you relax the independence assumption. See Wikipedia for details.
As a practical matter, in IQ testing even with only linear normalization of raw scores you will get something approximately Gaussian.
I wouldn’t count on that more than about one standard deviation away from the mean.
Not exactly Gaussian—that’s even theoretically impossible because a Gaussian has infinitely long tails—but approximately Gaussian. Bell-shaped, in other words.
Fallacy of grey. Certain approximations are worse than others.
So in this particular example, which approximation is worse than which other approximation and by which metric?
An IQ test in which the scores are only normalized linearly is a worse approximation to a Gaussian distribution than one which is intentionally designed to give Gaussianly distributed scores.
Well, duh, but I don’t see the point.
Perhaps, but it doesn’t follow that the new normalization should be Gaussian. One test I’d like to see is what happens when you give a test calibrated for one population to a different one.
If the test is normalized for a population A, then if we give it to a population B, the results don’t have to be Gaussian. The normalization occurs only once, when the relationship between the raw scores and the IQ values is defined. Later the existing definition can be reused.
You would get a somewhat different shape when you a) calibrate the test for population A and then measure population B, or b) calibrate the test for A+B and then measure population B.
Probably the most correct way to compare two populations would be to skip the normalization step and just compare the histograms of raw scores for both populations. (I am not good enough at math to say exactly how.)
Also, I am not sure how much such a comparison would depend on the specific test. Let’s imagine that we have one population with an average IQ of 100 and another population with an average IQ of 120. If we give them a test consisting of IQ-110-hard questions, the two populations will probably seem more different than if we give them a test consisting of a mix of IQ-80-hard and IQ-140-hard questions.
This backs my general notion that for a lot of measurements (especially of people?), we need graphs, not single numbers.
You can compare by looking at which percentile of population B the median of population A corresponds to.
Edit: also once you’ve compared several populations this way, you can try to see if there is a way to normalize the test such that the distributions for all the populations have similar shapes.
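A sketch of that comparison on made-up raw scores (the two binomial samples below stand in for the two populations):

```python
# Compare two populations by raw scores only: where does the median of A sit in B?
import numpy as np

rng = np.random.default_rng(3)
scores_a = rng.binomial(50, 0.55, size=5000)   # hypothetical population A
scores_b = rng.binomial(50, 0.62, size=5000)   # hypothetical population B

median_a = np.median(scores_a)
pct_in_b = 100 * (scores_b < median_a).mean()
print(f"median of A = {median_a:.0f}/50, which sits at about percentile {pct_in_b:.0f} of B")
```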
Oh, yeah. But I think it is probably true that it is difficult to build a model of a continuous trait in which truncation of one tail does not affect the equilibrium of the other tail.
The more relevant point is additive heritability (aka h^2 or narrow-sense heritability). Any model will have some, so my condition of having any is not helpful. But if a trait has a lot, that means the trait is pretty close to counting genes, hence the distribution must be a bell curve. But that doesn’t mean that it is a constraint on models.
Not all traits are additively heritable, e.g., the malaria protection/sickle cell anemia gene, and in particular it’s not obvious that intelligence is additively heritable. One theory I’ve heard is that things like autism are a result of having too many “intelligence genes”.
Even in the most extreme case of dominance, where H^2 greatly diverges from h^2, the additive heritability is not zero. (But if you had a trait in which heterozygotes were distinguishable from homozygotes, but the two types of homozygotes were not distinguishable, then h^2=0. I know of no such trait.)
Here’s a short-term analysis that may be more convincing.
I assume perfect heritability and pm’s choice of 50% selection, both to make the effects larger. I assume additive genetics because that’s what we expect from the assumption of a bell curve, and I assume mating is at random. The far right tail is largely produced by two parents who are both on the right half, or even both on the tail; the farther right you go, the more true this is. For each person who could have a right-tail child if only they found the right mate, eliminating the half of the population that wouldn’t do doubles their odds of finding an appropriate mate, and thus of having a right-tail child. Thus, the right tail is twice as big. The further out we go, the closer it is to exactly twice as big. If everyone has twice as many children to make up for the population being cut in half, then the tail is four times as big.
If there is strong assortative mating, the people on the right tail weren’t going to have children with the left half anyway, and the first effect doesn’t apply, since the selection only eliminates pairings that weren’t going to happen. Indeed, assortative mating is very similar to truncation selection, so combining the two is redundant in the first generation.
In the first generation, the left tail does not look at all Gaussian. In the long term, it does become Gaussian. In the short term the right tail becomes thicker, but in the long term the variance has gone down and the right tail becomes smaller, starting about two standard deviations from the original mean.
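A rough simulation of this first-generation argument, under the stated assumptions (perfect heritability, additive genetics, random mating, one 50% truncation). Treating segregation as Gaussian noise with half the original variance is my own modelling shortcut. The right-tail proportion comes out around three times larger at two to three standard deviations and approaches a factor of four further out under these assumptions.

```python
# First generation after a one-time 50% truncation, with perfect heritability and
# additive genetics: child = midparent + segregation noise (assumed Gaussian with
# half the original variance). Units are standard deviations of the original trait.
import numpy as np

rng = np.random.default_rng(4)
n = 2_000_000
parents = rng.normal(0.0, 1.0, size=n)
survivors = parents[parents > np.median(parents)]

moms = rng.choice(survivors, size=n)
dads = rng.choice(survivors, size=n)
children = (moms + dads) / 2 + rng.normal(0.0, np.sqrt(0.5), size=n)

for cut in (2.0, 2.5, 3.0):
    before = (parents > cut).mean()
    after = (children > cut).mean()
    print(f"share above {cut} sd: before {before:.5f}, children {after:.5f}, ratio {after/before:.1f}x")
```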
If you did that then after one or two generations, regression to the mean would set the average IQ right back to where it was (almost). If you eliminated enough of the left tail over several generations to actually change the average to a stable higher value, then the right tail would be extended.
Like I said I’m not commenting on the effect of the Holocaust because I don’t know anything about it.
If UberHitler kills everyone with IQ<100, that raises the average IQ without increasing the number of people with high IQ. After a few generations, you are back to a Gaussian with a smaller variance (you lost some genetic diversity) and a slightly larger mean, which means that at any sufficiently high IQ level you have fewer people with that IQ.
The reversal test makes this sound a bit strange:
If you have a population with an average IQ of 100 and you add in an equal number of people with an IQ of 80, then after a generation you will have a Gaussian with a larger variance. Hence there will be more geniuses due to more genetic variation.
Surely you don’t believe that? I realize that this isn’t a perfect reversal but that sounds very odd to me.
Anyway, here is the crude model of intelligence that I’m working with—I admit I’m not an expert on this topic, and I have some reading up to do on the genetic basis of intelligence. Intelligence is a polygenic trait that can be roughly (very roughly) modeled as a bunch of genetic sites, each with either a plus or a minus allele (keeping it simple with just two possibilities). The more plus alleles you have, the more likely you are to have a high IQ (genes and intelligence aren’t perfectly correlated). Populations with a higher average IQ have a higher concentration of plus alleles, so the chance of receiving many of them is increased. But if you take away all of the people who, due to bad luck, received a very large number of minus alleles, you haven’t altered the concentration of alleles in the gene pool that much—this is part of why regression to the mean occurs. But if you consistently select for people with a higher concentration of plus alleles, then the odds of any one child having a lot of plus alleles increase in the population. This is how artificial selection works on any polygenic trait. Corn kernels are huge because the people who cultivated corn selected for the biggest kernels—yes, there was a loss of genetic diversity, and yes, there was a decrease in the variance, but nevertheless what was observed were corn kernels bigger than any corn before.
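A quick numeric illustration of the regression-to-the-mean point in this model, with all numbers invented (200 loci, plus-allele frequency 0.5, a purely additive score): culling only the unlucky far-left tail barely moves the plus-allele frequency, while culling the bottom half moves it noticeably more, and repeating that every generation is what adds up to real change.

```python
# Plus/minus allele bookkeeping: how much does one round of culling change the
# frequency of "+" alleles in the gene pool?
import numpy as np

rng = np.random.default_rng(5)
n_people, n_loci = 50_000, 200
genotypes = rng.binomial(2, 0.5, size=(n_people, n_loci))   # "+" allele count per locus
scores = genotypes.sum(axis=1)                              # purely additive trait

print("whole population:   ", round(genotypes.mean() / 2, 4))
for pct in (2, 50):
    keep = scores > np.percentile(scores, pct)
    print(f"bottom {pct:2d}% removed:", round(genotypes[keep].mean() / 2, 4))
```

Even the 50% cull only shifts the allele frequency by about two percentage points per generation in this toy setup, which is why sustained selection over many generations, rather than a single cull, is what produces corn-kernel-sized changes.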
It would happen in your model, if there is no perfect overlap between the set of sites in one population and the set of sites in the other population. With two populations, you have more sites. The smartest possible mega-genius is from the mixed population and has + alleles on each site; none of the original populations can have a genius this smart at all.
To see that on less extreme rarity (and approximately for a large number of alleles), write down the ratio of two Gaussians with different means and variances. Simplify. Observe that the ratio of the larger variance Gaussian to the smaller variance Gaussian gets arbitrarily high far from the mean.
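Writing that out (a sketch, with f_1 the larger-variance Gaussian and f_2 the smaller-variance one):

```latex
% Ratio of two normal densities, sigma_1 > sigma_2:
\[
\frac{f_1(x)}{f_2(x)}
  = \frac{\sigma_2}{\sigma_1}\,
    \exp\!\left(\frac{(x-\mu_2)^2}{2\sigma_2^2}-\frac{(x-\mu_1)^2}{2\sigma_1^2}\right).
\]
% The exponent is a quadratic in x whose leading coefficient,
% 1/(2 sigma_2^2) - 1/(2 sigma_1^2), is positive, so the ratio grows without
% bound as x goes to +/- infinity: far enough out, the wider distribution
% always has more people, whatever the two means are.
```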
Okay but that is an incredibly weak claim—I’m not interested in switching all of the plus alleles on because additivity starts to break down and having an IQ of say 500 isn’t particularly meaningful. For any reasonable definition of genius, artificially selecting for the smartest members of a population (what super-Hitler is doing), will increase the number of them.
Assume total heritability, random mating, additive genetics, and a single 50% truncation event. In the first generation, the right tail becomes 4x larger as a proportion of the population, but it gets smaller in equilibrium. The new mean is 0.8 standard deviations above the old mean. The new standard deviation is 0.6 times the old one. When it reaches equilibrium and becomes a Gaussian with those parameters, the crossover where the old population had a thicker tail than the new is about two standard deviations. At three standard deviations, the new distribution is only 1⁄10 of the old distribution. But I don’t know how much time it takes to get there.
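A quick arithmetic check of those figures, taking the stated equilibrium parameters at face value (old distribution N(0, 1), new N(0.8, 0.6^2) in units of the old standard deviation); statistics.NormalDist is from the standard library:

```python
# Where do the old and new densities cross, and how do the far tails compare?
from statistics import NormalDist

old = NormalDist(0.0, 1.0)
new = NormalDist(0.8, 0.6)

# Upper crossover: scan outward from the new mean until the old density wins.
crossover = next(x / 100 for x in range(80, 400) if old.pdf(x / 100) > new.pdf(x / 100))
print("densities cross at about", round(crossover, 2), "old standard deviations")

tail_old = 1 - old.cdf(3.0)
tail_new = 1 - new.cdf(3.0)
print(f"share above 3 sd: old {tail_old:.5f}, new {tail_new:.5f}, ratio {tail_new/tail_old:.2f}")
```

That reproduces the quoted picture: the crossover lands a bit above two old standard deviations, and above three the new population has roughly a tenth as many people.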
Thank you, I’m pretty surprised by that result. Two questions: does assortative mating merely slow down that process? And is there any way to increase both the average and the standard deviation?
You need new mutations to increase the standard deviation, that takes a lot of time and a big population size.
Also, having a genetic disorder applies larger selection pressure to the other genes.
If we are to think of some real ‘eugenic’ population bottleneck, such as a WW2-related one, the correlation between intelligence and survival is, frankly, shit. Plus, a lot of small, geographically co-located sub-populations in which a bunch of beneficial genes had been slowly increasing in prevalence get completely wiped out, with the loss of all copies of those genes.
Bottom line is, selective breeding of larger corn kernels works quickly because nature hasn’t been breeding for larger corn kernels to begin with; it has been breeding for optimum kernel sizes, and to get large kernels you’re just selecting genetic disorders. There’s nothing you can wreck about the brain that would turn you into a genius; there are plenty of things you can wreck about growth that would make corn kernels big.
Or just some mutagens.
It seems to me that this would work much better for traits that can be accomplished through loss of function (e.g. larger corn kernels, through loss of function of regulator genes) than in general. At too high a mutation rate, complex functionality can’t be preserved.
One thing to keep in mind eugenics-wise is that pretty much all the breeding methods we employ for other species are dysgenic—we are producing cripples to our own benefit or amusement. Damage this, damage that, select this bad gene, that bad gene, and you get yourself a docile, floppy-eared dog with the IQ equivalent of severe mental retardation, compared to a wolf.
I assume by ‘dysgenic’ you mean ‘less fit than unbred specimens for reproductive fitness in the wild’. (You couldn’t mean ‘reproductive fitness’ in general, given how many dogs there are compared to how many wolves there are now.)
This seems like an odd point to make. Of course we breed animals to be less-reproductively-fit-in-the-wild—if they were already ideal for our multifarious purposes, why would we be explicitly breeding them at all? (If they were already ideal for eating or being pets or whatever, we would simply capture & use them or raise them normally without any interference in their reproduction.)
It’d be a pointless point if there was a symmetry between fitness in the wild and fitness for our purpose. There isn’t—fitness in the wild is very seldom improved by loss-of-function mutations, whereas fitness for our purposes, starting from the species that have been evolving for fitness in the wild, very often is. Rapid success at breeding larger corn kernels is not going to generalize into rapid success at breeding ubermensch.
There’s no reason evolution would already have optimized for all the intelligence-related alleles; if it had, they would have reached fixation.
I think it is. All the genetic data seems to point to this: much of intelligence is genetic, highly polygenic, not at fixation, and additive. All of that translates to breedability: we have a lot of easily identified variants present in only parts of the population; hence, breedable.
There’s no question that evolution can continue. The issue is that the rates you can attain for different traits differ. For example, evolving smaller animals from larger animals (by a given factor) is an order of magnitude faster process than evolving larger animals from smaller animals. ( http://news.ucsc.edu/2012/01/body-size.html ). I think you wouldn’t disagree that it would be far quicker to breed a 50 point IQ drop than a 50 point IQ rise?
I guess you refer to those studies on intelligence genes which flood the popular media, which tend to have small effect sizes and are of exactly the kind that is very prone to spurious results.
But what does that have to do with breeding for our own purposes? It may be easier to destroy functionality than to create it, but evolution is creating functionality for living in the wild and doing something like hunting mice, while we’re interested in creating functionality to do something like understanding human social cues, trading it off against things like aggression and hostility towards the unknown. In both cases, functionality is being created and traded off against something else, and there’s no reason to expect the change in one case to be beneficial for the other. Border collies may be geniuses at memorizing words and herding sheep, and both of these feats required intense selection, but both skills are worse than useless for surviving in the wild as a wolf...
The original studies, yes, the ones like candidate-gene studies where n is rarely more than a few hundred, but the ones using proper sample sizes like n>50000 and genome-wide significance levels seem trustworthy to me. They seem to be replicating.
Well, my point was that you can’t expect the same rate of advances from some IQ breeding programme that we get when breeding traits arising via loss-of-function mutations.
They don’t seem to be replicating very well...
http://arstechnica.com/science/2014/09/researchers-search-for-genes-behind-intelligence-find-almost-nothing/
Sure, there’s a huge genetic component, but almost none of it is “easily identified”.
Generally you can expect that parameters such as, e.g., the initial receptor density at a specific kind of synapse would be influenced by multiple genes and have an optimum, where either a higher or a lower value is sub-optimal. So you can easily get one of the shapes from the bottom row in
http://en.wikipedia.org/wiki/Correlation_and_dependence#/media/File:Correlation_examples2.svg
i.e. little or no correlation between IQ and that parameter (and little or no correlation between IQ and any one of the many genes influencing said parameter).
edit: that is to say, for example, if we have an allele which slightly increases the number of receptors on a synapse between some neuron type A and some neuron type B, that can either increase or decrease intelligence depending on whether the activation of Bs by As would otherwise be too low or too high (as determined by all the other genes). So this allele affects intelligence, sure, but not in a simple, easy-to-detect way.
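A small simulation of that inverted-U situation, with all numbers invented: an allele that genuinely shifts the parameter by 0.2 units shows essentially zero linear correlation with the trait when the population is already sitting near the optimum.

```python
# An allele that shifts a parameter with an interior optimum: real effect on the
# parameter, ~zero linear correlation with the trait.
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
other_genes = rng.normal(0.0, 1.0, size=n)      # net effect of all the other genes
allele = rng.binomial(1, 0.5, size=n)           # focal allele adds 0.2 to the parameter
parameter = other_genes + 0.2 * allele          # e.g. receptor density
trait = -(parameter - parameter.mean()) ** 2    # optimum near the population-typical value

shift = parameter[allele == 1].mean() - parameter[allele == 0].mean()
print("parameter shift caused by the allele:", round(shift, 3))
print("corr(parameter, trait):", round(np.corrcoef(parameter, trait)[0, 1], 3))
print("corr(allele, trait):   ", round(np.corrcoef(allele, trait)[0, 1], 3))
```

So a simple linear association test can come up empty even though the allele matters; whether that is the typical situation for intelligence genes is the question being argued here.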
I am not sure this is generally true.
The wild equivalent to “fitness for our purposes” is a drastic change in the environment which starts to select for different criteria. In such conditions organisms certainly select for new-useful-function mutations, but they also select for loss-of-no-longer-useful-function mutations. Functionality tends to be expensive (e.g. in energy) and if you don’t need it, you’re better off discarding it.
Remnants of lost functionality are common.
Those drastic changes rarely happen, though. In humans, the most recent very well known one was adult lactose tolerance—something that switched lactase off in adulthood no longer does.
edit: and somewhat back to the original point with regards to eugenics—humans have been evolving intelligence for a while already, so selection for intelligence doesn’t seem like a dramatic change.
That, by the way, is an interesting example of both adding functionality (now adults can drink milk!) and losing functionality (the gene which turns off lactase production in adulthood got broken and no longer works in many people).
Yeah. Anyhow, my original point has to do with attempts to breed humans for intelligence. Humans have been evolving for greater intelligence for a very long time now; any free easy gains have already been made. You could probably get larger brain volume rather easily with birth by caesarean only, but that doesn’t seem like a good idea to me.
I don’t know. It may or may not be true, but it doesn’t look obvious to me.
The issue is that “evolving for greater intelligence” competes with other things like “evolving for greater strength” or “evolving for greater alpha-ness” or maybe even simply “evolving to survive famines”.
Because of TANSTAAFL, greater intelligence comes at a cost (as a trivial example, the human brain consumes a LOT of energy), and the trade-offs evolution makes are appropriate for the then-current environment. And our current environment is markedly different (there’s your drastic change) from the one in which modern humans actually evolved.
It is quite possible that some trade-offs which held down the growth of intelligence are no longer operational and humans can/will continue to evolve towards even higher IQ.
Practically, of course, the point is moot as evolution is very very slow and humans will self-modify much more rapidly than evolution could provide any noticeable gains.
Maybe, but as you say, it would come at a potential cost. E.g. a gain of a few points but you won’t survive a famine; that doesn’t sound very good.
Or much more insidiously, gains on an IQ test, at the expense of ability to form/organize/use complex background knowledge (IQ tests are designed to be minimally affected by extra background knowledge).
Yeah, either that, or the civilization goes kaput and it’s back to all-natural selection.
Think of this in terms of complexity (use your favorite measure). The point is that evolution has a much easier time reducing it than increasing it.
Many breeds of dogs are certainly very dim compared to wolves, but I’m not so sure that some aren’t just as intelligent, perhaps more so. It can be difficult to evaluate the relative intelligence of dogs and wolves, because some of the hallmarks by which we measure the most intelligent dogs (such as the complexity of tasks they can be trained to perform) do not apply to wolves because they’re so much less cooperative.
Considering the intellectual tasks the smarter breeds of dogs are capable of, though, I wouldn’t rule out the possibility of eugenic selection for intelligence relative to wolves in, e.g., border collies, standard poodles and such.
Wolves are under strong selection pressure as well, though.
Intelligence comparisons are of course tricky, but one could compare brain volumes as a proxy, and the comparison is not in favor of dogs.
Thing is, of the possible mutations within any gene (coding for a protein), the vast majority cause loss of its original function. This makes the speed of evolution dramatically dependent on the specific details of how the change is accomplished.
Brain volume isn’t necessarily a very good proxy, some animals are significantly smarter than other animals which have larger brains. Rats, for instance, may be more intelligent than some animals which are capable of eating rats, and have much larger brains due to greater body volume.
The vast majority of the difference between dogs and wolves isn’t due to mutation, but selective concentration of genes which already existed within the grey wolf gene pool.
As far as I remember, dogs are NOT domesticated wolves. Dogs and wolves have a common ancestor, but they diverged quite a while ago, possibly even before domestication. I vaguely recall that jackals were also somehow involved in dog ancestry.
The common ancestor of dogs and gray wolves, while perhaps having some differences with modern wolves, was still a gray wolf, and this is supported by the paper you linked below. While it’s true that modern gray wolves have less diversity than ancestral ones, what Desrtopa said is also correct.
I think this is incorrect; the most recent source I’ve read on the subject indicated that nearly the entire genetic diversity across all breeds of dogs is just a subset of the genetic diversity that already existed in grey wolves.
Wikipedia also supports the contention that dogs are extracted directly from grey wolves a few tens of thousands of years ago, too recently for them to have diverged from some meaningfully distinct common ancestor.
The success in developing tame silver foxes with only a few generations of selective breeding suggests that domestic traits can be bred into canines without additional mutation just by imposing selection effects to sort for genes already existing within their population.
This claims otherwise. Notably:
To identify genetic changes underlying dog domestication and reconstruct their early evolutionary history, we generated high-quality genome sequences from three gray wolves, one from each of the three putative centers of dog domestication, two basal dog lineages (Basenji and Dingo) and a golden jackal as an outgroup. Analysis of these sequences supports a demographic model in which dogs and wolves diverged through a dynamic process involving population bottlenecks in both lineages and post-divergence gene flow. In dogs, the domestication bottleneck involved at least a 16-fold reduction in population size, a much more severe bottleneck than estimated previously. A sharp bottleneck in wolves occurred soon after their divergence from dogs, implying that the pool of diversity from which dogs arose was substantially larger than represented by modern wolf populations. We narrow the plausible range for the date of initial dog domestication to an interval spanning 11–16 thousand years ago, predating the rise of agriculture.
If you truncate less of the tail, it takes more generations to move the mean, but I believe that by the time it moves the same distance, the variance shrinks less.
If you have a randomly mating population, apply assortative mating for a few generations, apply one generation of selection, and then let it mix randomly, it costs less variance for the same mean than if you don’t do assortative mating. That’s because assortative mating is a kind of selection, so this is like several generations of selection. If you start and end with an equilibrium of assortative mating, I’m not sure what happens. Also, assortative mating increases the variance, so you have to distinguish between the variance of the population and the variance of the population that would result if you switched to random mating.
I made a weak claim (all sites) to make it easier for you to see how that works within your own additive model. Of course, you don’t have to have plus alleles on all locations for a genius to be more common in the mixed population than in the original populations.
This would depend on the population sizes involved, number of locations, and overlap between locations.
I am not following how killing people who do poorly on a test does not evoke the evolution demon, eventually.
The average increased; that’s your evolution. If you let many generations pass, for mutations to happen and genetic diversity to be restored, you will get the variance back as well.
Assuming random mating, you’ll already get higher IQ kids in the next generation since people with exceptionally high IQ are more likely to mate.
What is the process by which you expect the mean to regress enough to leave you with a thinner upper tail than before UberHitler did his thing?