May be relevant, and seems to be consistent with your point: evolution has a speed limit and complexity bound.

Actually, that’s based on the mistaken belief that selection can provide only 1 bit of information per generation. If you’ll look down to the end of the original 2007 post, you’ll see I gave the correct (and now Eliezer-approved) formulation, which is:

If you take a population of organisms, and you divide it arbitrarily into 2 groups, and you show the 2 groups to God and ask, “Which one of these groups is, on average, more fit?”, and God tells you, then you have been given 1 bit of information.

But if you take a population of organisms, and ask God to divide it into 2 groups, one consisting of organisms of above-average fitness and one consisting of organisms of below-average fitness, that gives you a lot more than 1 bit. It takes n lg(n) bits to sort the population; subtracting out the information needed to sort each half, you gain n lg(n) − 2(n/2) lg(n/2) = n[lg(n) − lg(n/2)] = n lg(2) = n bits.

If you do tournament selection, you have n/2 tournaments, each of which gives you 1 bit, so you get n/2 bits per generation.

ADDED: This doesn’t immediately get you out of the problem, as n bits spread out among n genomes gives you 1 bit per genome. That doesn’t mean, though, that you’ve gained only 1 bit for the species as a whole. The more important observation in that summary is that organisms with more mutations are more likely to die, eliminating > 1 mutation per death on average.

This paragraph is more important:

“Although the actual Genome Project’s finding of 25,000 genes fits well under Yudkowsky’s attempted bound, the mathematical argument failed. A computer simulation failed to bear out the bound, and the flaw appears to have been as follows: even if one mutation creates one death, this does not mean that one death eliminates only a single mutation. Organisms bearing more deleterious mutations are more likely to lose the evolutionary competition, so each death can eliminate more mutations than average. If mating is random and the least fit organisms are perfectly eliminated in every generation, the information supportable in the genome goes as the inverse square of the mutation rate.”

That’s not exactly Eliezer-approved, because now the real problem is to tell what the conditions are more like in nature—Worden or MacKay or somewhere in between. That’s what I put up on the Wiki as a summary of the state of information. Mathematical assumptions are cheaper than empirical truths.
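As a sanity check on the counting argument (my own sketch, not part of the original exchange): the exact cost of specifying which n/2 organisms fall in the above-average half is log2(n choose n/2), which matches the Stirling-style estimate n lg(n) − 2(n/2) lg(n/2) = n bits up to an O(log n) correction.

```python
import math

def split_information_bits(n):
    """Bits to specify which n/2 of n organisms are above average:
    log2(n choose n/2). The thread's estimate n*lg(n) - 2*(n/2)*lg(n/2)
    gives exactly n; the exact count is n minus an O(log n) term."""
    return math.log2(math.comb(n, n // 2))

def tournament_bits(n):
    """n/2 pairwise tournaments, each revealing 1 bit (which organism won)."""
    return n // 2

for n in (8, 64, 1024):
    print(n, round(split_information_bits(n), 1), tournament_bits(n))
```

For n = 1024 the exact figure is about 1019 bits against the estimate of 1024, so the "n bits per generation, not 1" point survives the more careful count.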
Right—I just meant wrt “1 bit per generation regardless of population size”.
If this is a discussion of Worden’s paper, then you seem to have missed that he is not talking about information, but rather “Genetic Information in the Phenotype”—which is actually a completely different concept.
How so?
For instance:
“GIP is a measure of how much the observed values i in a large population tend to cluster on a few values; if there is no clustering, Gµ=0, and if there is complete clustering on one value, Gµ= log2(Nµ). It is a property of the population, not of an individual.”
http://dspace.dial.pipex.com/jcollie/sle/
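One way to read the quoted definition (my interpretation of the passage, not Worden's own code) is GIP = log2(N) minus the entropy of the observed value distribution: zero when the population spreads uniformly over all N possible values, log2(N) when it clusters completely on one.

```python
import math
from collections import Counter

def gip(values, n_possible):
    """Genetic Information in the Phenotype, read as
    G = log2(N) - H(empirical distribution of observed values).
    No clustering (uniform over N values) -> 0;
    complete clustering on one value -> log2(N)."""
    counts = Counter(values)
    total = len(values)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return math.log2(n_possible) - entropy

print(gip([0, 1, 2, 3] * 25, 4))  # uniform over N=4 values -> 0.0
print(gip([2] * 100, 4))          # all on one value -> log2(4) = 2.0
```

On this reading it is indeed a property of the population, not of any individual, which is what separates it from the per-genome information being argued about above.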
Ah. Sorry for not reading through the history, and thanks for the good explanation!
Worden? Essentially that’s a crock. See:
http://alife.co.uk/essays/no_speed_limit_for_evolution/