Hangman as analogy for Natural Selection
Hi guys,
I was trying to come up with a helpful analogy to help explain natural selection in simple terms and it occurred to me that the game Hangman might make a useful analogy, albeit an imperfect and simplified one. I’d be interested to hear your thoughts on this and any other useful analogies or strategies for explaining in simple terms how natural selection allows complexity to arise from simplicity and how it is distinct from random chance.
The Hangman analogy I propose would read as follows:
A long word is chosen, say with a dozen letters, and a dozen blanks are drawn on the paper. Person A then guesses a letter. If the letter is present in the word, a blank is filled in and the player can try another letter, and so on. Their further guesses will be informed by the letters they have already discovered rather than being completely random. If the letter is not present, the player loses a life (represented by the drawing of part of the gallows). If they run out of lives the game is over and a new player, Person B, takes their place. Person B must start from the beginning.
In this analogy the long word is a complex adaptation, requiring many separate chance mutations to build it. Each guessed letter is a chance mutation that can be beneficial (correct answers bring you closer) or detrimental (wrong ones cost you lives). The loss of all lives represents the extinction of the species, meaning no further mutations can occur. Person B is an entirely different species that can’t “compare notes” with Person A and hence must start from the beginning (though they may take a different route).
The benefit of this analogy is that it gives an example of random guesses still having a sense of forward progression (discovered letters are not removed, and gradually build up), and that it refers to a simple game I think most people will be familiar with. You could then go on to explain how a complex adaptation takes many more than a dozen steps, that there are many more than 26 possible mutations, and that each guess takes many generations, to give a sense of the timescales involved.
The weaknesses are considerable and include the inability to go backwards (beneficial changes can be lost as well as gained) and the existence of a single specific end goal (the unknown word), rather than this being a continual process without set targets. It also ignores the possibility that a beneficial mutation does not spread throughout the species.
I very much doubt this is an original suggestion, but it seemed a handy simplification of the “password-guessing” analogy I was just reading about in Dawkins’ “The Blind Watchmaker”. Any comments or alternative methods would be welcome (I’m still not very widely read on the subject of evolution so I’m sure others have put it more clearly than I could).
Thanks for your time.
David
My concern is that this analogy makes it sound too much as though evolution has a preset goal that the traits are aiming at (which is an error that’s already pretty pervasive).
So I would either explicitly address that problem when you’re pulling out this analogy, or tweak it to make it sound less telos-y. (Maybe you’re playing hangman in French, or some other language you’re not fluent in and you’re just trying to make any real word of that length? But the more letters you lock in, the fewer possible answers are available?)
I only have a layperson’s knowledge of evolutionary biology, so my criticisms might miss some important subtlety, but it seems to me that your analogy is significantly misleading in a couple of ways. It does convey the idea that random guessing with incremental feedback is a better search strategy than if the feedback were holistic (e.g. if you were guessing whole words and the only feedback were whether the guess is correct or not). In so far as someone’s worry about natural selection is that they’re mistaking it for the latter sort of search, the analogy may be helpful. But if you want to convey something more specific about how natural selection works, then I’m afraid the analogy isn’t all that great.
One drawback of the analogy is in the nature of the environmental feedback. In Hangman, a letter gets fixed if (and only if) it is part of the correct answer. In genuine natural selection, though, a mutation doesn’t get fixed because it is part of a complex set of mutations that collectively confer some phenotypic benefit. The environment isn’t forward-looking like that; it doesn’t say “This mutation is part of what is needed for optimality, so I’m going to hold onto it for that reason.” Each individual mutation, in order to get fixed in the population, must confer some immediate reproductive benefit. Merely being one element of some complex group of mutations that is collectively beneficial is insufficient. The hangman analogy doesn’t capture this aspect of natural selection.
This actually leads the analogy to kind of play into the hands of “irreducible complexity” critiques of natural selection. The proponents of such critiques presume that the individual parts of some complex adaptation only benefit the organism to the extent that they are part of that complex adaptation, and hence one cannot explain their selection without supposing that there is some forward-looking element to selection which holds onto those individual changes just because they will eventually contribute to a complex adaptation. This forward-looking aspect is then offered as evidence of intelligent design.
Another big drawback is that the analogy doesn’t capture the competitive nature of natural selection. Natural selection occurs in populations, and requires both variation in traits among individuals in the population and competition for resources among those individuals. The Hangman analogy suggests that the environment already has a fixed template for the ideal phenotype and that it punishes organisms (or genes) individually for failing to approach this ideal and rewards them for getting closer to the ideal. If you have a population, and things worked in the Hangman way, there would be no correlation between rewards and punishments. But that’s not how natural selection works. Genes are rewarded for contributing to their vehicles (organisms) being more reproductively successful than other organisms in the population. A reward just consists in reproducing more than your competitors, and a punishment just consists in reproducing less, so rewards and punishments are correlated. One allele can’t get rewarded without another one getting punished.
The ‘irreducible complexity’ argument advocated by the intelligent design community often cites the specific example of the eye. It is argued that an eye is a complex organ with many different individual parts that all must work together perfectly, and that this implies it could not have been built gradually out of small random changes.
This argument has been around a long time, but it has been well answered within the scientific literature, and the vast majority of biologists consider the issue settled.
Dawkins’ book ‘Climbing mount improbable’ provides a summary of the science for the lay reader and uses the eye as a detailed example.
Darwin was the first to explain how the eye could have evolved via natural selection. I quote the Wikipedia article:
The argument of ‘irreducible complexity’ has been around since Darwin first proposed natural selection, and it has been conclusively answered within the scientific literature (for a good summary see the Wikipedia article). Those who believe that all life was created by God cannot accept the scientific explanation. In my view the real problem is that they tend to argue that they have superior scientific evidence which proves that the scientific consensus is wrong. In other words, the intelligent design community argues it is scientifically superior to the science community. This reduces their position to an undignified one of deception or perhaps even fraud.
Wait, did you interpret my comment as supporting the “irreducible complexity” argument? My whole point was that it is a bad argument. I was criticizing the Hangman analogy because it seems to invite the same sort of mistake that the “irreducible complexity” people make.
Yes on re-reading I see what you are saying.
Thanks for the feedback. I think you’re right that a key omission here is failing to note that each step must be useful in itself, and provide a non-negligible boost to chances of survival on its own. It also implies a greater sense of purpose than exists in nature (there’s no mind aiming for things, just more resilient creatures surviving).
I realise the model has many flaws and omits wider context such as competition, but I’m still tempted by the appeal of using such a common situation as the analogy. Talk of guessing passwords or rolling dice does make excellent analogies, but if you want to engage someone it helps to talk about something closer to their personal experience, and I imagine most people played hangman on a board or margin at some point at school.
On a similar subject, the boardgame Guess Who is a perfect illustration of the point in Burdensome Details. Each additional claim about Person X (do they wear glasses? are they blond?) leads you to knock down some possibilities.
I was also inspired by one of Dawkins’ books suggesting something similar. It was some years ago but I believe Dawkins suggested writing a type of computer script which would mimic natural selection. I wrote a script and was quite surprised at the power it demonstrated.
As I remember, the general idea is that you can type in any string of characters you like and then click the ‘evolve’ button. The computer program then:
1) generates and displays a string of random characters of the same length as the entered string.
2) compares the current string with the entered string and retains all characters that are the same and in the same position.
3) generates random characters at the positions that did not match in 2 and displays the full string.
4) if the string in 3 matches the string entered by the user, the program stops; otherwise it goes to step 2.
The rapidity with which this program converges on the entered string is quite surprising.
This simulation is somewhat different from natural selection especially in that the selection rules are hard coded but I think it does demonstrate the power of random changes to converge when there is strong selection pressure.
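A minimal Python sketch of the procedure as described above (it locks in matching letters each generation, as in the steps listed here; Dawkins’ actual program selects among mutated offspring, so this is a simplification, and the target string and alphabet are just illustrative):

```python
import random
import string

def evolve(target, alphabet=string.ascii_uppercase + " "):
    """Converge on `target` by re-randomizing only the positions
    that don't yet match (steps 1-4 as described above)."""
    # step 1: start from a random string of the same length
    current = [random.choice(alphabet) for _ in target]
    generations = 0
    while True:
        # step 2: find positions that don't yet match the entered string
        mismatched = [i for i, c in enumerate(current) if c != target[i]]
        if not mismatched:
            # step 4: everything matches, so stop
            return generations
        # step 3: generate new random characters only at mismatched positions
        for i in mismatched:
            current[i] = random.choice(alphabet)
        generations += 1

print(evolve("METHINKS IT IS LIKE A WEASEL"))  # number of generations taken
```

Even with a 28-character target, this typically finishes in on the order of a hundred generations, whereas guessing the whole string at once would take longer than the age of the universe.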
A fascinating aid in demonstrating natural selection was built by Darwin’s cousin Francis Galton in 1877. An illustration and description can be found here. The amazing thing about this device is that, as described in the article, it has been re-discovered and re-purposed to illustrate the process of Bayesian inference.
I have come to consider this isomorphism between Bayesian inference and natural selection, or Darwinian processes in general, as a deep insight into the workings of nature. I view natural selection as a method of physically performing Bayesian inference, specifically as a method for inferring means for reproductive success. My paper on this subject may be found here.
That’s a whole class of optimizers. See e.g. here.
You might like this. (“In fact, I realized, Bayes’s rule just is the discrete-time replicator equation, with different hypotheses being so many different replicators, and the fitness function being the conditional likelihood.”)
Yes, thanks. And the standard mathematical description of the change in frequency of alleles over generations is given in the form of a Bayesian update, where the likelihood is the ratio of the reproductive fitness of the particular allele to the average reproductive fitness of all competing alleles at that locus.
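The update described here can be written out in a few lines. Each allele’s frequency is multiplied by its fitness and divided by the mean fitness, which has exactly the shape of Bayes’ rule (prior × likelihood ÷ evidence). The frequencies and fitness values below are purely illustrative:

```python
# One generation of selection as a Bayesian update:
# p_i' = p_i * w_i / w_bar   (replicator equation)
# posterior = prior * likelihood / evidence   (Bayes' rule)

freqs = [0.5, 0.3, 0.2]      # allele frequencies (the "prior")
fitness = [1.2, 1.0, 0.8]    # reproductive fitness of each allele (the "likelihood")

# mean fitness of the population plays the role of the "evidence"
w_bar = sum(p * w for p, w in zip(freqs, fitness))

# frequencies after one generation of selection (the "posterior")
next_freqs = [p * w / w_bar for p, w in zip(freqs, fitness)]
print(next_freqs)
```

The fitter-than-average allele gains frequency and the less fit one loses it, and the updated frequencies still sum to one, just as a posterior distribution does.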
Related: rolling many dice and getting all sixes
http://www.creationtheory.org/Probability/Page03.xhtml