Conjuring An Evolution To Serve You
GreyThumb.blog offers an interesting analogy between research on animal breeding and the fall of Enron. Before 1995, the way animal breeding worked was that you would take the top individual performers in each generation and breed from them, or their parents. A cockerel doesn’t lay eggs, so you have to observe daughter hens to determine which cockerels to breed. Sounds logical, right? If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs.
Behold the awesome power of making evolution work for you! The power that made butterflies—now constrained to your own purposes! And it worked, too. Per-cow milk output in the US doubled between 1905 and 1965, and has doubled again since then.
Yet conjuring Azathoth oft has unintended consequences, as some researchers realized in the 1990s. In the real world, sometimes you have more than one animal per farm. You see the problem, right? If you don’t, you should probably think twice before trying to conjure an evolution to serve you—magic is not for the unparanoid.
Selecting the hen who lays the most eggs doesn’t necessarily get you the most efficient egg-laying metabolism. It may get you the most dominant hen, that pecked its way to the top of the pecking order at the expense of other hens. Individual selection doesn’t necessarily work to the benefit of the group, but a farm’s productivity is determined by group outputs.
Indeed, for some strange reason, the individual breeding programs which had been so successful at increasing egg production now required hens to have their beaks clipped, or be housed in individual cages, or they would peck each other to death.
While the conditions for group selection are only rarely right in Nature, one can readily impose genuine group selection in the laboratory. After only 6 generations of artificially imposed group selection—breeding from the hens in the best groups, rather than the best individual hens—average days of survival increased from 160 to 348, and egg mass per bird increased from 5.3 to 13.3 kg. At 58 weeks of age, the selected line had 20% mortality compared to the control group at 54%. A commercial line of hens, allowed to grow up with unclipped beaks, had 89% mortality at 58 weeks.
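To make the individual-versus-group distinction concrete, here is a minimal toy simulation in Python. Everything about it is invented for illustration (the trait model, the payoff numbers, the breeding scheme); it is not a reconstruction of the actual experiment. Each hen has a baseline laying ability and an aggressiveness that boosts her own egg count at a larger cost to her cagemates, and the two breeding regimes apply the same “most eggs” criterion to different vehicles:

```python
# Toy model of individual vs. group selection in caged hens.
# All parameters are invented for illustration; this is not the real experiment.
import random

random.seed(0)

GROUP_SIZE = 9        # hens per cage
N_GROUPS = 40         # cages per generation
GENERATIONS = 15
KEEP = 0.2            # fraction of hens (or cages) bred from each generation

def make_hen():
    # [baseline laying ability, aggressiveness] -- arbitrary units
    return [random.gauss(10.0, 1.0), max(0.0, random.gauss(1.0, 0.5))]

def offspring(parent):
    base, aggr = parent
    return [base + random.gauss(0, 0.3), max(0.0, aggr + random.gauss(0, 0.3))]

def eggs_in_cage(cage):
    """Eggs per hen: own baseline, a gain from own aggression (grabbing feed),
    and a larger loss from cagemates' aggression (pecking, stress)."""
    per_hen = []
    for i, (base, aggr) in enumerate(cage):
        others = [a for j, (_, a) in enumerate(cage) if j != i]
        per_hen.append(max(0.0, base + 2.0 * aggr - 3.0 * sum(others) / len(others)))
    return per_hen

def run(mode):
    hens = [make_hen() for _ in range(GROUP_SIZE * N_GROUPS)]
    for _ in range(GENERATIONS):
        random.shuffle(hens)
        cages = [hens[i:i + GROUP_SIZE] for i in range(0, len(hens), GROUP_SIZE)]
        per_hen = [eggs_in_cage(c) for c in cages]
        if mode == "individual":
            # breed from the top-laying hens, regardless of cage
            scored = [(e, h) for c, es in zip(cages, per_hen) for h, e in zip(c, es)]
            scored.sort(key=lambda t: t[0], reverse=True)
            parents = [h for _, h in scored[: int(KEEP * len(scored))]]
        else:
            # breed from every hen in the best-producing cages
            ranked = sorted(zip(cages, per_hen), key=lambda t: sum(t[1]), reverse=True)
            parents = [h for c, _ in ranked[: int(KEEP * len(ranked))] for h in c]
        hens = [offspring(random.choice(parents)) for _ in range(GROUP_SIZE * N_GROUPS)]

    cages = [hens[i:i + GROUP_SIZE] for i in range(0, len(hens), GROUP_SIZE)]
    total = sum(sum(eggs_in_cage(c)) for c in cages)
    mean_aggr = sum(h[1] for h in hens) / len(hens)
    print(f"{mode:>10} selection: farm output {total:7.1f}, mean aggression {mean_aggr:.2f}")

run("individual")
run("group")
```

The two runs share the same fitness measure (eggs laid); the only thing that changes is which unit gets ranked and bred from, and that one change is enough to drive aggression in opposite directions.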
And the fall of Enron? Jeff Skilling fancied himself an evolution-conjurer, it seems. (Not that he, like, knew any evolutionary math or anything.) Every year, every Enron employee’s performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses. Unfortunately, as GreyThumb points out:
“Everyone knows that there are many things you can do in any corporate environment to give the appearance and impression of being productive. Enron’s corporate environment was particularly conducive to this: its principal business was energy trading, and it had large densely populated trading floors peopled by high-powered traders that would sit and play the markets all day. There were, I’m sure, many things that a trader could do to up his performance numbers, either by cheating or by gaming the system. This gaming of the system probably included gaming his fellow traders, many of whom were close enough to rub elbows with.
“So Enron was applying selection at the individual level according to metrics like individual trading performance to a group system whose performance was, like the henhouses, an emergent property of group dynamics as well as a result of individual fitness. The result was more or less the same. Instead of increasing overall productivity, they got mean chickens and actual productivity declined. They were selecting for traits like aggressiveness, sociopathic tendencies, and dishonesty.”
And the moral of the story is: Be careful when you set forth to conjure the blind idiot god. People look at a pretty butterfly (note selectivity) and think: “Evolution designed them—how pretty—I should get evolution to do things for me, too!” But this is qualitative reasoning, as if evolution were either present or absent. Applying 10% selection for 10 generations is not going to get you the same amount of cumulative selection pressure as 3.85 billion years of natural selection.
I have previously emphasized that the evolution-of-foxes works at cross-purposes to the evolution-of-rabbits; there is no unitary Evolution God to praise for every beauty of Nature. Azathoth has ten million hands. When you conjure, you don’t get the evolution, the Maker of Butterflies. You get an evolution, with characteristics and strength that depend on your exact conjuration. If you just take everything you see in Nature and attribute it to “evolution”, you’ll start thinking that some cute little conjuration which runs for 20 generations will get you artifacts on the order of butterflies. Try 3.85 billion years.
Same caveat with the wonders of simulated evolution on computers, producing a radio antenna better than a human design, etcetera. These are sometimes human-competitive (more often not) when it comes to optimizing a continuous design over 57 performance criteria, or breeding a design with 57 elements. Anything beyond that, and modern evolutionary algorithms are defeated by the same exponential explosion that consumes the rest of AI. Yes, evolutionary algorithms have a legitimate place in AI. Consult a machine-learning expert, who knows when to use them and when not to. Even biologically inspired genetic algorithms with sexual mixing rarely perform better than beam searches and other non-biologically-inspired techniques on the same problem.
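For anyone who wants to poke at that claim, here is a small sketch in Python comparing a bare-bones genetic algorithm with sexual mixing (single-point crossover) against a plain greedy hill climber on a made-up 57-element design problem, where fitness simply counts correctly chosen elements. The problem, the parameters, and both implementations are deliberately minimal and untuned; treat it as an illustration of the kind of comparison being described, not a benchmark.

```python
# Toy comparison: a genetic algorithm with crossover vs. greedy hill climbing
# on a 57-element binary "design". Everything here is illustrative only.
import random

random.seed(1)
N = 57
TARGET = [random.randint(0, 1) for _ in range(N)]   # the "right" design, hidden from the search

def fitness(design):
    # number of elements that match the target design
    return sum(d == t for d, t in zip(design, TARGET))

def hill_climb(evaluations=5000):
    best = [random.randint(0, 1) for _ in range(N)]
    best_f = fitness(best)
    for _ in range(evaluations):
        candidate = best[:]
        candidate[random.randrange(N)] ^= 1         # flip one element
        f = fitness(candidate)
        if f >= best_f:
            best, best_f = candidate, f
    return best_f

def genetic_algorithm(pop_size=50, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)            # single-point crossover ("sexual mixing")
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit for bit in child]
            children.append(child)
        population = children
    return max(fitness(ind) for ind in population)

print("hill climber     :", hill_climb(), "/", N)
print("genetic algorithm:", genetic_algorithm(), "/", N)
```

Both searches get a similar budget of roughly 5,000 fitness evaluations; at this scale either one solves the problem easily, and the evolutionary machinery buys little that the simpler search does not already provide.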
And for this weakness, let us all be thankful. If the blind idiot god did not take a million years in which to do anything complicated, It would be bloody scary. 3.85 billion years of natural selection produced molecular nanotechnology (cells) and Artificial General Intelligence (brains), which even we humans aren’t going to get for a few more decades. If there were an alien demideity, morality-and-aesthetics-free, often blindly suicidal, capable of wielding nanotech and AGI in real time, I’d put aside all other concerns and figure out how to kill it. Assuming that I hadn’t already been enslaved beyond all desire of escape. Look at the trouble we’re having with bacteria, which go through generations fast enough that their evolutions are learning to evade our antibiotics after only a few decades’ respite.
You really don’t want to conjure Azathoth at full power. You really, really don’t. You’ll get more than pretty butterflies.
There are lots of examples of unexpected selective outcomes.
A story—a long time ago, a Swedish researcher tried to increase wheat yields by picking the biggest wheat kernels to plant. In only 5 generations he had a strain of wheat that produced 6 giant wheat kernels per stalk.
When scale insects were damaging citrus fruits, farmers tried to poison them with cyanide. They’d put a giant tent over the whole tree and pump in the cyanide and kill the scale insects. Plants can be immune to cyanide but no animal that depends on respiration can be. And yet in only 5 years or so they got resistant scale insects. The resistant insects would—when anything startling happened—sit very still and hold their breath for half an hour or so.
If you want to do directed evolution, you do better to do it in controlled conditions. Take your results and test them carefully and make sure they’re what you want before you release them. Microbiologists who want mutants for research commonly take 20 or 100 mutants who survive the conditions they’re selected to survive, and test until they get a few that appear to be just what they want. Eliminate the rest.
So, for example, to find a mutant that has a high mutation rate—start with a strain of bacteria that has at least 4 selectable traits. Say they don’t survive without threonine, don’t survive without isoleucine/valine, don’t survive penicillin, and don’t survive rifampicin. So you grow up a hundred billion or so of them and then you centrifuge them down and resuspend them in medium that doesn’t have threonine. Most of them die. Wait for the survivors to grow, and then centrifuge them down and resuspend them in medium that doesn’t have isoleucine/valine. Most of them die. Wait for the survivors to grow, and centrifuge them down and resuspend them in medium that has penicillin. Do it a fourth time with rifampicin.
Then plate them out on media that has lactose (when the originals couldn’t use lactose). Some of the colonies will be large and some small; pick a colony that has lots of little warts of bigger growth, because it keeps producing lactose-using mutants even while the colony is growing.
A strain that has a hundred times the normal mutation rate can easily be selected this way. It started out at a frequency around 10^-8. After the first selection cycle it was at a frequency around 10^-6. By the fourth round it was common. Sometimes you can get a mutation rate around 1000 times the normal rate; much above that and it doesn’t survive well.
Take one colony per try because you don’t want to test multiple colonies and then find out they’re the same mutation over again.
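The enrichment arithmetic in that protocol is worth making explicit. Here is a back-of-the-envelope sketch in Python, under the simplifying assumption (mine, not the commenter’s) that a mutator cell is about 100 times more likely than a normal cell to already carry whichever mutation each round demands:

```python
# Back-of-the-envelope enrichment of a mutator strain over selection rounds.
# Simplifying assumption: a mutator is ~100x more likely than a normal cell
# to carry the particular mutation each round selects for.
MUTATOR_ADVANTAGE = 100.0
freq = 1e-8                        # starting frequency of mutator cells

rounds = ["no threonine", "no isoleucine/valine", "penicillin", "rifampicin"]
for name in rounds:
    odds = freq / (1.0 - freq) * MUTATOR_ADVANTAGE   # odds shift by the survival advantage
    freq = odds / (1.0 + odds)
    print(f"after {name:>22} round: mutator frequency ~ {freq:.1e}")
```

With those assumed numbers the frequency goes from roughly 10^-8 to 10^-6 after the first round and to about half the population by the fourth, matching the ballpark figures given above.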
The examples seem to demonstrate the weaknesses of selective breeding rather than evolution. Human intent and imperfect knowledge appear to be poor substitutes for the blind, mindless processes of nature.
Hmmmm...
It’s worth remembering that the chicken experiment was specifically designed to elicit that effect, and chickens are unusual in being confined to extremely small cages with other chickens. That doesn’t happen with cows or apples or wheat or… As far as I know, animal/plant breeders typically totally ignore such indirect genetic effects/group-level effects (or even model them away, absorbing them into fixed/random effects), along with ignoring apparently vital stuff like epistasis/dominance, and yet the dumb simple selection methods based on additivity work fine and still realize all the improvements they are supposed to. Yields go up reliably every year.
Eliezer_Yudkowsky: I’m not sure I see the relevance of evolutionary theory to Enron. According to the characterization you quoted, the problem was that the stakes were so high that people cheated. How do evolution’s insights help me see that? That mishap can be explained through poor incentive alignment: what was optimal behavior for a trader was not regarded by Enron as optimal behavior. The disutility to Enron of “false profits” was not reflected in an individual trader’s utility curve.
So Skilling picked a bad incentive structure. Does everyone who picks a bad incentive structure fancy himself an evolution conjurer?
If one thinks of evolution as the process of deriving “better” results through a selection criterion and a change process, then yes, Skilling was conjuring evolution, though he did not realize it. He established a selection criterion (individual performance numbers) and the employees themselves provided the change process. As he repeatedly selected against the weakest performers (according to his insufficiently rational criterion), the employees changed through whatever they found to be the easiest way to achieve “better performance”. The company evolved as the employees changed their behavior.
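To see how that selection-plus-change loop plays out, here is a deliberately crude Python sketch of yearly rank-and-yank. All of the numbers, and the assumption that gaming is easier to copy than real skill, are invented for illustration; the point is only that a metric which cannot distinguish gaming from output will breed gaming into the population it selects on:

```python
# Crude sketch of rank-and-yank selection on a gameable performance metric.
# All parameters are invented; only the qualitative dynamic is the point.
import random

random.seed(2)
N_EMPLOYEES = 200
YEARS = 15
CUT = 0.10                      # bottom 10% by measured performance replaced each year

def new_employee():
    # [true productivity, metric gaming] -- arbitrary units
    return [random.gauss(1.0, 0.2), max(0.0, random.gauss(0.1, 0.05))]

def measured(emp):
    # the evaluation counts gaming as if it were real output
    true_out, gaming = emp
    return true_out + gaming + random.gauss(0, 0.1)

def firm_results(staff):
    # gaming carries a hidden cost that the metric never records
    return sum(t - 0.5 * g for t, g in staff)

staff = [new_employee() for _ in range(N_EMPLOYEES)]
for year in range(1, YEARS + 1):
    staff.sort(key=measured, reverse=True)
    n_cut = int(CUT * N_EMPLOYEES)
    survivors = staff[:-n_cut]
    # replacements imitate the apparent stars: real skill is assumed hard to
    # copy (pulled back toward average), gaming tricks easy to copy and escalate
    replacements = []
    for model_true, model_gaming in random.choices(staff[:n_cut], k=n_cut):
        replacements.append([1.0 + 0.3 * (model_true - 1.0) + random.gauss(0, 0.1),
                             max(0.0, model_gaming + 0.05 + random.gauss(0, 0.02))])
    staff = survivors + replacements
    if year % 5 == 0:
        mean_gaming = sum(g for _, g in staff) / len(staff)
        print(f"year {year:2d}: firm results {firm_results(staff):6.1f}, "
              f"mean gaming {mean_gaming:.2f}")
```

Whether the bottom line rises or falls depends on the invented cost numbers; under any setting where the metric rewards gaming, though, the gaming trait ratchets upward, which is the “mean chickens” effect described in the quote above.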
Skilling was selecting badly. The 10% he discarded each year might have included some he should have kept, and vice versa.
Similarly, God at one point said he was going to get rid of evil people and keep good people and so people would get better. I don’t see much evidence that’s worked well.
Evolution happens, but if you want to harness it for your own goals you have to be very careful. Try to arrange it so you can throw away your mistakes.
Does that mean that the singularity is at least a few decades away?
I sincerely hope so. Look at the progress we’ve made since you wrote this comment. We need to make that much progress several times over before we’re ready to actually start trying to build the things, unless we fancy dying (or worse).
Has anyone tried breeding the smartest nonhuman primates (chimps, bonobos?) for intelligence? If not, what could one expect to achieve by doing this for 10 generations? To what extent are the genes for intelligence additive? That is, if there are multiple distinct genes that increase intelligence via distinct mechanisms, does having all these genes give you the sum of the intelligence boosts of having these genes individually?
“You really don’t want to conjure Azathoth at full power. You really, really don’t. You’ll get more than pretty butterflies.”
How confident are you that there aren’t any mad scientists reading OB, looking for the perfect tool to do something randomly destructive?
Tom_McCabe: If you’re worried about mad-scientist OB readers obtaining the tools for random destruction from this site, you’re too late. Robin_Hanson already gave them the perfect idea.
On the Enron point:
An article in today’s NY Times claims that a major danger to investment banks is the empowerment and growth of whichever division happens to be benefiting from transitory financial cycles. Since members of a particular department have specialized skills that are less valuable in other areas, they tend to be biased in favor of excessive investment of resources in their areas. If the mortgage market booms for too long you will wind up with a high frequency of mortgage people in the executive corps and reduced ability to cut loose if risks appear dangerously high for the firm.
Supposedly, part of Goldman Sachs’ chart-topping success during the recent credit crunch (although it has been very successful for more or less its entire history) comes from the creation of a powerful independent institution with veto powers and less bias towards particular investment classes.
http://www.nytimes.com/2007/11/19/business/19goldman.html?pagewanted=2&ei=5087&em&en=a3db4f1df6a297ef&ex=1195621200 “At Goldman, the controller’s office—the group responsible for valuing the firm’s huge positions—has 1,100 people including 20 PhDs. If there is a dispute, the controller is always deemed right unless the trading desk can make a convincing case for an alternate valuation. The bank says risk managers swap jobs with traders and bankers over a career and can be paid the same multimillion-dollar salaries as investment bankers.”
Let me answer a slightly different question: how confident are you that the benefits of publicizing the destructive potential of genetic algorithms outweigh the risks?
I am pretty confident that people setting out intentionally to do destruction on the scale addressed here are rare compared to people who do large-scale destruction as an unintentional side effect of trying to do good or at least ethically neutral things. Most evil is done by people who believe themselves to be good and who believe their net-evil deeds are net-good or net-neutral.
People of course differ in their definition of the good, but almost everyone capable of affecting them agrees that certain outcomes (e.g. toasting the planet) are evil.
Put more simply, in artificial evolution you get exactly what the fitness function you’ve written asks for, even when you don’t know what it’s actually asking for.
“Put more simply, in artificial evolution you get exactly what the fitness function you’ve written asks for”
You don’t even necessarily get that. The animal breeders thought they were asking for more eggs. They did get some eggs, but with side effects, and not nearly as many eggs as they could have gotten, if they’d used a different breeding format with the same fitness function: fitness=eggs, vehicle=group instead of fitness=eggs, vehicle=individual.
They could well have gotten fewer eggs by breeding for eggs, as Enron did, if the chickens had discovered enough negative-sum tricks, as Enron did.
Fantastic post!! This certainly applies to the HGP and GMOs. An excellent page about this: http://www.psrast.org/strohmnewgen.htm Eliezer, there are many things I’m sure we disagree about. But, we must not allow ourselves to become enslaved beyond all desire of escape. Is it OK if I love you for that sentiment?
“Let me answer a slightly different question: how confident are you that the benefits of publicizing the destructive potential of genetic algorithms outweigh the risks?”
I am quite confident of that. I wanted to know how seriously everyone else had considered the risk.
Genetic algorithms have potential, period. It’s human beings that will cause that potential to be used unwisely. Publicizing the power of the algorithm might -might- cause enough people to be wary of what they can do for balancing principles to come into play. Trying to limit the knowledge will eventually be more harmful than not.
Tom, I don’t take the risk seriously. Richard Hollerith said well why. I don’t think people who just want to do something randomly destructive are good at the kind of long-range planning and collaboration needed to be seriously threatening, and if they were, they wouldn’t need us to give them extremely general ideas.
I was reminded of something Michael Vassar said on SL4 (emphasis mine):
Most of the truly frightening possibilities are simply too unlikely. To produce them from a genetic algorithm, you’d have to expend massive amounts of resources creating a specific environment that would select for the traits you desired—no mean feat. Imagine what it would take to develop a hypervirulent and extremely lethal plague through an algorithm on purpose. The requirements are crushing.
Making people more aware of such algorithms, and their potential, might force a shift in the way food animals are dealt with so that they don’t act as an optimization procedure for plagues.
As an evolutionary biologist with an interest in practical applications to agriculture and to human longevity, I think your emphasis on the slow pace of evolution is misplaced. It took most of life’s 3.85 billion year history to evolve multicellularity, but that slowness seems to mainly reflect lack of selection for multicellularity over most of that period. With strong selection, primitive multicellularity can evolve quickly under lab conditions (Boraas, M. E. 1998, “Phagotrophy by a flagellate selects for colonial prey: A possible origin of multicellularity” and current work in my lab).
Your point about individual vs. group selection is correct and important, though. Individual selection, like free-market competition, is an effective way of making certain kinds of improvements. But some form of group selection (the chicken example, or small-plot trials in plant breeding) is often key to improvements missed by individual-based natural selection. See my 2003 review article and forthcoming book on Darwinian Agriculture.
One more example proving the Mythos rule that one should learn “Bind Azathoth” before casting “Summon Azathoth”.
:)
Invocation of hypothetical (often ideologically linked) expectations of the future as if they were deterministic processes is rampant.
(This post could be read as a predecessor to the Immoral Mazes sequence.)
It seems Enron did what corporations normally do, just faster. If I remember correctly, the percentage of psychopaths among corporate managers is around six times the normal rate.
Extremely cool evolution experiment where E. coli bacteria evolve to eat citrate along with many other interesting happenings.