The Baby-Eating Aliens (1/8)
(Part 1 of 8 in “Three Worlds Collide”)
This is a story of an impossible outcome, where AI never worked, molecular nanotechnology never worked, biotechnology only sort-of worked; and yet somehow humanity not only survived, but discovered a way to travel Faster-Than-Light: The past’s Future.
Ships travel through the Alderson starlines, wormholes that appear near stars. The starline network is dense and unpredictable: more than a billion starlines lead away from Sol, but every world explored is so far away as to be outside the range of Earth’s telescopes. Most colony worlds are located only a single jump away from Earth, which remains the center of the human universe.
From the colony system Huygens, the crew of the Giant Science Vessel Impossible Possible World have set out to investigate a starline that flared up with an unprecedented flux of Alderson force before subsiding. Arriving, the Impossible discovers the sparkling debris of a recent nova—and—
“ALIENS!”
Every head swung toward the Sensory console. But after that one cryptic outburst, the Lady Sensory didn’t even look up from her console: her fingers were frantically twitching commands.
There was a strange moment of silence in the Command Conference while every listener thought the same two thoughts in rapid succession:
Is she nuts? You can’t just say “Aliens!”, leave it at that, and expect everyone to believe you. Extraordinary claims require extraordinary evidence -
And then,
They came to look at the nova too!
In a situation like this, it falls to the Conference Chair to speak first.
“What? SHIT!” shouted Akon, who didn’t realize until later that his words would be inscribed for all time in the annals of history. Akon swung around and looked frantically at the main display of the Command Conference. “Where are they?”
The Lady Sensory looked up from her console, fingers still twitching. “I—I don’t know, I just picked up an incoming high-frequency signal—they’re sending us enormous amounts of data, petabytes, I had to clear long-term memory and set up an automatic pipe or risk losing the whole—”
“Found them!” shouted the Lord Programmer. “I searched through our Greater Archive and turned up a program to look for anomalous energy sources near local starlines. It’s from way back from the first days of exploration, but I managed to find an emulation program for—”
“Just show it!” Akon took a deep breath, trying to calm himself.
The main display swiftly scanned across fiery space and settled on… a set of windows into fire, the fire of space shattered by the nova, but then shattered again into triangular shards.
It took Akon a moment to realize that he was looking at an icosahedron of perfect mirrors.
Huh, thought Akon, they’re lower-tech than us. Their own ship, the Impossible, was absorbing the vast quantities of local radiation and dumping it into their Alderson reactor; the mirror-shielding seemed a distinctly inferior solution. Unless that’s what they want us to think...
“Deflectors!” shouted the Lord Pilot suddenly. “Should I put up deflectors?”
“Deflectors?” said Akon, startled.
The Pilot spoke very rapidly. “Sir, we use a self-sustaining Alderson reaction to power our starline jumps and our absorbing shields. That same reaction could be used to emit a directed beam that would snuff a similar reaction—the aliens are putting out their own Alderson emissions, they could snuff our absorbers at any time, and the nova ashes would roast us instantly—unless I configure a deflector—”
The Ship’s Confessor spoke, then. “Have the aliens put up deflectors of their own?”
Akon’s mind seemed to be moving very slowly, and yet the essential thoughts felt, somehow, obvious. “Pilot, set up the deflector program but don’t activate it until I give the word. Sensory, drop everything else and tell me whether the aliens have put up their own deflectors.”
Sensory looked up. Her fingers twitched only briefly through a few short commands. Then, “No,” she said.
“Then I think,” Akon said, though his spine felt frozen solid, “that we should not be the first to put this interaction on a… combative footing. The aliens have made a gesture of goodwill by leaving themselves vulnerable. We must reciprocate.” Surely, no species would advance far enough to colonize space without understanding the logic of the Prisoner’s Dilemma...
“You assume too much,” said the Ship’s Confessor. “They are aliens.”
“Not much goodwill,” said the Pilot. His fingers were twitching, not commands, but almost-commands, subvocal thoughts. “The aliens’ Alderson reaction is weaker than ours by an order of magnitude. We could break any shield they could put up. Unless they struck first. If they leave their deflectors down, they lose nothing, but they invite us to leave our own down—”
“If they were going to strike first,” Akon said, “they could have struck before we even knew they were here. But instead they spoke.” Surely, oh surely, they understand the Prisoner’s Dilemma.
“Maybe they hope to gain information and then kill us,” said the Pilot. “We have technology they want. That enormous message—the only way we could send them an equivalent amount of data would be by dumping our entire Local Archive. They may be hoping that we feel the emotional need to, as you put it, reciprocate—”
“Hold on,” said the Lord Programmer suddenly. “I may have managed to translate their language.”
You could have heard a pin dropping from ten lightyears away.
The Lord Programmer smiled, ever so slightly. “You see, that enormous dump of data they sent us—I think that was their Local Archive, or equivalent. A sizable part of their Net, anyway. Their text, image, and holo formats are utterly straightforward—either they don’t bother compressing anything, or they decompressed it all for us before they sent it. And here’s the thing: back in the Dawn era, when there were multiple human languages, there was this notion that people had of statistical language translation. Now, the classic method used a known corpus of human-translated text. But there were successor methods that tried to extend the translation further, by generating semantic skeletons and trying to map the skeletons themselves onto one another. And there are also ways of automatically looking for similarity between images or holos. Believe it or not, there was a program already in the Archive for trying to find points of linkage between an alien corpus and a human corpus, and then working out from there to map semantic skeletons… and it runs quickly, since it’s designed to work on older computer systems. So I ran the program, it finished, and it’s claiming that it can translate the alien language with 70% confidence. Could be a total bug, of course. But the aliens sent a second message that followed their main data dump—short, looks like text-only. Should I run the translator on that, and put the results on the main display?”
Akon stared at the Lord Programmer, absorbing this, and finally said, “Yes.”
“All right,” said the Lord Programmer, “here goes machine learning,” and his fingers twitched once.
Over the icosahedron of fractured fire, translucent letters appeared:
THIS VESSEL IS THE OPTIMISM OF THE CENTER OF THE VESSEL PERSON
YOU HAVE NOT KICKED US
THEREFORE YOU EAT BABIES
WHAT IS OURS IS YOURS, WHAT IS YOURS IS OURS
“Stop that laughing,” Akon said absentmindedly, “it’s distracting.” The Conference Chair pinched the bridge of his nose. “All right. That doesn’t seem completely random. The first line… is them identifying their ship, maybe. Then the second line says that we haven’t opened fire on them, or that they won’t open fire on us—something like that. The third line, I have absolutely no idea. The fourth… is offering some kind of reciprocal trade—” Akon stopped then. So did the laughter.
“Would you like to send a return message?” said the Lord Programmer.
Everyone looked at him. Then everyone looked at Akon.
Akon thought about that very carefully. Total silence for a lengthy period of time might not be construed as friendly by a race that had just talked at them for petabytes.
“All right,” Akon said. He cleared his throat. “We are still trying to understand your language. We do not understand well. We are trying to translate. We may not translate correctly. These words may not say what we want them to say. Please do not be offended. This is the research vessel named quote Impossible Possible World unquote. We are pleased to meet you. We will assemble data for transmission to you, but do not have it ready.” Akon paused. “Send them that. If you can make your program translate it three different plausible ways, do that too—it may make it clearer that we’re working from an automatic program.”
The Lord Programmer twitched a few more times, then spoke to the Lady Sensory. “Ready.”
“Are you really sure this is a good idea?” said Sensory doubtfully.
Akon sighed. “No. Send the message.”
For twenty seconds after, there was silence. Then new words appeared on the display:
WE ARE GLAD TO SEE YOU CANNOT BE DONE
YOU SPEAK LIKE BABY CRUNCH CRUNCH
WITH BIG ANGELIC POWERS
WE WISH TO SUBSCRIBE TO YOUR NEWSLETTER
“All right,” Akon said, after a while. It seemed, on the whole, a positive response. “I expect a lot of people are eager to look at the alien corpus. But I also need volunteers to hunt for texts and holo files in our own Archive. Which don’t betray the engineering principles behind any technology we’ve had for less than, say,” Akon thought about the mirror shielding and what it implied, “a hundred years. Just showing that it can be done… we won’t try to avoid that, but don’t give away the science...”
A day later, the atmosphere at the Command Conference was considerably more tense.
Bewilderment. Horror. Fear. Numbness. Refusal. And in the distant background, slowly simmering, a dangerous edge of rising righteous fury.
“First of all,” Akon said. “First of all. Does anyone have any plausible hypothesis, any reasonable interpretation of what we know, under which the aliens do not eat their own children?”
“There is always the possibility of misunderstanding,” said the former Lady Psychologist, who was now, suddenly and abruptly, the lead Xenopsychologist of the ship, and therefore of humankind. “But unless the entire corpus they sent us is a fiction… no.”
The alien holos showed tall crystalline insectile creatures, all flat planes and intersecting angles and prismatic refractions, propelling themselves over a field of sharp rocks: the aliens moved as if on pogo sticks, bouncing off the ground using projecting limbs that sank into their bodies and then rebounded. There was a cold beauty to the aliens’ crystal bodies and their twisting rotating motions, like screensavers taking on sentient form.
And the aliens bounded over the sharp rocks toward tiny fleeing figures like delicate spherical snowflakes, and grabbed them with pincers, and put them in their mouths. It was a central theme in holo after holo.
The alien brain was much smaller and denser than a human’s. The alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate. They screamed as they vanished into the adult aliens’ maws.
Babies, then, had been a mistranslation: Preteens would have been more accurate.
Still, everyone was calling the aliens Babyeaters.
The children were sentient at the age they were consumed. The text portions of the corpus were very clear about that. It was part of the great, the noble, the most holy sacrifice. And the children were loved: this was part of the central truth of life, that parents could overcome their love and engage in the terrible winnowing. A parent might spawn a hundred children, and only one in a hundred could survive—for otherwise they would die later, of starvation...
When the Babyeaters had come into their power as a technological species, they could have chosen to modify themselves—to prevent all births but one.
But this they did not choose to do.
For that terrible winnowing was the central truth of life, after all.
The one now called Xenopsychologist had arrived in the Huygens system with the first colonization vessel. Since then she had spent over one hundred years practicing the profession of psychology, earning the rare title of Lady. (Most people got fed up and switched careers after no more than fifty, whatever their first intentions.) Now, after all that time, she was simply the Xenopsychologist, no longer a Lady of her profession. Being the first and only Xenopsychologist made no difference; the hundred-year rule for true expertise was not a rule that anyone could suspend. If she was the foremost Xenopsychologist of humankind, then also she was the least, the most foolish and the most ignorant. She was only an apprentice Xenopsychologist, no matter that there were no masters anywhere. In theory, her social status should have been too low to be seated at the Conference Table. In theory.
The Xenopsychologist was two hundred and fifty years old. She looked much older, now, as she spoke. “In terms of evolutionary psychology… I think I understand what happened. The ancestors of the Babyeaters were a species that gave birth to hundreds of offspring in a spawning season, like Terrestrial fish; what we call r-strategy reproduction. But the ancestral Babyeaters discovered… crystal-tending, a kind of agriculture… long before humans did. They were around as smart as chimpanzees, when they started farming. The adults federated into tribes so they could guard territories and tend crystal. They adapted to pen up their offspring, to keep them around in herds so they could feed them. But they couldn’t produce enough crystal for all the children.
“It’s a truism in evolutionary biology that group selection can’t work among non-relatives. The exception is if there are enforcement mechanisms, punishment for defectors—then there’s no individual advantage to cheating, because you get slapped down. That’s what happened with the Babyeaters. They didn’t restrain their individual reproduction because the more children they put in the tribal pen, the more children of theirs were likely to survive. But the total production of offspring from the tribal pen was greater, if the children were winnowed down, and the survivors got more individual resources and attention afterward. That was how their species began to shift toward a K-strategy, an individual survival strategy. That was the beginning of their culture.
“And anyone who tried to cheat, to hide away a child, or even go easier on their own children during the winnowing—well, the Babyeaters treated the merciful parents the same way that human tribes treat their traitors.
“They developed psychological adaptations for enforcing that, their first great group norm. And those psychological adaptations, those emotions, were reused over the course of their evolution, as the Babyeaters began to adapt to their more complex societies. Honor, friendship, the good of our tribe—the Babyeaters acquired many of the same moral adaptations as humans, but their brains reused the emotional circuitry of infanticide to do it.
“The Babyeater word for good means, literally, to eat children.”
The Xenopsychologist paused there, taking a sip of water. Pale faces looked back at her from around the table.
The Lady Sensory spoke up. “I don’t suppose… we could convince them they were wrong about that?”
The Ship’s Confessor was robed and hooded in silver, indicating that he was there formally as a guardian of sanity. His voice was gentle, though, as he spoke: “I don’t believe that’s how it works.”
“Even if you could persuade them, it might not be a good idea,” said the Xenopsychologist. “If you convinced the Babyeaters to see it our way—that they had committed a wrong of that magnitude—there isn’t anything in the universe that could stop them from hunting down and exterminating themselves. They don’t have a concept of forgiveness; their only notions of why someone might go easy on a transgressor are sparing an ally, using them as a puppet, or being too lazy or cowardly to carry out the vengeance. The word for wrong is the same symbol as mercy, you see.” The Xenopsychologist shook her head. “Punishment of non-punishers is very much a way of life, with them. A Manichaean, dualistic view of reality. They may have literally believed that we ate babies, at first, just because we didn’t open fire on them.”
Akon frowned. “Do you really think so? Wouldn’t that make them… well, a bit unimaginative?”
The Ship’s Master of Fandom was there; he spoke up. “I’ve been trying to read Babyeater literature,” he said. “It’s not easy, what with all the translation difficulties,” and he sent a frown at the Lord Programmer, who returned it. “In one sense, we’re lucky enough that the Babyeaters have a concept of fiction, let alone science fiction—”
“Lucky?” said the Lord Pilot. “You’ve got to have an imagination to make it to the stars. The sort of species that wouldn’t invent science fiction, probably wouldn’t even invent the wheel—”
“But,” interrupted the Master, “just as most of their science fiction deals with crystalline entities—the closest they come to postulating human anatomy, in any of the stories I’ve read, was a sort of giant sentient floppy sponge—so too, nearly all of the aliens their explorers meet, eat their own children. I doubt the authors spent much time questioning the assumption; they didn’t want anything so alien that their readers couldn’t empathize. The purpose of storytelling is to stimulate the moral instincts, which is why all stories are fundamentally about personal sacrifice and loss—that’s their theory of literature. Though you can find stories where the wise, benevolent elder aliens explain how the need to control tribal population is the great selective transition, and how no species can possibly evolve sentience and cooperation without eating babies, and even if they did, they would war among themselves and destroy themselves.”
“Hm,” said the Xenopsychologist. “The Babyeaters might not be too far wrong—stop staring at me like that, I don’t mean it that way. I’m just saying, the Babyeater civilization didn’t have all that many wars. In fact, they didn’t have any wars at all after they finished adopting the scientific method. It was the great watershed moment in their history—the notion of a reasonable mistake, that you didn’t have to kill all the adherents of a mistaken hypothesis. Not because you were forgiving them, but because they’d made the mistake by reasoning on insufficient data, rather than any inherent flaw. Up until then, all wars were wars of total extermination—but afterward, the theory was that if a large group of people could all do something wrong, it was probably a reasonable mistake. Their conceptualization of probability theory—of a formally correct way of manipulating uncertainty—was followed by the dawn of their world peace.”
“But then—” said the Lady Sensory.
“Of course,” added the Xenopsychologist, “anyone who departs from the group norm due to an actual inherent flaw still has to be destroyed. And not everyone agreed at first that the scientific method was moral—it does seem to have been highly counterintuitive to them—so their last war was the one where the science-users killed off all the nonscientists. After that, it was world peace.”
“Oh,” said the Lady Sensory softly.
“Yes,” the Xenopsychologist said, “after that, all the Babyeaters banded together as a single super-group that only needed to execute individual heretics. They now have a strong cultural taboo against wars between tribes.”
“Unfortunately,” said the Master of Fandom, “that taboo doesn’t let us off the hook. You can also find science fiction stories—though they’re much rarer—where the Babyeaters and the aliens don’t immediately join together into a greater society. Stories of horrible monsters who don’t eat their children. Monsters who multiply like bacteria, war among themselves like rats, hate all art and beauty, and destroy everything in their pathway. Monsters who have to be exterminated down to the last strand of their DNA—er, last nucleating crystal.”
Akon spoke, then. “I accept full responsibility,” said the Conference Chair, “for the decision to send the Babyeaters the texts and holos we did. But the fact remains that they have more than enough information about us to infer that we don’t eat our children. They may be able to guess how we would see them. And they haven’t sent anything to us, since we began transmitting to them.”
“So the question then is—now what?”
By far the most enjoyable writing of yours I’ve read. I’d pay to read the rest but since you’re giving it up for free I’ll make a donation to SIAI instead.
Just send the aliens a clip of Monty Python’s “Every Sperm is Sacred” and convince them we’re so advanced we kill thousands of children before they’re even conceived. Problem solved! ;)
Ah yes! A mere matter of efficiency.
Stellar, quite literally. Cannot wait for the next installments. You must eat your babies.
It’s good. Not baby-eatin’ good, but good enough ;).
Amazing.
I wonder if they discovered libertarianism, and if they are able to think in alternative ethical systems.
Similarly: how do they treat abortion, the freezing of embryos, and such?
Anyone see similarities to the Moties from ‘The Mote in God’s Eye’?
Martin
I was going to say that this (although very good) wasn’t quite Weird enough for your purposes; the principal value of the Baby-Eaters seems to be “individual sacrifice on behalf of the group”, which we’re all too familiar with. I can grok their situation well enough to empathize quickly with the Baby-Eaters. I’d have hoped for something even more foreign at first sight.
Then I checked out the story title again.
Eagerly awaiting the next installments!
Martin, the Baby-eaters don’t remind me of the Moties. I think it’s Eli’s use of the Alderson Drive that reminded you of The Mote in God’s Eye.
Akon’s first line is objectively the right thing to say in that situation. One might append “holy.”
Akon’s first line at the Command Conference is also pure gold.
I expect it would make writing the series vastly more difficult, but I so much wanted to see a group of choose-your-own-adventure options at the bottom, for what the crew should do.
Why didn’t the Babyeaters develop the practice of separate pens for each family, with tribes redistributing common resources (e.g. erratic, potentially rotting, meat from hunts) among parents, and parents feeding children out of their share? Maybe their brains lacked the capacity to recognize so many distinct off-spring, but why not spray them with a pheromone? Producing vast numbers of offspring with big expensive full-size brains (which is itself implausible) makes the large numb to be destroyed immediately would impose huge metabolic costs relative to privatizing the commons and distinguishing between offspring, then adjusting clutch-size based on parental resources.
Nope :-). I thought about the vast reproduction rate both species share. And since the German title of the book and name for the Moties is different, I had to look it up.
Martin
“makes the large numb” Is obviously a result of an incomplete edit.
Eliezer: cool story idea. Wait, how did they manage to avoid developing a notion of forgiveness in some form? I mean, isn’t that more or less required to stabilize out-of-sync tit-for-tat oscillations? Or am I completely wrong on this?
Tit-for-tat oscillations / cycles of vengeance happen with humans because we’re naturally averse to killing each other, even those we don’t like. So we tend to leave survivors.
There are two ways to solve the issue: Forgiveness to reset the scales, or wholesale extermination of the other group and all their allies. The baby-eaters settled on the latter.
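A minimal sketch of the dynamic in question: an ordinary noisy iterated Prisoner's Dilemma, with the payoff matrix abstracted down to a cooperation count. The 5% noise rate and 30% forgiveness rate are arbitrary illustrative choices, not anything from the story or thread.

```python
import random

random.seed(0)

def play(strat_a, strat_b, rounds=200, noise=0.05):
    """Iterated PD with noisy moves; returns the fraction of rounds
    in which both players cooperated."""
    a_last, b_last = "C", "C"
    both_cooperated = 0
    for _ in range(rounds):
        a = strat_a(b_last)
        b = strat_b(a_last)
        # Noise: an intended move occasionally comes out as defection.
        if random.random() < noise:
            a = "D"
        if random.random() < noise:
            b = "D"
        both_cooperated += (a == "C" and b == "C")
        a_last, b_last = a, b
    return both_cooperated / rounds

def tit_for_tat(opponent_last):
    # Copy whatever the opponent did last round.
    return opponent_last

def forgiving_tft(opponent_last):
    # Tit-for-tat, except a defection is forgiven 30% of the time.
    if opponent_last == "D" and random.random() < 0.3:
        return "C"
    return opponent_last

print("strict vs strict:      ", play(tit_for_tat, tit_for_tat))
print("forgiving vs forgiving:", play(forgiving_tft, forgiving_tft))
```

The strict pair spends long stretches trading retaliations after a single noisy move; the forgiving pair recovers almost immediately. Wholesale extermination "solves" the same problem by making sure there is no next round.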
I wonder why the babies don’t eat each other. There must be a huge selective pressure to winnow down your fellows to the point where you don’t need to be winnowed. This would in turn select for being small-brained, large, and quick-growing, at the least. There might also be selective pressure to be partially distrusting of your fellows (assuming there was some cooperation), which might carry over into adulthood.
I also agree with the points Carl raised. It doesn’t seem very evolutionarily plausible.
Good stuff, Eliezer. You have no idea how happy I am that this is part 1 of 8. Eight!!
Patrick, the way I understand the aliens’ psychology, it’s not that they eat babies because their terminal value is “the group comes before the individual”, it’s that their terminal value is “it’s good to eat babies”. That this was good for their group(s) in the early history of their species is the explanation for why they have this terminal value, but it doesn’t factor in their moral reasoning.
Is this a different story from the one that was supposed to make us go insane?
If the aliens’ wetware (er, crystalware) is so efficient that their children are already sentient when they are still tiny relative to adults, why don’t the adults have bigger brains and be much more intelligent than humans? Given that they also place high values on science and rationality, had invented agriculture long before humans did, and haven’t fought any destructive wars recently, it makes no sense that they have a lower level of technology than humans at this point.
Other than that, I think the story is not implausible. The basic lesson here is the same as in Robin’s upload scenarios: when sentience is really cheap, no one will be valued (much) just for being sentient. If we want people to be valued just for being sentient, either the wetware/crystalware/hardware can’t be too efficient, or we need to impose some kind of artificial scarcity on sentience.
Kevin, yes, a different story.
Why doesn’t modern society securitize hard assets into money of zero maturity, instead of using a purely abstract debt-based currency to denominate debts? Because it would be slightly more complicated, that’s why.
Evolution doesn’t do a lot of stuff that you think is a good idea. Even somewhat-intelligent designers don’t do a lot of things that would be good ideas.
It was specified that Babyeater brains are small by nature, so that children already have small cheap full-size brains.
Why don’t humans have bigger brains and be much more intelligent than we are? Because (a) our brains don’t scale that easily—if they did, we’d evolve to get around the hip size thing somehow, the selection pressures would be enormous. And (b) because as soon as we hit the minimum possible level to get by with, we erupted out into a technological civilization.
Assume the Babyeater crystal brains are small, but architecturally subject to the limitation that every element be in fast communication with every other element (in the human brain, a neuron is within one clock tick of every other neuron at myelinated axon speeds). Or that it biologically can’t scale without internal interference / noise crushing it.
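For anyone who wants to check that parenthetical: the arithmetic, with textbook ballpark numbers (the axon speed and firing rate are standard figures; the rest is my own back-of-envelope).

```python
# Fast myelinated axons conduct at roughly 100 m/s; a human brain is
# on the order of 0.15 m across; peak neuron firing rates are on the
# order of a couple hundred hertz.
axon_speed_m_s = 100.0
brain_span_m = 0.15
traversal_ms = brain_span_m / axon_speed_m_s * 1000
print("signal traversal:", traversal_ms, "ms")   # ~1.5 ms

tick_ms = 1000 / 200   # one inter-spike interval at ~200 Hz
print("one 'clock tick':", tick_ms, "ms")        # ~5 ms
# Traversal time is well under one firing period, hence "within one
# clock tick of every other neuron."
```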
See also: MST3K Mantra.
How is it that these aliens’ anatomy is so radically different from humans’, yet they have a word for “kick”?
WE WISH TO SUBSCRIBE TO YOUR NEWSLETTER
Laughed out loud at this point.
Great stuff, I was disappointed when I reached the end and realized I have to wait for the next day.
My only complaint is that this is apparently going to be only eight parts and not a full-length novel, but maybe you can be forgiven for that.
They didn’t stabilize tit-for-tat, they completely eliminated the other side in any dispute. I guess they could quickly repopulate even after the most devastating wars by just not eating so many babies. This kept happening until the scientist Babyeaters killed off all non-scientist tribes. The scientists aren’t forgiving, they just understand that rational beings can make incorrect decisions when given incomplete information. Like the story says, “anyone who departs from the group norm due to an actual inherent flaw still has to be destroyed.”
Does the Babyeater morality emphasize the consumption and digestion of babies or is it simply the winnowing that they value? If it’s the latter our biologies are probably different enough that one could fudge the translation of some texts about contraception and abortion to make it look like we winnow. It just turns out that we destroy our young even earlier and do so by prohibiting them from combining with another sort of baby, which they need to survive. Sometimes when they do combine we destroy them anyway.
Not to be crude, but maybe the aliens would enjoy some of our oral sex pornography.
Just as human individuals change their behavior and outlook when they are associating with different groups- you’re an ass around your college friends but a gentleman around the ladies- so it makes sense for the species to act differently around aliens with different cultural and moral norms. In this case we should exaggerate the role contraception and abortion plays in human civilization and fudge the language so it looks like we’re killing babies rather than just sperm and zygotes. It is precisely because our biology is so different that such a mistranslation won’t be caught. We might as well call our sperm “baby”- all the translation so far has been inexact enough to permit this, surely “baby” doesn’t have to entail a fully developed brain.
Moreover, the aliens must still have some analogy for our love and to-our-deaths willingness to protect our young since any deaths AFTER the winnowing would likely be viewed as devastatingly unfortunate. Does the winnowing even coincide with the surviving babies immediately undergoing some drastic biological change? The only real sense in which Babyeater morality differs from ours is the time during the development of the individual when the individual is declared by society to be morally valuable.
Re: “MST3K Mantra”
Illustrative fiction is a tricky business, if this is to be part of your message to the world it should be as coherent as possible, so you aren’t accidentally lying to make a better story.
If it is just a bit of fun, I’ll relax.
“Why doesn’t modern society securitize hard assets into money of zero maturity, instead of using a purely abstract debt-based currency to denominate debts? Because it would be slightly more complicated, that’s why.” Eliezer,
I think you’re mistaken about the relative complexity of parents selectively provisioning their own offspring, versus the baroque and complex adaptations for social intelligence and coordination required for this system to be stable.
“And anyone who tried to cheat, to hide away a child, or even go easier on their own children during the winnowing—well, the Babyeaters treated the merciful parents the same way that human tribes treat their traitors.”
This means that the Babyeaters were capable of recognizing and preferring their own children after birth. Selectively provisioning your own offspring is an extremely common adaptation, as is allocating resources preferentially (e.g. starving runts) and most of the necessary complexity already seems to exist among the Babyeaters. Separate pens/nests are simpler than evolving a complex set of adaptations to manage and enforce an even-handed winnowing.
Consider that with pooled offspring in a single pen, we now have two commons problems, aside from even-handed winnowing, Babyeaters have strong incentives to shirk in their agricultural labor. For the Babyeaters to develop a set of immensely powerful adaptations for managing such conflicts of interest (exceedingly strong by the standards of Earth’s biodiversity) is going to take evolution a long time, during which selective provisioning/penning/devouring would likely take hold in some groups and then sweep the population.
How can I put it? If I were to describe anything that couldn’t happen from an evolutionary perspective, that would be cheating. At the same time, I needed baby-eating aliens for my story, so I wrote the bottom line first and then put the ‘explanation’ on the lines above. Though it’s worth noting that cannibalism was the result of group selection in a laboratory environment; see The Tragedy of Group Selectionism. And it’s also worth noting that the point I’m using the Babyeaters to make, is one that I happen to actually believe to be true.
There’s a point up to which you can question the story and get back plausible-sounding answers that fit with everything that Eliezer Yudkowsky happens to know about evolutionary biology. I actually do think about the sort of questions that get asked here, in the unwritten backdrop of the story. The Babyeaters are, so far as I know, allowed; if I ran into them, I wouldn’t point to any particular facet of evolutionary biology and say, “My gosh, this has been falsified!” But there’s also a point beyond which the true causal origin of your observations is that Eliezer Yudkowsky wanted baby-eating aliens and then rationalized a plausible-sounding evolutionary history. This is the point at which you invoke the MST3K Mantra.
Re: “MST3K Mantra”
Very improbable evolved beings don’t make for good warnings about the precious moral miracle of human values. It would be better to use an example of a plausible ‘near-miss,’ e.g. by extrapolating from something common in Earth species.
Carl, the essential premise of the Babyeaters is “among chimp-level creatures who’ve previously developed strong social recognition and reputation tracking, group selection can actually take hold via punishment of nonpunishers and extermination warfare between tribes”.
Then you ask what kind of aliens you might run into as a result.
If you told me that this actually happened, I would not see it as contradicting any particular evolutionary biology that I know of. No, group selection doesn’t usually happen in Nature, but you don’t usually have strong reputation tracking and individual recognition and punishment of nonpunishers and extermination warfare either. Babyeaters can be stable against the kind of invasions you describe, I think, if they already have a punishment-of-nonpunishers system going, plus sufficiently frequent extermination warfare against other tribes. Note that the Babyeaters don’t have bipolar sexuality, so defeated tribes really will be wiped out, rather than just the men being wiped out.
Even relatively strong social recognition and coordination systems, as in primates, leave plenty of opportunities to shirk and betray. Behaviors of selective provisioning and parental investment (the cheating that already sometimes occurs and is punished among Babyeaters) serve both group and individual fitness, reducing the strength of group selection needed to maintain the altruistic punishment of shirkers. It would thus be easier for it to evolve, and groups of selective-provisioners would on average have a competitive advantage (since the group-beneficial slow population growth would degrade more slowly) against groups with the dispositions in the story.
Now, if the social coordination mechanisms got absurdly strong, much stronger than in any human society ever, this would no longer be the case. Likewise, if the story’s babyeaters became universal, selective-provisioners would not be able to arise among them. So there is no contradiction, but there is a probabilistic surprise.
Great bouncing Bayesian Babyeater babies Batman!
Eliezer, I sincerely hope we don’t chop up your story before it’s even completely posted. A certain amount of suspension of disbelief is needed anyway.
And I sense a new inside joke coming up.
I can say that’s: double-plus-baby-eating?
Martin
PS: there is no evidence for any kind of FTL actually working, scnc, argh, stop beating me with your Narn bat squats aaaaaaa.-
Carl, why doesn’t your logic rule out ant colonies of only 3⁄4 related individuals?
Given ant chromosomal structure, an ant is more related to her sisters than to her offspring, and a single female can convert food/resources to offspring roughly as well as two females each with half the resources.
I think I remember reading once that ant colonies do, in fact, produce worker ants that “cheat” and attempt to reproduce, while other ants enforce the “cooperative” status quo.
I.e. sister ants with their parents alive don’t need complex social recognition and punishment mechanisms to deal with conflicting individual and group interests, since their best outcomes coincide. That coincidence of interests can be almost as complete as for a group of clones.
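For readers who want the 3⁄4 figure worked out, here is the standard haplodiploidy arithmetic (textbook genetics, not anything specific to this thread):

```python
# Haplodiploidy: males are haploid (their whole genome comes from the
# mother); females are diploid (one genome copy from each parent).

# Relatedness of full sisters (same mother, same haploid father):
paternal_half = 0.5 * 1.0   # the paternal half is identical in all sisters
maternal_half = 0.5 * 0.5   # the maternal half matches with probability 1/2
r_sister = paternal_half + maternal_half
print("sister to sister:", r_sister)   # 0.75

# Relatedness of a mother to her own offspring:
r_offspring = 0.5                      # she passes on half her genome
print("mother to child: ", r_offspring)

# Since 0.75 > 0.5, a worker spreads her genes better by helping raise
# sisters than by producing daughters of her own, which is why her
# interests and the colony's largely coincide while the queen lives.
```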
Eliezer: with regards to the MST3K mantra, all I have to say is “But this way is more fun!” :)
As Doug observes, worker ants may indeed cheat and try to reproduce male offspring, to whom they are apparently more closely “related” for such purposes than they are related to the queen. Googling around on ant genetic conflicts also produced this paper on how even a group of clones apparently needed to police reproduction. That part I don’t quite get, but the summary of standard relationships says:
If the tight but not perfect relatedness of workers in an ant colony can support cooperation and reproductive policing, I don’t think I’m being that crazy for hypothesizing that chimp-level Babyeaters can do the same using punishment of nonpunishers. Of course I have not actually observed them. I am using the rationalizing part of my brain here. And I’m not sure I should ever dare attach any real credence to that. I am only saying that I should be able to get away with it as fiction. Real life? I’d have to rethink that from scratch.
Within the bounds of fictional rationalization, I obviously assume that there are economies of scale in Babyeater crystal-tending and child-tending to match the commons problems, as otherwise the group size would tend to shrink and eliminate conflicts of interests that way. Hence the lack of individually tended pens. Maybe then the Crystal Dragons come by and eat all the babies, or something.
Once you’re in that equilibrium, though, spawning fewer offspring is an individual disadvantage even though it’s a group advantage, and any move in the direction of selectively provisioning your own offspring will be treated as defection and punished. I don’t understand why you think that provisioning your own offspring is a group advantage.
Actually, babyeating in the common pen isn’t even internally stable. Let’s take the assumptions of the situation as given:
1. There is intertribal extermination warfare. Larger tribes tend to win and grow. Even division of food among excessive numbers of offspring results in fewer surviving adults, and thus slower tribal population growth and more likely extermination.
2. All offspring are placed in a common pen.
3. Food placed in the common pen is automatically equally divided among those in the pen and adults cannot selectively provision.
4. Group selection has resulted in collective enforced babyeating to reduce offspring numbers (without regard for parentage of the offspring) in the common pen to the level that will maximize the number of surviving adults given the availability of food resources.
5. Individuals vary genetically in ways that affect their relative investment in producing offspring and in agricultural production to place into the common pen.
Under these circumstances, there will be intense selective pressure for individuals that put all their energy (after survival) into producing more offspring (which directly increase their reproductive fitness) rather than agricultural production (which is divided between their offspring and the offspring of the rest of the tribe). As more and more offspring are produced (in metabolically wasteful fashion) and less and less food is available, the tribe is on the path to extinction.
Groups that survive will be those in which social intelligence is used to punish (by death, devouring of offspring before they are placed in the pen, etc) those making low food contributions relative to offspring production. Remembering offspring production would be cognitively demanding, and only one side of the tradeoff needs to be measured, so we can guess that punishment of those making small food contributions would develop. This would force a homogenous level of reproductive effort, and group selection would push this level to the optimal tradeoff between agriculture and offspring production for group population growth, with just enough offspring to make optimal use of the food supply. This group is internally stable, and has much higher population growth than one wracked by commons problems, but it will also have no babyeating in the common pen.
“I don’t understand why you think that provisioning your own offspring is a group advantage.” If parents could selectively provision their own offspring in the common pen, then the group would not be wracked by intense commons-problem selective pressures driving provisioning towards zero and reproduction towards the maximum (thus resulting in extermination by more numerous tribes).
Carl, it seems to me that a lot of what we’re discussing here has analogues in human food-sharing. Babyeaters who contribute more food to the pen might have higher tribal status. Punishing low contributors to the pen doesn’t force a homogenous level of reproductive effort, either. Suppose that all Babyeaters make equal contributions to the food pen; their leftover (variance in) food resources could be used to grow their own bodies, bribe desirable mates (those of good genetic material as witnessed by their large food contributions), or create larger numbers of offspring.
Under the circumstances, I also have to ask if you’re personally alarmed at the prospect of running into actual Babyeaters some day, or the universe actually looking like that; or if you’re just calmly picking nits.
Given that it’s Carl, and that the nits sound pretty plausible, I’m guessing the latter. Personally though, given the LARGE number of fantasy assumptions in this story, most importantly FTL and synchronized ascent to sentience so perfectly timed that neither humans nor baby-eaters expanded to fill one-another’s space first even given FTL, I think we have to assume the MST3K mantra is in fairly full effect.
Eliezer, you’re right that the coordination mechanisms would be imperfect, so it’s an overstatement to say NO babyeating would occur; I meant that you wouldn’t have the ‘winnowing’ sort of babyeating with consistent orders-of-magnitude disproportions between pre- and post-babyeating offspring populations.
Nits. I’d say there are probably lots of at-least-Babyeater-level-abhorrent evolutionary paths (not that Babyeaters are that bad, I’d rather have a Babyeater world than paperclips) making up a big share of evolved civilizations (it looks like the great majority, but it’s very tough to be confident). Any lack of calm is irritation at the use of a dubious example of abhorrent evolved morality when you could have used one that was both more probable AND more abhorrent.
Michael,
I guess it depends on whether the fantastic element can adequately stand in for whatever it is supposed to represent. Magic starship physics can be used to create a Prisoner’s Dilemma without trouble, since PDs are well understood, and it’s fairly obvious that we will face them in the future. No-Singularity and FTL, so that we can have human characters, are also understandable as translation tools. If Babyeaters are a stand-in for ‘abhorrent alien evolved morality’ to an audience that already grasps the topic, then the details of their evolution don’t matter. If, however, they are supposed to make the possibility of a nasty evolved morality come alive to cosmopolitan optimistic science fiction fans or transhumanists, then they should be relatively probable.
Eliezer,
On the other hand, since you’ve already written the story, using one of your favorite examples of the nonanthropomorphic nature of evolution as inspiration for the Babyeaters, and have no authorial line of retreat available at this time, we can probably leave this horse for dead.
Michael, don’t forget the “machine translation” algorithm.
I fear that you have not managed to convince me of this. If the general idiom of children in pens is stable, then the adults contributing lots and lots of children (as many as possible) is also evolutionarily stable.
You say this even after reading Part 2, about the Babyeater children—not infants, preteens, “Baby” is said to be a mistranslation—slowly dying in their parents’ stomachs?
I’d take the paperclips, so long as it wasn’t running any sentient simulations.
(1) Name one (both more probable and more abhorrent).
(2) A basic technique in literature is that while a battle between Good and Evil can sometimes be made riveting, what can be even more involving is a battle between Good and Good—then the audience has to choose sides, and the “correct” side should not be made so obvious. If the Babyeaters were orcs the story would be simple: fight them, wipe them out! Because the Babyeaters are not orcs, the question of what to do with them is much more difficult. This is the true application of the principle that stories are about conflict.
A vast region of paperclips could conceivably after billions of years evolve into something interesting, so let us stipulate that the paperclipper wants the vast region to remain paperclips, so it remains to watch over its paperclips. Better yet, replace the paperclipper with a superintelligence that wants to pile all the matter it can reach into supermassive black holes; supermassive black holes with no ordinary matter nearby cannot evolve or be turned into anything interesting unless our model of fundamental reality is fundamentally wrong.
My question to Eliezer is, Would you take the supermassive black holes over the Babyeaters so long as the AI making the supermassive black holes is not running sentient simulations?
“I fear that you have not managed to convince me of this. If the general idiom of children in pens is stable, then the adults contributing lots and lots of children (as many as possible) is also evolutionarily stable.”
I have a tribe of Babyeaters that each put 90% of their effort into reproducing, and 10% into contributing to the common food supply of the pen. This winds up producing 5000 offspring, 30 of which are not eaten, and are just adequately fed by the 10% of total resources allocated to the food supply. Now consider an allele, X, that disposes carriers to engage in altruistic punishment, and punishment of non-punishers, in support of a norm that adults spend most of their effort on contributing to the food supply (redirecting energy previously spent on offspring to be devoured with thermodynamic losses to the production and maintenance of offspring that will grow into adults). Every individual in the tribe will tend to have more surviving offspring, and the group will tend to be victorious in intertribal extermination warfare. Group selection will thus favor the spread of X, probably quite a bit more strongly than it would favor the spread of an allele for support of the babyeating norm (X achieves the benefits of babyeating while reallocating metabolic waste on devoured babies). The more closely X aligns offspring production and food contribution, the more it will be spread by group selection and the more it will reduce babyeating.
In a world with many groups, all engaging in winnowing-level babyeating, allele X can enter, spread, and vastly reduce babyeating. What is unconvincing about that argument?
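One way to see the shape of this argument is a toy model plugging in the numbers above. The tribe size and the two calibration constants are my own illustrative assumptions; only the comparison across effort levels matters:

```python
# Each adult splits one unit of effort between spawning (cheap
# offspring) and provisioning the common pen. Survivors are capped by
# whichever is scarcer: offspring produced, or food to raise them.
ADULTS = 100       # assumed tribe size
FECUNDITY = 55.6   # offspring per adult per unit of spawning effort
                   # (calibrated: 90% spawning effort -> ~5000 offspring)
FEED_RATE = 3.0    # children raised per unit of food effort
                   # (calibrated: 10% food effort -> ~30 survivors)

def surviving_adults(food_effort):
    offspring = ADULTS * (1 - food_effort) * FECUNDITY
    capacity = ADULTS * food_effort * FEED_RATE
    return min(offspring, capacity)

for e in (0.1, 0.3, 0.5, 0.7, 0.9, 0.95):
    print(f"food effort {e:.2f}: {surviving_adults(e):6.0f} survive")
```

Output climbs steeply as effort shifts from spawning toward provisioning, peaking near the crossover point (about 95% food effort with these constants), where almost no surplus offspring remain to be winnowed. That is the regime toward which allele X pushes the tribe.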
“Suppose that all Babyeaters make equal contributions to the food pen; their leftover (variance in) food resources could be used to grow their own bodies, bribe desirable mates (those of good genetic material as witnessed by their large food contributions), or create larger numbers of offspring.”
Different alleles might drive altruistic punishment (including of non-punishers) in support of many different levels of demand on tribe members. Group selection would support alleles supporting norms such that the mean contribution to the pen food supply was well-matched with the mean number of offspring contributed to the pen. Variance doesn’t invalidate that conclusion.
Richard, I’d take the black holes of course.
Carl, one of the root assumptions here is that infants are much cheaper to produce than preteens are to feed. The Babyeater children are eliminated at just the stage before they begin quickly growing and consuming lots of food (but not, alas, before the stage before they become sentient). If most of the total cost of growing a child lies in feeding it past the rapid growth stage, rather than birthing 50 infants and feeding them up to that point, then tribes that birth fewer infants will not have much of an advantage. It’s even possible that the reduced selection pressure (weeding out poor immune systems, dumb kids, etcetera) would become significant at this point in terms of both individual and group advantage.
Furthermore, to the question “Why didn’t evolution make improvement X?”, “It just didn’t” is often a pretty good response. The mutation you postulate does involve more than one change—even if the Babyeaters seem well-predisposed to it in terms of preadaptation, it might just not happen. You’re also postulating that a whole group gets this mutation in one shot—but even if you say “genetic drift”, it seems pretty disadvantageous to a single invader. They’ll just suddenly classify a bunch of others as evil, and so be cast out themselves.
“If most of the total cost of growing a child lies in feeding it past the rapid growth stage, rather than birthing 50 infants and feeding them up to that point,”
From their visibility in the transmitted images it seems the disproportion isn’t absurdly great. Also, if the scaling issues with their brains were so extreme, why didn’t they become dwarfs? One big tool-using crystal being versus 500 tool-using dwarfs of equal intelligence seems like bad news for the giant.
“You’re also postulating that a whole group gets this mutation in one shot—but even if you say “genetic drift”, it seems pretty disadvantageous to a single invader.”
Altruistic punishers don’t need to be common; one or two can coordinate a group (the altruistic punisher recruits with the credible threat of punishment, and then imposes the norm on the whole group), and an allele for increased provisioning wouldn’t directly conflict with babyeating instincts.
And again, babyeating norms need to invade in a similar fashion, and without norms other than baby-eating, the communal feeding pen selects for zero provisioning effort.
As I expected. Much of what you (Eliezer) have written entails it, but it still gives me a shock because piling as much ordinary matter as possible into supermassive black holes is the most evil end I have been able to imagine. In contrast, suffering is merely subjective experience and consequently, according to my way of assigning value, unimportant.
Transforming ordinary matter into mass inside a black hole is a very potent means to create free energy, and I can imagine applying that free energy to ends that justify the means. But to put ordinary matter and radiation into black holes massive enough that the mass will never come back out as Hawking radiation as an end in itself—horror!
Hollerith, you are now officially as weird as a Yudkowskian alien. If I ever write this species I’ll name it after you.
Carl, I realize that I am postulating a sort of complicated and difficult adaptation, and then supposing that a comparatively simpler adaptation did not follow it.
And if I were writing my own story that way, and then criticizing you for writing your own story the other way, that would be unfair.
But Carl, this sort of thing does happen in real-world biology; there are adaptations that seem complicated to us, which fail to improve in ways that seem like they ought to have been relatively simpler even for natural selection. It happens. I am not yet ashamed of using this as a fictional premise.
And I’ll also repeat the question of what you think would be a more probable, more evil alien—bearing in mind that the Babyeaters aren’t supposed to be completely evil, but anyway it’s an interesting question.
Hollerith, you are now officially as weird as a Yudkowskian alien. If I ever write this species I’ll name it after you.
Eliezer, to which of the following possibilities would you accord significant probability mass? (1) Richard Hollerith would change his stated preferences if he knew more and thought faster, for all reasonable meanings of “knew more and thought faster”; (2) There’s a reasonable notion of extrapolation under which all normal humans would agree with a goal in the vicinity of Richard Hollerith’s stated goal; (3) There exist relatively normal (non-terribly-mutated) current humans A and B, and reasonable notions of extrapolation X and Y, such that “A’s preferences under extrapolation-notion X” and “B’s preferences under extrapolation-notion Y” differ as radically as your preferences and Richard Hollerith’s appear to diverge.
Anna, talking about all reasonable meanings of “knew more and thought faster” is a very strong condition.
I would guess… call it 95% probability that a substantial fraction of reasonable construals of “knew more, thought faster” would deconvert the extrapolated Hollerith, and maybe 80% probability that most reasonable construals would so deconvert him. (2) gets negligible probability mass (if Hollerith got to a consistent place, he got there by an unusual sequence of adopted propositional moral beliefs with many degrees of freedom), and (3) gets the remainder by subtraction.
I would greatly prefer that there be Babyeaters, or even to be a Babyeater myself, than the black hole scenario, or a paperclipper scenario. This strongly suggests that human morality is not as unified as Eliezer believes it is… like I’ve said before, he will be horrified by the results of CEV.
Or the other possibility is just that I’m not human.
Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering on the other hand is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone and the future is just as bright as it would have been if the cold had never been.
And pulling numbers (80%, 95%) out of the air on this question is absurd.
Unknown, how certain are you that you would retain that preference if you “knew more, thought faster”? How certain are you that Eliezer would retain the opposite preference and that we are looking at real divergence? I have little faith in my initial impressions concerning Babyeaters vs. black holes; it’s hard for me to understand the Babyeater suffering, or the richness of their lives vs. that of black holes, as more than a statistic.
Eliezer, regarding (2), it seems plausible to me (I’d assign perhaps 10% probability mass) that if there is a well-formed goal with the non-arbitrariness property that both Hollerith and Roko seem partly to be after, there is a reasonable notion of extrapolation (though probably a minority of such notions) under which 95% of humans would converge to that goal. Yes, Hollerith got there by a low-probability path; but the non-arbitrariness he is (sort of) aiming for, if realizable, suggests his aim could be gotten to by other paths as well. And there are variables in one’s choice of “reasonable” notions of extrapolation that could be chosen to make non-arbitrariness more plausible. For example, one could give more weight to less arbitrary preferences (e.g., to whatever human tendencies lead us to appreciate Go or the integers or other parts of mathematics), or to types of value-shifts that make our values less arbitrary (e.g., to preferences for values with deep coherence, or to preferences similar to a revulsion for lost purposes), or one could include farther-back physical processes (e.g., biological evolution, gamma rays) as part of the “person” one is extrapolating.
[I realize the above claim differs from my original (2).] Do you disagree?
Richard, I don’t see why pulling numbers out of the air is absurd. We’re all taking action in the face of uncertainty. If we put numbers on our uncertainty, we give others more opportunity to point out problems in our models so we can learn (e.g., it’s easier to notice if we’re assigning too-high probabilities to conjunctions, or stating probabilities for mutually exclusive alternatives that sum to more than 1).
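As an illustration of the kind of check that publishing numbers makes possible, here is a minimal sketch; the credences and outcome labels are invented for the example and are nobody's actual numbers.

```python
# Two consistency checks that only become possible once credences are
# stated numerically. All numbers below are invented for illustration.

def sums_past_one(credences):
    """Mutually exclusive, exhaustive alternatives must not sum past 1."""
    return sum(credences.values()) > 1.0

def conjunction_fallacy(p_a, p_b, p_a_and_b):
    """P(A and B) can never exceed P(A) or P(B)."""
    return p_a_and_b > min(p_a, p_b)

stated = {"outcome 1": 0.7, "outcome 2": 0.3, "outcome 3": 0.2}
print(sums_past_one(stated))                # True: 1.2 > 1, something is off
print(conjunction_fallacy(0.4, 0.5, 0.45))  # True: an incoherent assignment
```

Vague verbal confidence ("quite likely", "almost surely") hides exactly these errors, which is the argument for naming numbers even when they are rough.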
“I would greatly prefer that there be Babyeaters, or even to be a Babyeater myself, than the black hole scenario, or a paperclipper scenario.”
Seems to me it depends on the parameter values.
Can a preference against arbitrariness ever be stable? Non-arbitrariness seems like a pretty arbitrary thing to care about.
Instead of describing my normative reasoning as guided by the criterion of non-arbitrariness, I prefer to describe it as guided by the criterion of minimizing or pessimizing algorithmic complexity. And that is a reply to steven’s question right above: there is nothing unstable or logically inconsistent about my criterion for the same reason that there is nothing unstable about Occam’s Razor.
Roko BTW had a conversion experience and now praises CEV and the Fun Theory sequence.
Anna, it takes very little effort to rattle off a numerical probability—and then most readers take away an impression (usually false) of precision of thought.
At the start of Causality, Judea Pearl explains why humans (should and usually do) use “causal” concepts rather than “statistical” ones. Although I do not recall whether he comes right out and says it, I definitely took away from Pearl the heuristic that stating your probability about some question is basically useless unless you also state the calculation that led to the number. I do recall that stating a bare number is what Pearl defines as a statistical statement rather than a causal statement. What you should usually do instead of stating a probability estimate is to share with your readers the parts of your causal graph that most directly impinge on the question under discussion.
So, unless Eliezer goes on to list one or more factors that he believes would cause a human to convert to or away from my system of valuing things (namely, goal system zero or GSZ), or one or more factors that he believes would tend to prevent other factors from causing such a conversion, I am going to go on believing that Eliezer has probably not reflected enough on the question for his numbers to be worth anything and that he is just blowing me off.
In summary, I tend to think that most uses of numerical probabilities on these pages have been useless. On this particular question I am particularly sceptical because Eliezer has exhibited signs (which I am prepared to describe if asked) that he has not reflected enough on goal system zero to understand it well enough to make any numerical probability estimate about it.
I am busy with something urgent today, so I might take 24 h to reply to replies to this.
Eliezer, if I understand you correctly, you would prefer a universe tiled with paperclips to one containing both a human civilization and a babyeating one. Let us say the babyeating captain shares your preference, and you and he have common knowledge of both these preferences.
Would you now press a button exterminating humanity?
I’ve not read this all the way through yet, but I want to add that space travel would seem a great deal more appealing were there Mistress of Fandom positions available.
I should think it obvious that if there are Masters of Fandom, there are Mistresses of Fandom.
Though Google turns up only 4 hits for “Secret Mistress of Fandom”, which may imply that “Secret Master of Fandom” is considered a gender-free term.
The anthropological theory of René Girard suggests that our culture and religion (or rather, religion and then culture) have roots in organizing human groups for a proper lynching of a chosen individual. This shocking claim is not that far from the fiction you created here—I wonder if it was the inspiration (but then it would be in conflict with your usual anti-Christianity stance).
You lost me
Greg Gurevich
Eliezer wasn’t the first to think of this sort of thing:
http://en.wikipedia.org/wiki/A_Modest_Proposal
The ship employs a Master of Fandom? (Not a Secret Master, obviously; they’re too hard to recruit.)
I once had a chat with Dan Alderson (1941–89) about his “tramlines” (as he called them). They follow gradients of the “fifth force” field; traversable tramlines pass through saddle points (because potential energy must be conserved). The fifth force is generated by stars in proportion to some power of their luminosity. So, to a rough approximation and with some simplifying assumptions, a traversable tramline exists between two stars if there is no point on the line between them at which some third star appears brighter. Thus they join neighboring stars, not random ones (like the wormholes in the Barrayar universe).
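Here is a minimal sketch of that criterion in Python, under my own reading of it: I assume the fifth-force analogue of apparent brightness scales as luminosity^k / distance², with the exponent k left free (“some power of their luminosity”), and I take “appears brighter” to mean brighter than both endpoint stars. The function names, the sampling approach, and the test stars are all invented for illustration.

```python
import math

def apparent_strength(luminosity, distance, k=1.0):
    """Fifth-force analogue of apparent brightness: luminosity**k / d**2.
    The exponent k is a free parameter (assumed form)."""
    return luminosity ** k / distance ** 2

def traversable(a, b, others, k=1.0, samples=200):
    """Rough check: a tramline between stars a and b exists if no third
    star outshines both endpoints at any sampled point on the segment
    between them. Stars are (x, y, z, luminosity) tuples."""
    ax, ay, az, alum = a
    bx, by, bz, blum = b
    for i in range(1, samples):  # skip the endpoints (distance zero)
        t = i / samples
        p = (ax + t * (bx - ax), ay + t * (by - ay), az + t * (bz - az))
        da = math.dist(p, (ax, ay, az))
        db = math.dist(p, (bx, by, bz))
        endpoint_max = max(apparent_strength(alum, da, k),
                           apparent_strength(blum, db, k))
        for (cx, cy, cz, clum) in others:
            if apparent_strength(clum, math.dist(p, (cx, cy, cz)), k) > endpoint_max:
                return False  # a third star dominates a point on the line
    return True

# A bright third star sitting near the segment cuts the tramline:
sol = (0.0, 0.0, 0.0, 1.0)
alpha = (10.0, 0.0, 0.0, 1.0)
print(traversable(sol, alpha, others=[]))                      # True
print(traversable(sol, alpha, others=[(5.0, 0.1, 0.0, 5.0)]))  # False
```

This captures the “neighboring stars, not random ones” property: a tramline survives only where the two endpoint stars dominate the whole line between them.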
Well, humans are baby-eaters in an abstract sense. We wait until the population of grown-up babies gets so big that we are competing for diminishing resources, then we kill off the excess. Same thing from 30,000 ft.
I don’t think you got the main point of the Babyeaters: they don’t eat their babies (or let them die) because they don’t know how to do otherwise (for lack of technological skill, or because of a suboptimal economy); they do it because they consider it to be the most ethical thing to do.
No sane human will tell you that killing children, or letting them starve, is an ethical thing to do. Some will tell you it’s a horrible thing that must be prevented and will give to charity to avoid it; some will tell you it’s sad but we can’t do much; some may even tell you it’s sad but required given our current technological level. But none will tell you it’s a good thing, and that we shouldn’t prevent it even if we had a sure way to do so. No sane human would actually oppose an alien race offering to save all starving human children from death.
Perhaps not. And of course, in this story there are no starving human children. But by the end of the story, we confirm that there are still suffering human children. Would any sane human oppose an alien race offering to save all human children from suffering? How about one whose job is ensuring sanity, in a story written by the eminently sane Eliezer Yudkowsky?
Which makes me suspect that some sane human somewhere would also oppose an alien race offering to save human children from death. It’s not that far different.
Agreed
Did you go through and downvote everything I posted? I find it interesting that I had positive karma until I responded to your post with a disagreement. Poor form if you did.
No, I didn’t. I would never do such a thing. And if you look at your comments, you’ve made 8 comments, and your karma is −10, so it just couldn’t be possible for me to make you reach −10 if you were in the positive. Do your maths before accusing people ;)
This would not refute xxd’s claim in the way you seem to be declaring.
And I don’t think you got my abstraction. I get perfectly well that the baby eating aliens consider it to be ethical.
And there are indeed plenty of sane humans who vote for killing unborn babies by rationalizing that they are not babies. I’m OK with that because, taking emotion out of the picture, we don’t want to outbreed our food supplies.
That leads, however, to the uncomfortable logical position of equating murder for resources with killing off excess babies in order to limit the population. They are exactly the same thing from a logical standpoint.
And arguing that no sane human would oppose an alien race offering to save all starving human children from death presupposes that there is obviously enough food for all of them. It’s a straw-man argument.
The real argument is this: there isn’t enough food. Do we kill some of the children (or grown-up children) so that the remaining food supplies stretch for the smaller population, or do we let the children starve?
That situation has come up over and over again in history, unless you are wilfully ignorant of the past.
For all the votes about the legality of such things, I don’t recall any votes for or against killing any unborn babies.
Abortion has nothing to do with murder. An embryo has no brain, no will, no feeling; it’s not a person. You just can’t compare that with the baby-eaters, who eat children who have feeling and will, who try to escape death, who beg for mercy, cry from fear, suffer, …
And abortion is usually not done for resource issues. In France, for example, a mother can use “accouchement sous X”, in which the baby is given to foster parents at birth and the biological parents’ identity is wiped from all records, except a very secure record that can only be opened for strong medical reasons. And the state does its best to encourage families to have children. Abortion is a right because you can’t force a woman to carry a pregnancy against her will, but it is definitely not an attempt to lower resource usage.
And once again you didn’t get the point of the Babyeaters. They have the ability to feed their children. They are a space-faring race with advanced technology (even we, with current technology, could feed billions). They have the possibility to genetically engineer themselves to produce fewer children. They have the possibility to use contraception. They don’t eat children because they consider it required to deal with limited resources—it’s not; they have countless other ways to handle that, and the humans offer to take care of it too. They do it because they consider it to be ethical in itself. It did emerge from a resource-scarcity issue, just as most of our ethics emerged from survival/reproductive-fitness reasons. But it converged towards a totally different ethical framework than the human one. Which is the point of the Babyeaters.
There is enough food for all starving human children. The existence of starving children has much more to do with corruption than with production.
This isn’t the Prisoner’s Dilemma, since there are three options: continue with deflectors down, put deflectors up, or attack. More importantly, putting up one’s deflectors does not hurt the opponent the way defection would in the Prisoner’s Dilemma. Also, the humans putting up their deflectors has the same effect on their safety as attacking, since they can destroy the other ship or block its attacks. As for the aliens: just because a large group made the mistake doesn’t mean it was reasonable. Look at the Nazis, the Russian Communists under Stalin, the Romans and their killings, the Aztecs, and the Salem witch trials. There are plenty of times when large groups do things that could easily be figured out to be wrong. Cultists and prostitutes working for pimps are some strong examples IMO. Instead of allowing others to live simply because they were part of a group, their ideas should be considered, and if their logic is sound enough, they should be allowed to live.
Well, it’s not that simple. The humans can still destroy the Babyeaters even with their own deflectors up, while the Babyeaters can’t destroy the humans once the humans’ deflectors are up. So it’s not true that “putting up one’s deflectors does not hurt the opponent”. Putting up the deflectors breaks the symmetry and puts the humans in a position of dominance, while not putting them up is a token of trust and keeps the situation symmetrical.
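A tiny outcome table, under the reading in the two comments above, may make the asymmetry plainer. Every entry is an assumption made for illustration; none of these payoffs is stated in the story, and the option labels are invented.

```python
# Illustrative outcomes (humans_survive, babyeaters_survive) for assumed
# combinations of choices. None of this comes from the story itself.
outcomes = {
    ("deflectors_down", "hold_fire"): (True,  True),   # mutual trust
    ("deflectors_down", "attack"):    (False, True),   # humans get snuffed
    ("deflectors_up",   "hold_fire"): (True,  True),   # humans now dominant
    ("deflectors_up",   "attack"):    (True,  True),   # the attack is blocked
    ("attack",          "hold_fire"): (True,  False),  # humans destroy them
}

# "deflectors_up" never lowers the Babyeaters' outcome relative to
# "deflectors_down" against the same alien choice—unlike defection in the
# Prisoner's Dilemma—but it does strip them of the power to retaliate,
# which is why raising deflectors reads as a claim of dominance.
for choice in ("deflectors_down", "deflectors_up"):
    print(choice, outcomes[(choice, "attack")])
```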
This is one of the most stimulating and well written things I’ve read in a while. It’s great.
Hi Hughdo, welcome to Less Wrong.